*** aaronsheffield has quit IRC | 00:02 | |
*** adriant has quit IRC | 00:11 | |
*** iokiwi has quit IRC | 00:11 | |
*** goldyfruit_ has joined #openstack-infra | 00:12 | |
donnyd | fungi: guess I will need to get a logo made up. I will get back to you on it | 00:18 |
*** igordc has quit IRC | 00:23 | |
*** markvoelker has quit IRC | 00:29 | |
*** markvoelker has joined #openstack-infra | 00:29 | |
clarkb | guilhermesp: root@23.253.245.79 | 00:33 |
*** Goneri has quit IRC | 00:37 | |
*** adriant has joined #openstack-infra | 00:42 | |
*** ricolin has joined #openstack-infra | 00:48 | |
*** gyee has quit IRC | 00:56 | |
*** happyhemant has quit IRC | 01:00 | |
*** markvoelker has quit IRC | 01:03 | |
*** markvoelker has joined #openstack-infra | 01:04 | |
cmurphy | prometheanfire: seems like the configparser update in https://review.opendev.org/680914 broke keystone's py27 tests https://zuul.opendev.org/t/openstack/build/a858551fdb4749e0aa25a29abcad3e73/log/job-output.txt there's no configparser 4.0.1 https://pypi.org/project/configparser/#history | 01:05 |
clarkb | cmurphy: they may have deleted that release | 01:06 |
cmurphy | must have | 01:06 |
cmurphy | they still have the git tag | 01:06 |
prometheanfire | cmurphy: https://github.com/jaraco/configparser/releases | 01:08 |
gmann | cmurphy: clarkb prometheanfire that is breaking all py27 jobs, maybe we should cap the version | 01:08 |
cmurphy | https://review.opendev.org/681630 | 01:09 |
prometheanfire | ya, if they unpublished it then we need to roll it back | 01:09 |
*** pcaruana has quit IRC | 01:09 | |
prometheanfire | I'm guessing because of https://github.com/jaraco/configparser/issues/45 | 01:09 |
cmurphy | ugh yeah https://github.com/jaraco/configparser/issues/45#issuecomment-530595951 | 01:10 |
prometheanfire | I'll leave it up to infra if they want to rush it but I +2+w'd it | 01:10 |
clarkb | I'll enqueue and promote that change now that it is approved | 01:10 |
clarkb | prometheanfire: jinx | 01:10 |
*** slaweq has joined #openstack-infra | 01:11 | |
prometheanfire | I'm just gonna leave this link there then | 01:11 |
prometheanfire | https://doughellmann.com/blog/2016/02/25/so-youve-released-a-broken-package-to-pypi-what-do-you-do-now/ | 01:11 |
cmurphy | lol | 01:12 |
prometheanfire | yep, unpublishing things is one of the things I hate most, makes me want to remove the dep from openstack to not have to deal with it :P | 01:12 |
clarkb | I also re-enqueued the nova fix that got kicked out due to this | 01:12 |
clarkb | If I was a smarter person I would've put it just behind the requirements fix but this is likely good enough for now | 01:13 |
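For anyone hitting the same failure locally before the upper-constraints revert (681630) lands, a minimal workaround sketch is to constrain pip away from the unpublished release; the extra constraints file name is arbitrary and the bound simply excludes the yanked 4.0.1:

```shell
# Sketch only: steer pip away from the unpublished configparser 4.0.1
# until the requirements revert merges. File name is arbitrary.
printf 'configparser<4.0.1\n' > /tmp/exclude-yanked.txt
pip install -c /tmp/exclude-yanked.txt -r requirements.txt
```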
cmurphy | clarkb: we're also still having timeout problems in keystone https://review.opendev.org/681621 if you're feeling generous | 01:14 |
clarkb | cmurphy: done | 01:14 |
cmurphy | tyvm | 01:15 |
*** slaweq has quit IRC | 01:16 | |
prometheanfire | I thought the timeout bump merged (for reqs at least) | 01:16 |
clarkb | this is a second bump | 01:17 |
gmann | prometheanfire: i am worried about the stestr version bump now. how did you test it (did all the other projects work fine)? the workaround in Tempest will fix it for Tempest, but every tox run uses stestr and we do not know which projects raise a skip exception from setUpClass | 01:17 |
prometheanfire | gmann: it's just our tempest job that's failing with it | 01:17 |
prometheanfire | all the other checks were passing | 01:18 |
gmann | ok. | 01:18 |
gmann | I will try to get Tempest workaround in by today. | 01:19 |
prometheanfire | thanks :D | 01:20 |
clarkb | neutron's functional job doesn't use speculative future states https://606e7073949a555d6ce7-a5a94d16cd1e63fdf610099df3afaf88.ssl.cf5.rackcdn.com/679813/2/gate/neutron-functional-python27/e20cb78/job-output.txt it failed on the configparser issue behind the change that fixes that | 01:30 |
*** roman_g has quit IRC | 01:31 | |
clarkb | (I'm not going to debug that, it's time for evening things. The neutron team should fix that though) | 01:32 |
*** mrda has joined #openstack-infra | 01:34 | |
openstackgerrit | Ian Wienand proposed zuul/zuul master: [wip] Test and expand documentation for executor-only jobs https://review.opendev.org/679184 | 01:37 |
*** dklyle has quit IRC | 01:42 | |
*** david-lyle has joined #openstack-infra | 01:42 | |
*** david-lyle has quit IRC | 01:46 | |
*** dklyle has joined #openstack-infra | 01:47 | |
*** dchen has quit IRC | 01:50 | |
*** jcoufal has quit IRC | 01:54 | |
guilhermesp | clarkb: thanks! | 01:56 |
fungi | i'm going to wager it directly retrieves the constraints file instead of using the copy zuul provides on the node's filesystem | 01:58 |
*** diablo_rojo has quit IRC | 02:02 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-tarball: introduce zuul_use_fetch_output https://review.opendev.org/681603 | 02:05 |
*** slaweq has joined #openstack-infra | 02:11 | |
*** slaweq has quit IRC | 02:16 | |
*** diablo_rojo has joined #openstack-infra | 02:17 | |
*** rlandy|bbl has quit IRC | 02:37 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-tarball: introduce zuul_use_fetch_output https://review.opendev.org/681603 | 02:40 |
*** exsdev has quit IRC | 02:45 | |
*** exsdev has joined #openstack-infra | 02:46 | |
*** bobh has joined #openstack-infra | 02:49 | |
*** tinwood has quit IRC | 02:50 | |
*** tinwood has joined #openstack-infra | 02:52 | |
*** xinranwang has joined #openstack-infra | 02:57 | |
*** diablo_rojo has quit IRC | 03:06 | |
*** bobh has quit IRC | 03:07 | |
*** ykarel|away has joined #openstack-infra | 03:13 | |
*** armax has quit IRC | 03:29 | |
openstackgerrit | Ian Wienand proposed zuul/zuul master: [wip] Test and expand documentation for executor-only jobs https://review.opendev.org/679184 | 03:32 |
*** Garyx_ has joined #openstack-infra | 03:33 | |
*** Garyx has quit IRC | 03:33 | |
*** dchen has joined #openstack-infra | 03:43 | |
*** KeithMnemonic has quit IRC | 04:04 | |
*** rh-jelabarre has quit IRC | 04:04 | |
*** ykarel|away is now known as ykarel | 04:06 | |
*** slaweq has joined #openstack-infra | 04:11 | |
*** slaweq has quit IRC | 04:16 | |
*** ricolin has quit IRC | 04:27 | |
*** dolpher has joined #openstack-infra | 04:30 | |
*** udesale has joined #openstack-infra | 04:31 | |
*** pcaruana has joined #openstack-infra | 04:32 | |
cmurphy | https://review.opendev.org/681630 needs to be promoted again :/ | 04:42 |
*** kjackal has joined #openstack-infra | 04:47 | |
*** markvoelker has quit IRC | 04:48 | |
*** pcaruana has quit IRC | 04:58 | |
*** pots has joined #openstack-infra | 05:10 | |
*** jaosorior has joined #openstack-infra | 05:20 | |
*** sshnaidm|off is now known as sshnaidm|afk | 05:25 | |
*** ykarel has quit IRC | 05:27 | |
*** georgk has quit IRC | 05:28 | |
*** fdegir has quit IRC | 05:28 | |
*** fdegir has joined #openstack-infra | 05:29 | |
*** georgk has joined #openstack-infra | 05:29 | |
*** ramishra has joined #openstack-infra | 05:31 | |
*** ccamacho has quit IRC | 05:33 | |
*** hamzy_ has quit IRC | 05:35 | |
openstackgerrit | Ian Wienand proposed zuul/zuul master: Discuss executor-only jobs, add unit-test https://review.opendev.org/679184 | 05:38 |
openstackgerrit | Ian Wienand proposed zuul/zuul master: Remove auto-add of localhost from unit test inventory https://review.opendev.org/681641 | 05:38 |
*** ykarel has joined #openstack-infra | 05:40 | |
*** raukadah is now known as chandankumar | 05:41 | |
ianw | cmurphy: i just merged it ... i don't think there's much point waiting for it and making the queues worse | 05:45 |
ianw | #status log force merged https://review.opendev.org/681630 "Revert configparser update" to avoid further breakage after it failed gate on its prior promoted run | 05:46 |
openstackstatus | ianw: finished logging | 05:46 |
prometheanfire | ianw: ack | 05:47 |
ianw | it had its chance :) | 05:47 |
prometheanfire | :D | 05:47 |
prometheanfire | blame upstream | 05:48 |
*** slaweq has joined #openstack-infra | 05:53 | |
*** kjackal has quit IRC | 05:58 | |
cmurphy | thanks ianw | 05:58 |
*** slaweq has quit IRC | 05:59 | |
*** tkajinam has quit IRC | 06:00 | |
*** markvoelker has joined #openstack-infra | 06:06 | |
*** tkajinam has joined #openstack-infra | 06:06 | |
*** pcaruana has joined #openstack-infra | 06:08 | |
*** markvoelker has quit IRC | 06:11 | |
*** pgaxatte has joined #openstack-infra | 06:11 | |
*** slaweq has joined #openstack-infra | 06:13 | |
*** jaicaa has quit IRC | 06:16 | |
*** jaicaa has joined #openstack-infra | 06:17 | |
*** pgaxatte has quit IRC | 06:18 | |
*** pgaxatte has joined #openstack-infra | 06:19 | |
AJaeger | ianw: saw your comments on the review. I suggest adding the structure you have in mind for publishing to the etherpad - and I'll mirror that in the jobs... | 06:31 |
*** dchen has quit IRC | 06:31 | |
AJaeger | ianw: left a comment... | 06:34 |
*** ccamacho has joined #openstack-infra | 06:35 | |
*** rpittau|afk is now known as rpittau | 06:36 | |
*** pcaruana has quit IRC | 06:41 | |
*** ricolin has joined #openstack-infra | 06:41 | |
AJaeger | ianw: I still see 681630 on http://zuul.opendev.org/t/openstack/status - why is Zuul not dequeuing it? | 06:46 |
*** pgaxatte has quit IRC | 06:52 | |
*** tkajinam_ has joined #openstack-infra | 06:53 | |
*** tkajinam has quit IRC | 06:56 | |
*** kjackal has joined #openstack-infra | 06:57 | |
*** ricolin has quit IRC | 06:59 | |
*** ralonsoh has joined #openstack-infra | 07:00 | |
*** soniya29 has joined #openstack-infra | 07:04 | |
*** spsurya has joined #openstack-infra | 07:07 | |
*** aedc has joined #openstack-infra | 07:09 | |
*** pcaruana has joined #openstack-infra | 07:11 | |
*** trident has quit IRC | 07:15 | |
*** ykarel is now known as ykarel|lunch | 07:15 | |
*** ricolin has joined #openstack-infra | 07:18 | |
*** ralonsoh has quit IRC | 07:23 | |
*** happyhemant has joined #openstack-infra | 07:23 | |
*** ralonsoh has joined #openstack-infra | 07:24 | |
*** ralonsoh has quit IRC | 07:24 | |
*** trident has joined #openstack-infra | 07:24 | |
*** ralonsoh has joined #openstack-infra | 07:24 | |
ianw | AJaeger: i think it will still go through the motions but it's just already merged | 07:25 |
ianw | AJaeger: yeah, thanks. the change has prompted me to think a bit harder; i can come up with a bit of a spec, we can discuss it | 07:26 |
*** aedc has quit IRC | 07:27 | |
*** aedc has joined #openstack-infra | 07:28 | |
*** trident has quit IRC | 07:28 | |
AJaeger | ianw: ok, looking forward to it. | 07:29 |
*** ralonsoh has quit IRC | 07:32 | |
*** ralonsoh has joined #openstack-infra | 07:32 | |
openstackgerrit | Sorin Sbarnea proposed opendev/bindep master: Improve tox.ini setup https://review.opendev.org/605613 | 07:33 |
openstackgerrit | Sorin Sbarnea proposed opendev/bindep master: Improve tox.ini setup https://review.opendev.org/605613 | 07:33 |
*** pgaxatte has joined #openstack-infra | 07:35 | |
*** soniya29 has quit IRC | 07:36 | |
*** xenos76 has joined #openstack-infra | 07:37 | |
*** trident has joined #openstack-infra | 07:37 | |
*** hwoarang has quit IRC | 07:40 | |
*** dolpher has quit IRC | 07:41 | |
*** hwoarang has joined #openstack-infra | 07:42 | |
*** hamzy has joined #openstack-infra | 07:42 | |
*** jpena|off is now known as jpena | 07:42 | |
*** exsdev has quit IRC | 07:44 | |
*** exsdev0 has joined #openstack-infra | 07:44 | |
*** exsdev0 is now known as exsdev | 07:44 | |
*** xinranwang has quit IRC | 07:56 | |
*** e0ne has joined #openstack-infra | 07:57 | |
*** electrofelix has joined #openstack-infra | 07:57 | |
*** soniya29 has joined #openstack-infra | 08:01 | |
*** e0ne has quit IRC | 08:08 | |
*** tkajinam_ has quit IRC | 08:09 | |
*** ykarel|lunch is now known as ykarel | 08:09 | |
*** kopecmartin|off is now known as kopecmartin | 08:10 | |
*** ykarel is now known as ykarel|meeting | 08:14 | |
*** pkopec has joined #openstack-infra | 08:15 | |
*** whoami-rajat has joined #openstack-infra | 08:15 | |
*** yolanda has quit IRC | 08:16 | |
*** markvoelker has joined #openstack-infra | 08:16 | |
*** threestrands has quit IRC | 08:16 | |
*** ykarel has joined #openstack-infra | 08:18 | |
*** ykarel|meeting has quit IRC | 08:19 | |
*** markvoelker has quit IRC | 08:20 | |
*** derekh has joined #openstack-infra | 08:28 | |
*** yolanda has joined #openstack-infra | 08:39 | |
*** jaosorior has quit IRC | 08:45 | |
*** piotrowskim has joined #openstack-infra | 08:50 | |
*** gfidente has joined #openstack-infra | 08:53 | |
*** priteau has joined #openstack-infra | 08:56 | |
*** kjackal has quit IRC | 08:57 | |
*** kjackal has joined #openstack-infra | 08:57 | |
*** xenos76 has quit IRC | 08:59 | |
*** ociuhandu has joined #openstack-infra | 09:00 | |
*** e0ne has joined #openstack-infra | 09:03 | |
*** e0ne has quit IRC | 09:03 | |
*** udesale has quit IRC | 09:04 | |
*** ociuhandu has quit IRC | 09:04 | |
*** e0ne has joined #openstack-infra | 09:07 | |
*** udesale has joined #openstack-infra | 09:07 | |
*** ralonsoh has quit IRC | 09:13 | |
*** FlorianFa has quit IRC | 09:13 | |
*** e0ne has quit IRC | 09:15 | |
*** lpetrut has joined #openstack-infra | 09:16 | |
*** rcernin has quit IRC | 09:16 | |
*** sshnaidm|afk is now known as sshnaidm|rover | 09:18 | |
*** ociuhandu has joined #openstack-infra | 09:19 | |
*** e0ne has joined #openstack-infra | 09:19 | |
*** ociuhandu has quit IRC | 09:21 | |
*** e0ne has quit IRC | 09:21 | |
*** ociuhandu has joined #openstack-infra | 09:21 | |
*** lpetrut has quit IRC | 09:23 | |
*** prometheanfire has quit IRC | 09:24 | |
*** lpetrut has joined #openstack-infra | 09:24 | |
*** pgaxatte has quit IRC | 09:26 | |
*** prometheanfire has joined #openstack-infra | 09:26 | |
*** ricolin has quit IRC | 09:37 | |
*** e0ne has joined #openstack-infra | 09:39 | |
*** ykarel_ has joined #openstack-infra | 09:39 | |
*** ykarel has quit IRC | 09:40 | |
*** ralonsoh has joined #openstack-infra | 09:46 | |
*** udesale has quit IRC | 09:49 | |
*** udesale has joined #openstack-infra | 09:50 | |
*** efried has joined #openstack-infra | 09:52 | |
frickler | donnyd: infra-root: I just saw three gate failures with mirror issues in FN, maybe the current load is too high? | 09:53 |
*** kjackal has quit IRC | 09:54 | |
*** Tengu has quit IRC | 10:03 | |
*** Tengu has joined #openstack-infra | 10:04 | |
*** gfidente has quit IRC | 10:07 | |
frickler | looks like some kind of major issue since 08:30 according to https://grafana.fortnebula.com/d/9MMqh8HWk/openstack-utilization?orgId=2&refresh=30s , I think I'll disable that cloud until donnyd is awake again | 10:07 |
cgoncalves | hey infra team! we in octavia are seeing high failure rates in CI due to connectivity issues with apt and pip in fortnebula | 10:08 |
*** dtantsur|afk is now known as dtantsur | 10:08 | |
cgoncalves | this is blocking us from merging bug fixes and features. feature freeze is this week | 10:08 |
cgoncalves | e.g. https://ab6aa80517d8c71f588c-e65fbda5c4a8fc14eb81d398bd7b0a80.ssl.cf2.rackcdn.com/681144/8/check/openstack-tox-functional/0ac11dd/job-output.txt | 10:09 |
openstackgerrit | Jens Harbott (frickler) proposed openstack/project-config master: Disable fortnebula cloud https://review.opendev.org/681697 | 10:10 |
frickler | cgoncalves: yeah, I think I'll disable that cloud until it can be fixed ^^ | 10:10 |
cgoncalves | ^ this might do it :) | 10:10 |
cgoncalves | frickler, thanks! | 10:10 |
*** aedc has quit IRC | 10:13 | |
*** lpetrut has quit IRC | 10:17 | |
*** lpetrut has joined #openstack-infra | 10:19 | |
*** noama has joined #openstack-infra | 10:22 | |
*** pgaxatte has joined #openstack-infra | 10:23 | |
*** ykarel_ is now known as ykarel | 10:24 | |
*** markvoelker has joined #openstack-infra | 10:26 | |
*** panda is now known as panda|ruck | 10:28 | |
*** kjackal has joined #openstack-infra | 10:29 | |
openstackgerrit | Merged openstack/project-config master: Disable fortnebula cloud https://review.opendev.org/681697 | 10:32 |
*** markvoelker has quit IRC | 10:32 | |
*** jaosorior has joined #openstack-infra | 10:32 | |
*** lucasagomes has joined #openstack-infra | 10:58 | |
*** aedc has joined #openstack-infra | 10:58 | |
*** ykarel is now known as ykarel|afk | 10:59 | |
*** gfidente has joined #openstack-infra | 11:01 | |
*** ykarel|afk is now known as ykarel | 11:08 | |
*** lucasagomes has quit IRC | 11:12 | |
*** lucasagomes has joined #openstack-infra | 11:25 | |
donnyd | are there any other jobs with this issue? | 11:25 |
*** lucasagomes has quit IRC | 11:26 | |
*** lucasagomes has joined #openstack-infra | 11:27 | |
*** xenos76 has joined #openstack-infra | 11:30 | |
donnyd | frickler: I don't see any issue on any of the provider dashboards | 11:30 |
donnyd | but I surely want to see jobs get through the gate | 11:31 |
*** jaosorior has quit IRC | 11:31 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-tarball: introduce zuul_use_fetch_output https://review.opendev.org/681603 | 11:31 |
*** roman_g has joined #openstack-infra | 11:32 | |
frickler | donnyd: other samples: https://aa7391dc0053b2e32f36-449df4e845769ef2b91c503daa7699ef.ssl.cf1.rackcdn.com/680354/2/gate/cloudkitty-tempest-full-ipv6-only/12b5704/job-output.txt https://storage.gra1.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_50a/681498/1/gate/neutron-grenade/50ae937/logs/grenade.sh.txt.gz | 11:35 |
donnyd | yea thats no good | 11:36 |
*** jpena is now known as jpena|lunch | 11:37 | |
donnyd | From the logs it doesn't look like a dns issue, it looks like the mirror is just being overloaded | 11:39 |
donnyd | but i don't have access to the mirrors logs | 11:39 |
donnyd | I was going to replace my edge router with a much faster one this weekend when the load calmed down anyways. Maybe this is a good time to get it done. | 11:40 |
*** pots has quit IRC | 11:41 | |
*** pots has joined #openstack-infra | 11:42 | |
donnyd | http://logstash.openstack.org/#/dashboard/file/logstash.json?query=node_provider:%5C%22fortnebula-regionone%5C%22%20AND%20filename:%5C%22job-output.txt%5C%22%20AND%20message:%5C%22RUN%20END%20RESULT_TIMED_OUT%5C%22&from=3h | 11:42 |
donnyd | yea there are a bunch of them | 11:43 |
*** ykarel is now known as ykarel|afk | 11:45 | |
donnyd | frickler: can you look at the logs from the FN mirror | 11:49 |
frickler | donnyd: that's an extract from the mirror error log when the issue seems to have started, goes on like that until 11:27 http://paste.openstack.org/show/775408/ | 11:49 |
donnyd | I am going to wait for the remaining jobs to finish and i will replace the router. Looking at the logs and metrics I don't see any real reason for it.. | 11:51 |
*** elod has quit IRC | 11:51 | |
*** elod has joined #openstack-infra | 11:52 | |
*** whoami-rajat has quit IRC | 11:54 | |
frickler | donnyd: is that the same router that does the nat between the instances and the mirror? seems suboptimal that they connect via natted v4 instead of v6 | 11:54 |
donnyd | Yea that tenant is special because I don't have public v6 | 11:55 |
donnyd | s/v4 | 11:55 |
donnyd | So it hits my edge and then comes back in for v4 related things | 11:55 |
donnyd | It's a 10G router and the load on it is well below that... however I already have a much more powerful one sitting in front of me. Trying to get everything up to 40G | 11:56 |
donnyd | There are also other things I can do for it, but the way we have the mirror designed this is how it works in all the other clouds too | 11:57 |
donnyd | all traffic leaves, hits a provider router on the outside of the tenant and then comes back down | 11:57 |
donnyd | If you look at the graphs you can see the load on the edge router is pretty low | 11:59 |
donnyd | and the network node where traffic leaves is also not even close to hitting a limit | 12:00 |
donnyd | With all that said there is clearly an issue | 12:00 |
*** markvoelker has joined #openstack-infra | 12:01 | |
frickler | donnyd: it may be that the nat is running out of ports to nat to. can you check nat table size on the router? although it might have recovered by now. | 12:01 |
*** soniya29 has quit IRC | 12:02 | |
fungi | if connections from the nodes to the mirror are funneled through a single layer 4 overload snat/pat then you may want to double-check that the pool of available ports it's using to map connections isn't getting exhausted | 12:02 |
donnyd | frickler: Oh i adjusted that a couple days back | 12:02 |
fungi | er, what frickler said | 12:02 |
donnyd | I do have another option and that is to use openstack routers | 12:02 |
fungi | usually those will incorporate a cooldown period for each port to handle stray/duplicate rst and fin packets, which can be on the order of minutes by default | 12:03 |
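A rough way to check that from the router side, assuming the edge device is Linux-based with netfilter connection tracking (the mirror address is a placeholder):

```shell
# Compare the live translation count against the table limit
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
# Count entries currently mapped toward the mirror's v4 address
conntrack -L 2>/dev/null | grep -c '<mirror_v4_addr>'
```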
donnyd | I can keep it off the edge by connecting the zuul tenant directly to the same public network as the mirror tenant | 12:03 |
donnyd | or instead of NAT I can also route traffic | 12:04 |
fungi | if necessary we can add a static route for the node network on the mirror server, i expect | 12:04 |
fungi | or... why aren't they talking to the mirror via ipv6? | 12:05 |
fungi | i've likely already forgotten | 12:05 |
donnyd | yea that too | 12:05 |
donnyd | they should just resolve the v6 addr and then it's direct | 12:05 |
donnyd | no NAT | 12:06 |
donnyd | I can fix that by providing a different DNS... but I know we are wary of using non public domain DNS servers for various reasons | 12:06 |
donnyd | i think it uses google right now | 12:07 |
donnyd | but yea all the jobs seem to be resolving the mirror to a v4 addr | 12:07 |
fungi | ahh, right, we stopped using ipv6 per https://review.opendev.org/675156 because of excessive retransmits | 12:08 |
fungi | if there's a chance that's gotten better we could just revert it | 12:08 |
donnyd | ahhhhh... yea that makes more sense | 12:08 |
donnyd | well I need to replace my edge router anyways | 12:08 |
donnyd | I was hoping for a time that wouldn't impact the jobs | 12:09 |
openstackgerrit | Clint 'SpamapS' Byrum proposed zuul/zuul-jobs master: intercept-job -- self-service SSH access https://review.opendev.org/679306 | 12:09 |
*** rh-jelabarre has joined #openstack-infra | 12:10 | |
donnyd | but fungi I think I may know the reason for the retransmits, I need to provide a lower MTU for v6 to the tenant | 12:10 |
fungi | oh, interesting | 12:11 |
fungi | because it's tunneled? | 12:11 |
donnyd | needs to be 1450 | 12:11 |
donnyd | yep | 12:11 |
donnyd | At least that is what I am thinking | 12:11 |
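If the tunnel-overhead theory is right, it could be verified from a test node with something like the sketch below; the mirror hostname is illustrative, and the neutron command assumes the tenant network MTU is managed there:

```shell
# 1402-byte payload + 8-byte ICMPv6 header + 40-byte IPv6 header = 1450 bytes;
# with do-not-fragment set this should succeed at -s 1402 and fail at -s 1403.
ping -6 -M do -c 3 -s 1402 mirror01.regionone.fortnebula.opendev.org
# Lowering the tenant network MTU on the cloud side, if done via neutron:
openstack network set --mtu 1450 <tenant-network>
```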
fungi | makes sense. anyway, i need to disappear to knock out some more storm cleanup before it gets too hot out, but will be back around soon | 12:11 |
donnyd | thanks fungi | 12:11 |
*** rh-jelabarre has quit IRC | 12:12 | |
fungi | yw! | 12:12 |
*** rh-jelabarre has joined #openstack-infra | 12:12 | |
*** jamesmcarthur has joined #openstack-infra | 12:13 | |
*** goldyfruit_ has quit IRC | 12:16 | |
openstackgerrit | Clint 'SpamapS' Byrum proposed zuul/zuul-jobs master: Add upload-logs-s3 role to send logs to S3 https://review.opendev.org/681730 | 12:17 |
*** ykarel|afk is now known as ykarel | 12:23 | |
donnyd | rebooting the mirror to pick up the new mtu | 12:23 |
*** jamesmcarthur has quit IRC | 12:24 | |
*** owalsh is now known as owalsh_brb | 12:27 | |
donnyd | Ok all the jobs are done, replacing edge router now | 12:28 |
openstackgerrit | Andreas Jaeger proposed openstack/project-config master: Revert "Disable fortnebula cloud" https://review.opendev.org/681731 | 12:29 |
AJaeger | thanks, donnyd. Let me prepare a change once you're ready ^ | 12:30 |
donnyd | kk | 12:30 |
donnyd | thanks AJaeger | 12:30 |
*** iurygregory has joined #openstack-infra | 12:30 | |
AJaeger | donnyd: Please tell us when you feel that everything is fine and we can merge it. | 12:31 |
*** rlandy has joined #openstack-infra | 12:31 | |
*** elod has quit IRC | 12:32 | |
*** elod has joined #openstack-infra | 12:33 | |
*** eharney has joined #openstack-infra | 12:36 | |
*** soniya29 has joined #openstack-infra | 12:36 | |
*** jpena|lunch is now known as jpena | 12:38 | |
*** derekh has quit IRC | 12:42 | |
*** mriedem has joined #openstack-infra | 12:43 | |
*** owalsh_brb is now known as owalsh | 12:43 | |
*** e0ne has quit IRC | 12:44 | |
*** e0ne has joined #openstack-infra | 12:44 | |
*** larainema has quit IRC | 12:45 | |
*** jamesmcarthur has joined #openstack-infra | 12:48 | |
*** ociuhandu has quit IRC | 12:50 | |
*** soniya29 has quit IRC | 12:50 | |
frickler | corvus: fungi: regarding daily digests on lists.o.o, I have now manually tested running the senddigests cron with MAILMAN_SITE_DIR being set properly and it generated a release-announce digest as expected | 12:53 |
*** mriedem has quit IRC | 12:54 | |
fungi | AJaeger: donnyd: should we then also revert https://review.opendev.org/675156 now that the new mtu is in place? | 12:54 |
frickler | I have manually created /etc/cron.d/mailman-senddigest-sites that should take over starting tomorrow. a full solution would add creating that cron file to https://opendev.org/opendev/puppet-mailman/src/branch/master/manifests/site.pp but I don't know enough puppet for that, in particular staggering the start times | 12:54 |
*** soniya29 has joined #openstack-infra | 12:54 | |
frickler | #status log generated /etc/cron.d/mailman-senddigest-sites on lists.o.o in order to re-enable daily digests being sent | 12:55 |
openstackstatus | frickler: finished logging | 12:55 |
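As a rough illustration only, such a cron file might look like the sketch below; the Debian Mailman 2 cron path, the list user, the per-site directories under /srv/mailman and the times are all assumptions, not copied from the server:

```shell
# /etc/cron.d/mailman-senddigest-sites (hypothetical sketch)
# Stagger start times so the sites do not all send digests at once.
0  12 * * * list MAILMAN_SITE_DIR=/srv/mailman/openstack /usr/lib/mailman/cron/senddigests
10 12 * * * list MAILMAN_SITE_DIR=/srv/mailman/opendev   /usr/lib/mailman/cron/senddigests
```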
*** derekh has joined #openstack-infra | 12:56 | |
*** ociuhandu has joined #openstack-infra | 12:58 | |
*** derekh has quit IRC | 12:59 | |
*** mriedem has joined #openstack-infra | 13:00 | |
*** derekh has joined #openstack-infra | 13:00 | |
donnyd | fungi I'm thinking it wouldn't be a bad idea and we can watch to see if it's still an issue | 13:04 |
*** jcoufal has joined #openstack-infra | 13:09 | |
*** Goneri has joined #openstack-infra | 13:14 | |
icey | I'm seeing a lot of POST_FAILUREs recently | 13:15 |
openstackgerrit | Matt Riedemann proposed opendev/elastic-recheck master: Add query for configparser 4.0.1 missing bug 1843715 https://review.opendev.org/681741 | 13:18 |
openstack | bug 1843715 in OpenStack-Gate "CI jobs failing Sept 11 due to "ERROR: No matching distribution found for configparser===4.0.1"" [Undecided,New] https://launchpad.net/bugs/1843715 | 13:18 |
*** aaronsheffield has joined #openstack-infra | 13:18 | |
*** dolpher has joined #openstack-infra | 13:24 | |
*** njohnston has joined #openstack-infra | 13:27 | |
*** pcaruana has quit IRC | 13:30 | |
*** aedc has quit IRC | 13:31 | |
*** aedc has joined #openstack-infra | 13:32 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-tarball: introduce zuul_use_fetch_output https://review.opendev.org/681603 | 13:32 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-output: introduce zuul_use_fetch_output https://review.opendev.org/681748 | 13:32 |
*** efried is now known as efried_afk | 13:36 | |
donnyd | icey: do you have links for a job? | 13:37 |
donnyd | fungi: AJaeger it should be back up | 13:37 |
icey | donnyd: https://review.opendev.org/669617 https://review.opendev.org/678252 | 13:37 |
icey | both hit it in the last few minutes | 13:37 |
donnyd | icey: it's likely because FN swift logs were still in operation and I just replaced the edge rtr | 13:38 |
*** aedc has quit IRC | 13:38 | |
donnyd | it's back up now | 13:38 |
icey | thanks donnyd - will recheck :) | 13:39 |
fungi | ahh, oops, i should have thought to suggest disabling log uploads to there temporarily | 13:39 |
*** aedc has joined #openstack-infra | 13:39 | |
donnyd | We should be good to go now | 13:39 |
*** ricolin has joined #openstack-infra | 13:39 | |
*** markvoelker has quit IRC | 13:40 | |
*** ociuhandu has quit IRC | 13:41 | |
*** markvoelker has joined #openstack-infra | 13:42 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-tarball: introduce zuul_use_fetch_output https://review.opendev.org/681603 | 13:44 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-output: introduce zuul_use_fetch_output https://review.opendev.org/681748 | 13:44 |
openstackgerrit | Merged opendev/elastic-recheck master: Add query for configparser 4.0.1 missing bug 1843715 https://review.opendev.org/681741 | 13:45 |
openstack | bug 1843715 in OpenStack-Gate "CI jobs failing Sept 11 due to "ERROR: No matching distribution found for configparser===4.0.1"" [Undecided,Fix released] https://launchpad.net/bugs/1843715 | 13:45 |
donnyd | fungi: I do have one more option and that is to use an Openstack router for the zuul tenant | 13:47 |
*** happyhemant has quit IRC | 13:47 | |
donnyd | it would work the same way, the traffic would just be NAT by an Openstack router | 13:47 |
donnyd | it would eliminate the edge router entirely | 13:47 |
fungi | well, if ipv6 works then there wouldn't be any nat needed between the nodes and the mirror | 13:48 |
donnyd | and may be a really good option as the edge router is processing lots of rules, where the Openstack router would just need rules for that single tenant | 13:48 |
donnyd | and states | 13:48 |
donnyd | I agree | 13:48 |
donnyd | I will watch the logs for ipv6 on the edge rtr | 13:49 |
donnyd | that is when the DNS resolution is merged | 13:49 |
*** KeithMnemonic has joined #openstack-infra | 13:53 | |
*** goldyfruit_ has joined #openstack-infra | 13:53 | |
openstackgerrit | Thierry Carrez proposed opendev/system-config master: [opendev][gitea] Add a link to open changes https://review.opendev.org/681753 | 13:54 |
*** goldyfruit_ has quit IRC | 13:54 | |
*** tkajinam has joined #openstack-infra | 13:55 | |
*** goldyfruit has joined #openstack-infra | 13:56 | |
tkajinam | Hi. Can I ask a question about the usage of gerrit/zuul here? | 13:57 |
*** hamzy has quit IRC | 13:58 | |
*** ykarel is now known as ykarel|afk | 13:59 | |
*** ociuhandu has joined #openstack-infra | 14:00 | |
*** whoami-rajat has joined #openstack-infra | 14:01 | |
*** jbadiapa has quit IRC | 14:04 | |
*** jbadiapa has joined #openstack-infra | 14:07 | |
*** pcaruana has joined #openstack-infra | 14:08 | |
*** goldyfruit_ has joined #openstack-infra | 14:09 | |
*** aedc has quit IRC | 14:09 | |
*** aedc has joined #openstack-infra | 14:10 | |
*** goldyfruit has quit IRC | 14:12 | |
*** aedc has quit IRC | 14:12 | |
*** aedc has joined #openstack-infra | 14:13 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-output: introduce zuul_use_fetch_output https://review.opendev.org/681748 | 14:14 |
*** soniya29 has quit IRC | 14:20 | |
smcginnis | tkajinam: There are likely folks here that can answer, but there is also a #zuul channel if it is not related to OpenStack/OpenDev usage of Zuul. | 14:21 |
tkajinam | smcginnis, thanks | 14:22 |
tkajinam | I'm currently looking for a way to retry the gate job for a patch | 14:23 |
*** mriedem has quit IRC | 14:23 | |
smcginnis | tkajinam: That is done by leaving a comment on the patch that starts with "recheck". | 14:23 |
tkajinam | I know that putting "recheck" on the gerrit change can work here, but it also triggers the check jobs | 14:23 |
tkajinam | I remember we had something like reverify to rerun only the gate jobs. Is it still valid? | 14:24 |
smcginnis | tkajinam: We used to have a way to only retrigger gate jobs, but that led to some issues. Now there is only recheck and it needs to go through check and gate again. | 14:24 |
donnyd | ok I am pretty happy, I think we can go ahead and put FN back online | 14:24 |
smcginnis | A little more overhead, but it ensures we are not missing issues. | 14:24 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-output: introduce zuul_use_fetch_output https://review.opendev.org/681748 | 14:25 |
tkajinam | smcginnis, Understood. Thank you for your confirmation! | 14:25 |
smcginnis | No problem | 14:25 |
*** hamzy has joined #openstack-infra | 14:27 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: zuul-test: update jenkins-job-builder location https://review.opendev.org/681764 | 14:30 |
*** mriedem has joined #openstack-infra | 14:32 | |
dtantsur | smcginnis: a huge overhead, you meant? :) | 14:33 |
dtantsur | I really miss the reverify button.. | 14:33 |
* dtantsur issues the 8th recheck of a critical gate-fixing patch | 14:33 | |
dtantsur | another feature request would be a fail-fast mode | 14:34 |
smcginnis | dtantsur: Fail fast: abandon the patch, then restore it. ;) | 14:34 |
dtantsur | smcginnis: that's what I'm about to do, yes | 14:35 |
pabelanger | dtantsur: zuul supports it, I don't believe there is a want to enable it | 14:35 |
dtantsur | sigh | 14:35 |
smcginnis | dtantsur: I do miss reverify as well though. Especially when the gate gets so backed up during these milestone times. | 14:35 |
dtantsur | sometimes I want to have fewer voting CI jobs.. | 14:35 |
dtantsur | anywa | 14:35 |
dtantsur | is anybody else observing POST_FAILUREs in unit tests? we're seeing them a lot. | 14:35 |
pabelanger | dtantsur: if critical, you can also discuss with infra-root to enqueue into gate directly. But usually try to avoid that | 14:35 |
dtantsur | we cannot merge anything for days. and our jobs keep using a lot of disk space on testing nodes. | 14:36 |
pabelanger | POST_FAILURE usually means issue with upload of logs, maybe swift related | 14:36 |
pabelanger | dtantsur: I think clarkb was looking into that | 14:36 |
dtantsur | pabelanger: yep, we have a temporary fix for the problem clarkb discovered, but we cannot merge it | 14:37 |
pabelanger | dtantsur: have link? | 14:37 |
dtantsur | pabelanger: https://review.opendev.org/#/c/680652/ | 14:37 |
clarkb | clean check exists to help prevent flaky code from landing | 14:37 |
pabelanger | it looks like maybe swift issue again | 14:38 |
clarkb | pabelanger: I think it was mentioned that FN swift was off temporarily? | 14:38 |
frickler | pabelanger: we had a short issue with fn swift when donnyd changed the router | 14:38 |
pabelanger | kk, might have missed that | 14:38 |
pabelanger | I'm going to spend some time working on retries for upload-logs-swift | 14:39 |
frickler | sadly we missed turning off that swift site, only nodepool was/is off | 14:39 |
clarkb | dtantsur: but as pabelanger says we can directly enqueue gate fixes to try and cut down on that flakiness | 14:39 |
clarkb | dtantsur: I think the configparser error threw everything into a mess last night though | 14:39 |
dtantsur | yeah | 14:39 |
dtantsur | clarkb: it would be great to try it with this patch. with Murphy's laws in full effect, we've hit all sorts of transient problems with it. | 14:40 |
clarkb | I'm not caught up yet but if the configparser fix isn't landed yet we need to start there | 14:41 |
clarkb | then keystone and nova also had bug fixes we should enqueue if they haven't landed | 14:41 |
clarkb | and neutron's functional job doesn't test speculative future states | 14:41 |
clarkb | slaweq: ^ fyi that needs fixing | 14:42 |
* dtantsur loves release time | 14:42 | |
*** aedc has quit IRC | 14:43 | |
slaweq | clarkb: sorry but I don't understand what "speculative future states" means | 14:43 |
fungi | dtantsur: and to clarify, some of our zuul tenants do indeed enqueue directly into their gate pipeline (and cancel/abort any pending or running check pipeline jobs corresponding to the change) immediately upon change approval. the openstack tenant is not configured for that behavior because we observed a propensity for buggy changes to get approved into shared gate queues with projects running very | 14:43 |
fungi | lengthy jobs, which caused significant additional disruption and delays merging changes | 14:43 |
dtantsur | yeah, I understand that | 14:44 |
clarkb | slaweq: last night I promoted the configparser change to the head of the gate. Then the neutron functional python27 change next in the queue failed because it used master requirements and not the change ahead of it in the gate | 14:44 |
fungi | slaweq: neutron's functional job is not using (at least) a copy of the upper-constraints.txt file provided by zuul and is probably independently retrieving it from a git remote, resulting in it not being tested with the state of that file provided by changes ahead of it in a dependent pipeline (would also break use of depends-on to such requirements repo changes) | 14:45 |
clarkb | slaweq: the expectation is that jobs will use the unmerged zuul future state and test that; otherwise our integrated gate testing isn't valid | 14:45 |
dtantsur | I'd just like to have an option for situations like this | 14:45 |
*** exsdev has quit IRC | 14:45 | |
slaweq | clarkb: fungi: ok, I think I understand now | 14:46 |
*** exsdev0 has joined #openstack-infra | 14:46 | |
*** exsdev0 is now known as exsdev | 14:46 | |
fungi | dtantsur: absolutely, the option is to ask us to enqueue changes into the gate skipping check results and, if especially critical, promote them to the front so all changes already in the gate are not impacted by whatever issue those changes fix | 14:46 |
slaweq | clarkb: fungi: I will take a look into that and will get back to you if I have any questions | 14:46 |
*** armax has joined #openstack-infra | 14:47 | |
clarkb | slaweq: thank you and happy to answer questions | 14:47 |
dtantsur | fungi: got it. so could you enqueue https://review.opendev.org/#/c/680652/ into the gate once it's clear that we're no longer affected by the configparse thing? | 14:47 |
*** ykarel|afk is now known as ykarel|away | 14:48 | |
fungi | i think the configparser blacklist change landed in requirements hours ago, but i'll double-check | 14:48 |
*** hamzy has quit IRC | 14:48 | |
frickler | ianw force-merged it | 14:48 |
clarkb | ya it's in git history now | 14:48 |
*** hamzy has joined #openstack-infra | 14:49 | |
frickler | https://review.opendev.org/681630 | 14:49 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-output: introduce zuul_use_fetch_output https://review.opendev.org/681748 | 14:50 |
clarkb | I'm impressed that the upper constraints update merged so easily and quickly | 14:52 |
fungi | okay, then i'll skip 680652 into the gate pipeline | 14:52 |
clarkb | then as soon as we have to revert it's all pain | 14:52 |
*** bdodd has joined #openstack-infra | 14:53 | |
*** ykarel|away has quit IRC | 14:53 | |
donnyd | Ok I think the proper way forward for FN is moving the tenant router to OpenStack so connections to the mirror don't have to go all the way up and then back down through FN. This should solve the v4-related issues | 14:54 |
fungi | --tenant=openstack --project=openstack/ironic --change=680652,3 is now in the gate | 14:54 |
fungi | looks like ironic has its own gate queue so there's no need to promote that change | 14:54 |
donnyd | the v6 related issues should be solved via the reduction in MTU for the tenant network | 14:54 |
fungi | it's the only ironic change in the queue currently | 14:55 |
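For reference, the client commands behind "enqueue and promote" look roughly like this when run on the scheduler; exact flags vary by Zuul version, so treat it as a sketch:

```shell
# Skip check results and put the approved change straight into the gate
zuul enqueue --tenant openstack --trigger gerrit --pipeline gate \
    --project openstack/ironic --change 680652,3
# If it also had to jump ahead of other changes in a shared queue
zuul promote --tenant openstack --pipeline gate --changes 680652,3
```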
donnyd | I think maybe we should separate out the jobs that currently depend on FN into a separate pool so the jobs that need to run there can and the generic jobs can remain disabled until everyone is happy | 14:57 |
*** pcaruana has quit IRC | 14:57 | |
clarkb | looks like both the keystone and nova fixes landed | 14:58 |
clarkb | mriedem: how important is that stein backport of the fix? | 14:59 |
dtantsur | thanks fungi. now fingers crossed for no more transient problems.. | 14:59 |
mriedem | clarkb: let me check logstash | 15:00 |
mriedem | clarkb: it's just nova functional tests so it won't break master by way of grenade, and logstash says it's only hitting on master | 15:01 |
mriedem | i did the backport b/c the test was obviously racy from inspection but for whatever reason something is tickling that now on master | 15:02 |
donnyd | Does anyone have any issues with turning the custom jobs for FN back on clarkb fungi AJaeger | 15:02 |
*** pgaxatte has quit IRC | 15:02 | |
openstackgerrit | Donny Davis proposed openstack/project-config master: Re-enable FN for custom jobs https://review.opendev.org/681773 | 15:02 |
clarkb | donnyd: I don't. Are we thinking of using that to show it's stable, then adding the other jobs back in? | 15:02 |
donnyd | clarkb: it will be difficult to troubleshoot the issues with no load because they don't appear until there is a full load | 15:03 |
donnyd | but I also don't think we should disable the jobs that are using FN from a custom resource perspective | 15:04 |
donnyd | makes sense to me, so hopefully everyone else sees it the same way | 15:04 |
clarkb | sure I'm just wondering what the plan is for enabling the other jobs | 15:04 |
*** weshay is now known as weshay_passport | 15:05 | |
*** armstrong has joined #openstack-infra | 15:05 | |
donnyd | clarkb: I would imagine slowly | 15:05 |
clarkb | gotcha so add nodes back in at a measured pace. wfm | 15:06 |
donnyd | And closely monitor the network to see if the retransmit issues come back | 15:06 |
*** tkajinam has quit IRC | 15:06 | |
donnyd | yea, if we keep the job count low to begin with I can monitor the network to see if there are issues and if so it won't hit a ton of jobs if the issue is still there | 15:07 |
donnyd | I think a good start is to re-enable the custom jobs so we can not hold back those who are depending on it | 15:07 |
*** noama has quit IRC | 15:08 | |
donnyd | so if we could abandon https://review.opendev.org/#/c/681731/ and merge https://review.opendev.org/#/c/681773/ we would be there | 15:08 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-output: introduce zuul_use_fetch_output https://review.opendev.org/681748 | 15:09 |
AJaeger | donnyd: I abandoned 681731, not sure about the "custom" 681773, want to have infra-root look at it. | 15:10 |
donnyd | sure that makes sense to me | 15:10 |
donnyd | If we need that name changed to something that makes more sense then I am happy to re-submit | 15:10 |
clarkb | I'm making tea then will review | 15:11 |
* AJaeger will cycle a bit now... | 15:11 | |
openstackgerrit | Merged zuul/zuul-jobs master: zuul-test: update jenkins-job-builder location https://review.opendev.org/681764 | 15:12 |
*** hamzy has quit IRC | 15:14 | |
*** hamzy has joined #openstack-infra | 15:14 | |
*** spsurya has quit IRC | 15:16 | |
*** njohnston has quit IRC | 15:19 | |
*** njohnston has joined #openstack-infra | 15:19 | |
*** guimaluf has quit IRC | 15:21 | |
clarkb | donnyd: AJaeger change lgtm | 15:22 |
*** gyee has joined #openstack-infra | 15:23 | |
*** jamesmcarthur has quit IRC | 15:23 | |
*** lpetrut has quit IRC | 15:25 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-output: introduce zuul_use_fetch_output https://review.opendev.org/681748 | 15:26 |
Shrews | donnyd: +A'd. the pool name can be any arbitrary string | 15:28 |
clarkb | slaweq: https://606e7073949a555d6ce7-a5a94d16cd1e63fdf610099df3afaf88.ssl.cf5.rackcdn.com/679813/2/gate/neutron-functional-python27/e20cb78/job-output.txt is the example job where I noticed functional wasn't testing with the proposed requirements update | 15:32 |
clarkb | (finally able to dig that out of my scrollback) | 15:32 |
clarkb | slaweq: looks like the openstack-tox-python27 job may suffer the same problem in neutron | 15:33 |
clarkb | oh hrm maybe not I think the one I found for openstack-tox-python27 was due to the requirements update failing and getting removed from the gate queue | 15:34 |
yoctozepto | hey infra, kolla's ipv6 work has issues with centos somehow not getting an ipv6 address, see: https://c66a5275cc7b20f05ed9-882fbdc9765e1a9e81809c5a97a4ce6a.ssl.cf1.rackcdn.com/681573/6/check/kolla-ansible-centos-source-ipv6/f107f39/job-output.txt | 15:36 |
clarkb | slaweq: I think we need to ensure that UPPER_CONSTRAINTS_FILE is set in your functional test https://opendev.org/openstack/neutron/src/branch/master/tox.ini#L17 | 15:36 |
yoctozepto | ubuntu gets it just fine | 15:36 |
*** jamesmcarthur has joined #openstack-infra | 15:36 | |
*** ykarel has joined #openstack-infra | 15:37 | |
clarkb | yoctozepto: not every cloud has a public ipv6 address | 15:37 |
yoctozepto | clarkb: should not nodepool public_ipv6 be empty then? | 15:37 |
clarkb | and in some clouds that depends on the base image because of whether or not that platform supports the ip configuration necessary in that cloud | 15:37 |
yoctozepto | ah, so we might be doomed with respect to centos then | 15:37 |
clarkb | I think in this case rackspace gives us an ipv6 address but we don't support rackspace ipv6 ip assignment on centos/fedora | 15:38 |
yoctozepto | sad news for us, any way to resolve this soon? ipv6 in centos works pretty well normally | 15:39 |
openstackgerrit | Merged openstack/project-config master: Re-enable FN for custom jobs https://review.opendev.org/681773 | 15:39 |
openstackgerrit | Merged zuul/zuul master: Fix: prevent usage of hashi_vault https://review.opendev.org/681041 | 15:39 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-output: introduce zuul_use_fetch_output https://review.opendev.org/681748 | 15:39 |
clarkb | yoctozepto: you could add support to glean for configuring ipv6 statically via config drive on centos | 15:39 |
clarkb | yoctozepto: out of curiosity why does it matter? | 15:39 |
clarkb | you can test the ipv6 goal without global ipv6 connectivity | 15:40 |
rosmaita | can someone approve this devstack-gate patch that has one +2: https://review.opendev.org/#/c/679610/1 ... thanks! | 15:40 |
clarkb | (in fact I impressed upon the TC that if they went with the goal they would have to make that possible because some clouds just don't do ipv6) | 15:40 |
clarkb | rosmaita: done. We really need to get that data out of d-g and into devstack proper | 15:40 |
rosmaita | clarkb: ty | 15:41 |
openstackgerrit | Sean McGinnis proposed openstack/project-config master: Tighten formatting for new branch reno page title https://review.opendev.org/681785 | 15:41 |
*** shachar has joined #openstack-infra | 15:42 | |
*** snapiri has quit IRC | 15:42 | |
*** rpittau is now known as rpittau|afk | 15:44 | |
clarkb | slaweq: the zuul tox role (which is used by the unittest tox jobs, but I'm not sure about the neutron-functional job as that parents to devstack-minimal instead) accepts this tox_constraints_file var to set that value https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/jobs.yaml#L83 | 15:44 |
yoctozepto | clarkb: can we do multinode with private IPv6 addressing - there seem to be none provided? | 15:45 |
clarkb | yoctozepto: yes, multinode jobs can put ipv6 addresses on the multinode network overlay. I believe octavia does this | 15:46 |
clarkb | johnsom: ^ can you confirm? I believe that was somethign you were working on at the last ptg | 15:46 |
fungi | donnyd: sorry for the delay, but sounds fine to me. been quite a busy thursday for me so far, apologies! | 15:46 |
openstackgerrit | Merged zuul/zuul master: Pass zuul_success to cleanup playbooks https://review.opendev.org/681552 | 15:47 |
yoctozepto | clarkb: you mean create our own overlay then? | 15:47 |
clarkb | yoctozepto: "yes" there exist zuul roles to do it for you already. I don't remember if ipv6 is already done on them or not | 15:47 |
clarkb | johnsom: ^ likely knows more about that than me at this point | 15:47 |
yoctozepto | clarkb: I see, thanks | 15:47 |
donnyd | np fungi | 15:47 |
clarkb | or rm_work ^ do you remember details on setting up ipv6 on the test overlay network for octavia? | 15:48 |
johnsom | clarkb It hasn't been stable for us frankly | 15:48 |
clarkb | johnsom: in what way? | 15:48 |
johnsom | Sometimes the IPv6 doesn't seem to pass between the nodes. | 15:48 |
clarkb | johnsom: does ipv4? (that could be a bug in the ovs implementation?) | 15:48 |
johnsom | Yes, the IPv4 seems stable. | 15:49 |
clarkb | interesting | 15:49 |
johnsom | I haven't had time to figure out *why* yet. This is why our multi-node jobs are still non-voting | 15:49 |
fungi | could it be an mtu issue then? maybe pmtud is working for v4 but something is not getting the path mtu correct for v6? | 15:49 |
clarkb | in any case we don't have ipv6 in every cloud so you'll have to figure this out regardless of centos not supporting static ipv6 in clouds that do support ipv6 | 15:50 |
yoctozepto | thanks guys for the insights | 15:50 |
clarkb | fungi: I don't think we can rely on pmtud here because some hops don't have ip addresses | 15:50 |
clarkb | fungi: but there is a good chance ipv6 mtus are at fault | 15:50 |
johnsom | yoctozepto https://github.com/openstack/octavia-tempest-plugin/blob/master/zuul.d/jobs.yaml if that is any help for you | 15:50 |
johnsom | Yeah, it seems to be in the neutron layer. Really I can't talk much to it as I haven't really dug into the problem yet. | 15:51 |
fungi | clarkb: ahh, right, no source to generate the need-to-frag message with | 15:51 |
clarkb | slaweq: I think you need to apply https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/jobs.yaml#L82-L83 to https://opendev.org/openstack/neutron/src/branch/master/playbooks/run_functional_job.yaml#L10 and also add openstack/requirements to the job's required projects list | 15:52 |
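The net effect being asked for is roughly the shell equivalent below: point the tox run's constraints at the requirements checkout Zuul already prepared on the node instead of fetching master remotely. The path follows the usual Zuul source layout and the tox env name is illustrative:

```shell
# Use the speculative requirements state checked out by Zuul on the node
UPPER_CONSTRAINTS_FILE=/home/zuul/src/opendev.org/openstack/requirements/upper-constraints.txt \
    tox -e functional
```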
*** hamzy_ has joined #openstack-infra | 15:52 | |
*** hamzy has quit IRC | 15:52 | |
yoctozepto | johnsom: no idea where to look in that file, though mgoddard has proposed vxlan for kolla in this patch: https://review.opendev.org/670690 | 15:53 |
yoctozepto | seems we are not using any zuul stuff in there | 15:53 |
yoctozepto | mgoddard to confirm | 15:53 |
*** kjackal has quit IRC | 15:53 | |
yoctozepto | maybe we can use that for ipv6, sure | 15:53 |
mgoddard | I didn't use the zuul roles to avoid dependency on OVS | 15:54 |
mgoddard | linux bridge worked fine | 15:54 |
*** kjackal has joined #openstack-infra | 15:54 | |
clarkb | mgoddard: fwiw we used OVS because linux bridge did not work with vxlan for the longest time and GRE traffic isn't passed in some clouds | 15:54 |
clarkb | if linux bridge + vxlan works where you need it then it should be fine | 15:54 |
donnyd | yoctozepto: if you can build a job that can use one of the custom FN labels then you can find out | 15:54 |
clarkb | the original implementation of all of this was linux bridge + GRE as a lowest common denominator for linux support | 15:55 |
clarkb | then we discovered some clouds block gre traffic even if you tell security groups not to | 15:55 |
mgoddard | also went for a simpler topology with one shared overlay rather than the bridge plus tunnel approach in zuul | 15:55 |
clarkb | mgoddard: doesn't that require multicast? | 15:55 |
clarkb | mgoddard: the reason it's a bridge plus tunnel approach is we can only do point to point overlays without multicast | 15:56 |
mgoddard | not if you add static fdb entries | 15:56 |
mgoddard | https://review.opendev.org/#/c/670690/10/tests/run.yml | 15:56 |
donnyd | to at least make sure the job lands somewhere with ipv6 | 15:56 |
clarkb | ugh | 15:56 |
mgoddard | bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst {{ dest_ip }} | 15:56 |
clarkb | so you are broadcasting always? | 15:56 |
mgoddard | no | 15:57 |
mgoddard | that's just a catch all for BUM traffic | 15:57 |
mgoddard | it will learn unicast | 15:58 |
clarkb | I see it is a rule for broadcast frames | 15:58 |
mgoddard | right | 15:58 |
mgoddard | if it works I could tidy up and push into zuul | 15:59 |
clarkb | fwiw improvements to the existing roles and discussion about them are welcome :) | 15:59 |
mgoddard | yeah it's still a PoC | 15:59 |
clarkb | ok. The big two gotchas were that linux bridge (at the time) could not do vxlan, hence ovs (and we can't use gre in some clouds). And the decision to use a bridge was made because multicast doesn't work either | 16:00 |
mgoddard | s/if it works// | 16:00 |
mgoddard | I'm pretty sure it does work :) | 16:00 |
clarkb | the bridge allows us to do point to point and tie everything together. It looks a lot like a real switch on a rack if you squint and pretend :) | 16:00 |
mgoddard | makes sense, given the constraints | 16:01 |
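Pieced together from the linked patch, the per-node setup is roughly the sketch below; interface name, VNI, MTU and addressing are illustrative, and the `bridge fdb append` line is the catch-all for BUM traffic discussed above:

```shell
# One vxlan interface per node with no multicast group: unicast MACs are
# learned, and broadcast/unknown/multicast frames are replicated to every
# peer via an all-zeros fdb entry per destination.
ip link add vxlan0 type vxlan id 10000 local "${local_ip}" dstport 4789
ip link set vxlan0 mtu 1450 up
ip addr add "fd00:0:0:1::${node_index}/64" dev vxlan0
for dest_ip in "${peer_ips[@]}"; do
    bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst "${dest_ip}"
done
```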
*** diablo_rojo has joined #openstack-infra | 16:02 | |
*** ykarel is now known as ykarel|away | 16:02 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-output: introduce zuul_use_fetch_output https://review.opendev.org/681748 | 16:02 |
mgoddard | I'm not actively working on it right now, but if/when I (or yoctozepto) do, I'll look at how to get it into the zuul role | 16:02 |
*** dave-mccowan has joined #openstack-infra | 16:03 | |
*** ricolin has quit IRC | 16:03 | |
*** elod has quit IRC | 16:04 | |
*** ricolin has joined #openstack-infra | 16:04 | |
*** ricolin has quit IRC | 16:05 | |
*** elod has joined #openstack-infra | 16:05 | |
clarkb | fwiw I'd personally much prefer linux bridge simply because tcpdump works out of the box with it | 16:06 |
clarkb | (I'm always amazed at how painful it is to tcpdump an ovs interface) | 16:06 |
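To illustrate the difference (a sketch with assumed interface names): a linux bridge is an ordinary kernel device, so tcpdump attaches directly, whereas capturing switched traffic on an OVS port generally means going through the ovs-tcpdump helper, which sets up and tears down a mirror port for you:

```shell
# linux bridge: capture directly on the bridge device
tcpdump -nn -i br-infra 'udp port 4789'
# OVS: let ovs-tcpdump create a temporary mirror and run tcpdump on it
ovs-tcpdump -i br-infra -nn 'udp port 4789'
```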
slaweq | clarkb: thx for links, I will check that later today or tomorrow morning | 16:07 |
*** e0ne has quit IRC | 16:08 | |
*** dtantsur is now known as dtantsur|afk | 16:09 | |
*** ociuhandu has quit IRC | 16:13 | |
*** ccamacho has quit IRC | 16:16 | |
donnyd | clarkb: shouldn't nodepool be showing 10 max from turning on the separate pool? | 16:17 |
donnyd | http://grafana.openstack.org/d/3Bwpi5SZk/nodepool-fortnebula?orgId=1&from=now-3h&to=now | 16:17 |
zbr | clarkb: https://review.opendev.org/#/c/677971/ info-banner? thanks. | 16:17 |
clarkb | donnyd: it is for me | 16:17 |
donnyd | I just needed to refresh my page i guess. doh | 16:18 |
*** electrofelix has quit IRC | 16:19 | |
clarkb | zbr: in that change, looking at it again, do we have to do anything to ensure the ansible_distribution fact is gathered? we explicitly gather the python info | 16:22 |
openstackgerrit | David Shrewsbury proposed zuul/nodepool master: Log new image upload external ID https://review.opendev.org/681812 | 16:24 |
zbr | clarkb: AFAIK, zuul collects default facts on all nodes so we should be safe. | 16:24 |
pabelanger | that isn't 100% true, for localhost we try not to (to avoid leaking data) | 16:25 |
zbr | clarkb: but what I did in other cases was to add a conditional to run "setup" at the start of the role when one of the used facts was not defined, can do that too. | 16:25 |
pabelanger | so we need to be careful about adding that to emit-job-header, which runs on localhost | 16:25 |
pabelanger | that would be my concern using setup their, data about the executor leaks somehow | 16:26 |
pabelanger | there* | 16:26 |
clarkb | pabelanger: can you leave a comment on the change about that? | 16:26 |
pabelanger | sure, will do that shortly | 16:26 |
clarkb | ty | 16:26 |
zbr | haha, already did it https://review.opendev.org/#/c/677971/8/roles/emit-job-header/tasks/main.yaml | 16:28 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-output: introduce zuul_use_fetch_output https://review.opendev.org/681748 | 16:28 |
*** DinaBelova has quit IRC | 16:30 | |
*** goldyfruit_ has quit IRC | 16:32 | |
*** udesale has quit IRC | 16:32 | |
*** dave-mccowan has quit IRC | 16:34 | |
clarkb | digging into the swift upload log error we noticed in rax. I see only one occurrence of that in the last 10 hours | 16:34 |
clarkb | pabelanger: ^ fyi since you had mentioned it earlier | 16:34 |
clarkb | definitely something to get more debugging for, but not majorly catastrophic | 16:34 |
clarkb | I determined this via grep of the current executor debug logs | 16:34 |
*** mattw4 has joined #openstack-infra | 16:34 | |
*** dave-mccowan has joined #openstack-infra | 16:37 | |
*** DinaBelova has joined #openstack-infra | 16:39 | |
zbr | pabelanger: re info-banner, i really doubt that collecting "gather_subset: python" is leaking any secrets, or maybe we want to keep the python version being used a secret? | 16:44 |
jrosser | are the infra mirrors getting packages for debian buster? | 16:45 |
*** gfidente has quit IRC | 16:46 | |
*** igordc has joined #openstack-infra | 16:46 | |
clarkb | jrosser: http://mirror.dfw.rax.openstack.org/debian/dists/buster/ says yes. and http://mirror.dfw.rax.openstack.org/debian/timestamp.txt says that was last updated today | 16:46 |
jrosser | http://mirror.dfw.rax.opendev.org/debian/pool/main/e/exim4/exim4-config_4.92-8_all.deb 404 Not Found | 16:46 |
*** pkopec has quit IRC | 16:48 | |
*** goldyfruit_ has joined #openstack-infra | 16:48 | |
jrosser | i have a log here https://openstack.fortnebula.com:13808/v1/AUTH_e8fd161dc34c421a979a9e6421f823e9/zuul_opendev_logs_f66/681777/5/check/openstack-ansible-deploy-aio_lxc-debian-stable/f66a770/job-output.txt | 16:48 |
clarkb | http://mirror.dfw.rax.openstack.org/debian/pool/main/e/exim4/exim4-config_4.92-8+deb10u1_all.deb exists | 16:49 |
jrosser | hmm | 16:50 |
clarkb | reprepro is supposed to produce a valid index | 16:51 |
clarkb | I wonder where the other name comes from | 16:51 |
fungi | could a long-running job have straddled a delete? | 16:53 |
fungi | would have to be long enough to have done an apt install/upgrade from data two pulses after it did an apt update, in theory | 16:54 |
fungi | or it's doing an apt install without an apt update first and so relying on a package index contemporary with when the image was generated? | 16:55 |
clarkb | fungi: oh ya that could be | 16:55 |
clarkb | jrosser: ^ | 16:55 |
jrosser | ah so check that we have an apt-update at some point before it does that | 16:55 |
clarkb | the role that is doing this is very rhel centric | 16:55 |
clarkb | even the debian package list is called the rhel7 package list | 16:56 |
clarkb | and with yum you don't need to update explicitly first | 16:56 |
fungi | yikes | 16:56 |
clarkb | https://opendev.org/openstack/ansible-hardening/src/branch/master/vars/debian.yml#L53 note the filename | 16:56 |
*** efried_afk is now known as efried | 16:56 | |
*** weshay_passport is now known as weshay | 16:57 | |
jrosser | it does apt update at 2019-09-12 16:36:07.893588 | 16:58 |
jrosser | and again at 2019-09-12 16:36:24.523110 | 16:58 |
clarkb | might need to check the index files then? | 17:00 |
clarkb | see where it's gone wrong | 17:00 |
fungi | yeah, i can confirm that's definitely a 404 currently | 17:00 |
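A minimal sketch of the kind of index check being suggested here, assuming the mirror and Packages.gz paths quoted in this conversation (an editor's illustration, not something run during the discussion):

```python
import gzip
import re
import urllib.request

# Hypothetical check: fetch the published buster Packages index from the
# mirror discussed above and print what it advertises for exim4-config.
URL = ("http://mirror.dfw.rax.openstack.org/debian/dists/buster/"
       "main/binary-amd64/Packages.gz")

with urllib.request.urlopen(URL) as resp:
    index = gzip.decompress(resp.read()).decode("utf-8", errors="replace")

# Stanzas in a Packages index are separated by blank lines.
for stanza in index.split("\n\n"):
    if re.search(r"^Package: exim4-config$", stanza, re.M):
        for field in ("Version", "Filename"):
            match = re.search(r"^%s: (.+)$" % field, stanza, re.M)
            print(field, match.group(1) if match else "?")
```

If the advertised Filename points at a pool path that 404s (as above), the index and the pool are out of sync.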
*** derekh has quit IRC | 17:00 | |
* jrosser has to travel, thanks for the help, bbl | 17:01 | |
fungi | those packages were deleted in the pulse which vos released at 2019-09-07T18:50:02,415682984+00:00 but it errored with "VLDB: vldb entry is already locked" | 17:03 |
fungi | i wonder if we later manually performed a vos release of a broken volume or something | 17:03 |
clarkb | fungi: ya maybe worth rerunning to see if the vos release from today is good | 17:04 |
fungi | though we're currently updating successfully as of 2019-09-12T16:53:43,137317847+00:00 | 17:04 |
fungi | the last broken vos release was 2019-09-09T22:49:50,220254695+00:00 | 17:05 |
fungi | the first successful index regen and vos release was 2019-09-10T18:58:49,163568873+00:00 | 17:07 |
*** xarses has quit IRC | 17:08 | |
*** xarses has joined #openstack-infra | 17:08 | |
*** hamzy_ has quit IRC | 17:09 | |
*** hamzy_ has joined #openstack-infra | 17:11 | |
*** ociuhandu has joined #openstack-infra | 17:11 | |
*** armax has quit IRC | 17:14 | |
*** pkopec has joined #openstack-infra | 17:15 | |
*** jpena is now known as jpena|off | 17:15 | |
*** ociuhandu has quit IRC | 17:16 | |
*** soniya29 has joined #openstack-infra | 17:16 | |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Log swift upload tracebacks with ansible https://review.opendev.org/681843 | 17:16 |
fungi | unfortunately http://files.openstack.org/mirror/debian/dists/buster/main/binary-amd64/Packages.gz does indicate there should be a Version: 4.92-8 for Package: exim4-base but the last modified timestamp on the index is 2019-07-03 22:48 | 17:17 |
openstackgerrit | Merged zuul/nodepool master: Log new image upload external ID https://review.opendev.org/681812 | 17:17 |
fungi | so i wonder if reprepro isn't writing the package indices where we expect them for some reason | 17:17 |
openstackgerrit | Donny Davis proposed openstack/project-config master: Slowly scale FN back up https://review.opendev.org/681845 | 17:17 |
clarkb | fungi: the package is exim4-config fwiw | 17:17 |
fungi | exim4-base also failed to download | 17:18 |
clarkb | ah | 17:18 |
*** ykarel|away has quit IRC | 17:19 | |
fungi | basically all the binary packages built from the exim4 source package are disjunct because there's been an update since then | 17:19 |
fungi | ohh | 17:19 |
*** ykarel|away has joined #openstack-infra | 17:19 | |
fungi | i bet today is when buster-updates activated | 17:20 |
fungi | the buster and buster-updates suites share a common package pool | 17:20 |
*** ralonsoh has quit IRC | 17:20 | |
fungi | http://files.openstack.org/mirror/debian/dists/ | 17:20 |
fungi | i wonder if reprepro is incorrectly configured to limit the number of package versions it keeps | 17:20 |
fungi | and so we're seeing this break because jobs are using the buster suite but not the buster-updates suite (which contains newer exim4 packages they would grab instead if they knew about them) | 17:21 |
sean-k-mooney | am what is the state of vexxhost in ci currently | 17:21 |
clarkb | and that is because we disabled updates as it didn't exist yet? | 17:21 |
clarkb | sean-k-mooney: I think it is leaking boot from volume root disks and we run out of quota | 17:22 |
clarkb | sean-k-mooney: on top of that the quota tracking system seems to think we use 2x what we actually use for volumes | 17:22 |
corvus | clarkb: re testing swift upload -- i don't have a great idea for that; everything else is testable from the cli except for that function which has the ansible wrapper. given that it's an exception handler, i think we can look real close to make sure it has all the closing parens, then throw it into prod and wait for it to hit. | 17:22 |
fungi | clarkb: we didn't disable updates, reprepro was simply refusing to generate any empty indices for buster-updates | 17:23 |
sean-k-mooney | clarkb: ok :) that would explain the node_failures | 17:23 |
clarkb | corvus: k I await your real close inspection then :) | 17:23 |
openstackgerrit | Merged openstack/devstack-gate master: Prepare for stable branching in devstack-gate https://review.opendev.org/679610 | 17:23 |
corvus | clarkb: (if it were the non-error case, i'd say make a test-zuul-swift-upload role and do the base-test dance with it, but that wouldn't really help here) | 17:23 |
clarkb | sean-k-mooney: Shrews has been working to debug if the volume leaks are on our end | 17:23 |
fungi | i think projects worked around the lack of buster-updates suite by omitting it from their sources.list files | 17:23 |
fungi | but now it exists, and there are newer package versions, and reprepro is configured to only keep the new packages even though they technically belong to a different suite | 17:24 |
sean-k-mooney | clarkb: while FN was offline i moved some other temp jobs to vexxhost-only flavors. i'll swap them to either limestone or FN | 17:24 |
corvus | clarkb: my mental python parser caught an error :) | 17:25 |
sean-k-mooney | these specific temp jobs need nested virt but nothing else special | 17:25 |
clarkb | corvus: perfect :) | 17:25 |
sean-k-mooney | *labels not flavors | 17:25 |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Log swift upload tracebacks with ansible https://review.opendev.org/681843 | 17:25 |
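For context, the general pattern 681843 is about looks roughly like this; a sketch only, using the standard AnsibleModule boilerplate with a hypothetical upload helper, not the actual zuul-jobs role code:

```python
import traceback

from ansible.module_utils.basic import AnsibleModule


def upload_logs(container):
    # Hypothetical placeholder for the actual swift upload logic.
    raise NotImplementedError


def run_module():
    module = AnsibleModule(argument_spec=dict(
        container=dict(type='str', required=True),
    ))
    try:
        upload_logs(module.params['container'])
    except Exception:
        # Surface the full traceback in the failure result so it shows up
        # in the executor's ansible output instead of being swallowed.
        module.fail_json(msg='swift upload failed',
                         traceback=traceback.format_exc())
    module.exit_json(changed=True)


if __name__ == '__main__':
    run_module()
```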
logan- | sean-k-mooney: regarding the numa flavors, can you give me the specs for what I need to add for that? | 17:25 |
*** soniya29 has quit IRC | 17:26 | |
sean-k-mooney | sure, one sec. it's basically just hw:numa_nodes=2 but i think i have a link to the ones that donnyd created | 17:27 |
AJaeger | clarkb: you reviewed https://review.opendev.org/#/c/678573/ previously, should I update following the comments by frickler ? Or what's your take on it? | 17:28 |
fungi | confirmed, looking in http://files.openstack.org/mirror/debian/pool/main/e/exim4/ it's only providing the versions with 4.92-8+deb10u1 versions which are for debian 10 update 1 in buster-updates | 17:29 |
sean-k-mooney | logan-: https://www.irccloud.com/pastebin/FWxMEIqc/ | 17:29 |
sean-k-mooney | so the last two are the main ones | 17:29 |
fungi | not the 4.92-8 versions from buster | 17:29 |
*** jcoufal has quit IRC | 17:29 | |
sean-k-mooney | the first one was a hack to make sure it worked | 17:29 |
*** e0ne has joined #openstack-infra | 17:29 | |
logan- | ok, got it, thanks | 17:29 |
sean-k-mooney | logan-: if you can only provide the 8G one that is ok too, i plan to move to only that one in the future | 17:30 |
sean-k-mooney | while testing the job the extra ram is useful on the controller but i think i can optimise down | 17:30 |
clarkb | AJaeger: frickler: huh, frickler seems to be correct. I guess that is a bug in python's docs? | 17:31 |
clarkb | AJaeger: frickler I think we should catch OSError in that case | 17:31 |
sean-k-mooney | logan-: the other thing i want to add is a nodepool label for nested-virt-ubuntu-bionic. if we had that common across FN, limestone and vexxhost we could more reliably run jobs across all three. that label would just be an alias for the normal bionic one with no extra specs added | 17:32 |
AJaeger | thanks, clarkb - will try to update | 17:33 |
*** xek has joined #openstack-infra | 17:34 | |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: Log swift upload tracebacks with ansible https://review.opendev.org/681843 | 17:37 |
*** jamesmcarthur has quit IRC | 17:38 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-puppet-module-output: introduce zuul_use_fetch_output https://review.opendev.org/681855 | 17:38 |
openstackgerrit | Andreas Jaeger proposed zuul/zuul-jobs master: Add tests for manifest generation for missing files https://review.opendev.org/678573 | 17:39 |
openstackgerrit | David Shrewsbury proposed zuul/nodepool master: Do not overwrite image upload zk data on delete https://review.opendev.org/681857 | 17:40 |
Shrews | corvus: Can you think of any reason why the code I change in 681857 would be using a new ImageUpload object to update an existing object?? | 17:41 |
clarkb | AJaeger: lgtm thanks! | 17:42 |
*** priteau has quit IRC | 17:42 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-python-sdist-output: introduce zuul_use_fetch_output https://review.opendev.org/681859 | 17:45 |
*** markvoelker has quit IRC | 17:45 | |
*** markvoelker has joined #openstack-infra | 17:46 | |
fungi | on an unrelated topic, clarkb as infra ptl you might be interested in at least skimming https://review.opendev.org/681260 | 17:46 |
AJaeger | thanks, clarkb. | 17:46 |
Shrews | corvus: i think this may have been an accidental carryover from before we used proper objects to represent the data, just looking at the git history of that code :/ | 17:46 |
Shrews | this also does not appear to be #zuul :) | 17:47 |
AJaeger | do we want to scale FN slowly up? https://review.opendev.org/681845 adds 10 nodes... | 17:47 |
logan- | sean-k-mooney: yep makes sense | 17:53 |
AJaeger | config-core, could you review https://review.opendev.org/681785 https://review.opendev.org/681276 https://review.opendev.org/#/c/681259/ https://review.opendev.org/681361 https://review.opendev.org/680901 , please? | 17:53 |
openstackgerrit | Noorul Islam K M proposed zuul/zuul master: Fixed pull request URL and canMerge interface https://review.opendev.org/681860 | 17:55 |
clarkb | fungi: will do thanks | 17:56 |
sean-k-mooney | AJaeger: it seems to be working ok | 17:56 |
sean-k-mooney | i have a few jobs running on the special flavors in the other pool | 17:57 |
sean-k-mooney | those are running well | 17:57 |
*** pcaruana has joined #openstack-infra | 17:58 | |
*** armax has joined #openstack-infra | 17:59 | |
corvus | Shrews: i can not -- if you've looked at git history and don't see anything then lgtm | 17:59 |
AJaeger | frickler: is https://review.opendev.org/678573 now good? | 18:00 |
*** pkopec has quit IRC | 18:03 | |
*** pkopec has joined #openstack-infra | 18:03 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-python-sdist-output: introduce zuul_use_fetch_output https://review.opendev.org/681859 | 18:04 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-tox-output: introduce zuul_use_fetch_output https://review.opendev.org/681864 | 18:04 |
*** jamesmcarthur has joined #openstack-infra | 18:07 | |
clarkb | corvus: zuul is happy with https://review.opendev.org/#/c/681843/ now (it tripped over linting on previous ps) | 18:08 |
*** dolpher has quit IRC | 18:13 | |
*** hamzy_ has quit IRC | 18:16 | |
*** hamzy_ has joined #openstack-infra | 18:17 | |
*** iurygregory has quit IRC | 18:19 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-python-sdist-output: introduce zuul_use_fetch_output https://review.opendev.org/681859 | 18:25 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-tox-output: introduce zuul_use_fetch_output https://review.opendev.org/681864 | 18:25 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-sphinx-tarball: introduce zuul_use_fetch_output https://review.opendev.org/681870 | 18:25 |
*** pkopec has quit IRC | 18:27 | |
*** njohnston is now known as njohnston|lunch | 18:28 | |
prometheanfire | are we having problems with the opensuse-15.1 builds? | 18:36 |
fungi | prometheanfire: switch to opensuse-15 | 18:38 |
fungi | i believe the opensuse-15.0 and opensuse-15.1 nodes are deprecated in favor of generic opensuse-15 since a few weeks | 18:38 |
prometheanfire | ok, I'll try that | 18:38 |
prometheanfire | docs should be updated :D https://github.com/openstack/diskimage-builder/tree/master/diskimage_builder/elements/opensuse | 18:39 |
*** mriedem is now known as mriedem_afk | 18:39 | |
AJaeger | note that opensuse-15 *is* 15.1 currently, so if you have problems with 15.1 builds, that won't help. Best ask dirk, cmurphy, or evrardjp for help | 18:39 |
*** jamesmcarthur has quit IRC | 18:40 | |
*** jamesmcarthur has joined #openstack-infra | 18:40 | |
prometheanfire | ah, ya, DIB_RELEASE set to 15 doesn't even fetch the image | 18:40 |
* prometheanfire is trying to just get a generic non-cloud qcow2 image | 18:41 | |
*** armstrong has quit IRC | 18:44 | |
openstackgerrit | Merged zuul/zuul-jobs master: Log swift upload tracebacks with ansible https://review.opendev.org/681843 | 18:49 |
*** jamesmcarthur has quit IRC | 18:50 | |
*** kjackal has quit IRC | 18:51 | |
*** kjackal_v2 has joined #openstack-infra | 18:51 | |
*** jamesmcarthur has joined #openstack-infra | 18:52 | |
*** pkopec has joined #openstack-infra | 18:53 | |
*** jamesmcarthur has quit IRC | 18:58 | |
*** jamesmcarthur has joined #openstack-infra | 19:03 | |
redrobot | Hi infra friends! | 19:03 |
*** armax has quit IRC | 19:03 | |
*** jcoufal has joined #openstack-infra | 19:04 | |
redrobot | I'm trying to debug the Fedora gate failure in Barbican | 19:04 |
*** dciabrin has joined #openstack-infra | 19:06 | |
*** armax has joined #openstack-infra | 19:06 | |
redrobot | It was driving me crazy because the changes I'm making to the playbook didn't seem to get picked up | 19:06 |
redrobot | and I finally narrowed it down to this ansible task that is running the stable/stein branch for some reason: https://zuul.opendev.org/t/openstack/build/ab3577be51234895a664bd283bfc10e2/log/job-output.txt#2372 | 19:07 |
*** dciabrin_ has quit IRC | 19:07 | |
clarkb | redrobot: if you look in the zuul inventory file there is a job inheritance stack | 19:08 |
clarkb | that tends to be useful to see why these things happen | 19:08 |
*** lucasagomes has quit IRC | 19:09 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-tox-output: introduce zuul_use_fetch_output https://review.opendev.org/681864 | 19:09 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-sphinx-tarball: introduce zuul_use_fetch_output https://review.opendev.org/681870 | 19:09 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-subunit-output: introduce zuul_use_fetch_output https://review.opendev.org/681882 | 19:09 |
clarkb | it's due to your branch matcher on barbican-devstack-functional-base | 19:12 |
clarkb | you shouldn't use a branch matcher there; instead rely on the implicit match for the current branch | 19:13 |
clarkb | redrobot: ^ | 19:13 |
redrobot | clarkb, hmmm... I'm not sure I understand how that works... You're talking about this branch matcher, right? https://opendev.org/openstack/barbican/src/branch/master/.zuul.yaml#L4 | 19:15 |
redrobot | I'm looking at the _inheritance_path: list in the inventory file you pointed out | 19:16 |
redrobot | but I'm not sure how to interpret it? | 19:16 |
*** JorgeFranco has joined #openstack-infra | 19:17 | |
clarkb | yes, that says 'this config applies to these branches', but since the stein branch has that matcher too, the stein config applies to master as well | 19:17 |
clarkb | by default without a branch matcher the config in the current branch applies to the current branch only | 19:17 |
*** ykarel|away has quit IRC | 19:18 | |
donnyd | sean-k-mooney: Are your jobs running in FN ok? | 19:18 |
clarkb | that line should likely just be deleted in all of your branches, and if ocata needs something special change the config on ocata | 19:19 |
redrobot | clarkb, ack. I think I understand now... | 19:19 |
sean-k-mooney | donnyd: yes | 19:20 |
sean-k-mooney | i see you put them back in a separate pool too | 19:20 |
sean-k-mooney | im not sure that was required but it does make running them faster | 19:20 |
donnyd | well there is no other load | 19:20 |
donnyd | We did that so we could re-enable the NUMA jobs without the general pool being included | 19:21 |
donnyd | how many have you ran? | 19:21 |
* redrobot sits down in the corner to read https://docs.openstack.org/infra/manual/zuulv3.html#job-inheritance | 19:21 | |
donnyd | looks like 8 are running right now, but not sure if that is all you | 19:21 |
donnyd | clarkb: do you think it's safe to re-enable a small step in FN | 19:22 |
donnyd | and also we should probably revert the change that removed the ipv6 record | 19:22 |
sean-k-mooney | it's all me | 19:23 |
sean-k-mooney | 1 2-node job finished and i have 4 running | 19:23 |
sean-k-mooney | all without the connection issue we were seeing before | 19:23 |
yoctozepto | seems ubuntu got hit by "ipv6 public promised but not present" too: https://storage.gra1.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e3d/681573/12/check/kolla-ansible-ubuntu-source-ipv6/e3d170f/zuul-info/ | 19:24 |
yoctozepto | different cloud now - ovh rather than rax | 19:24 |
clarkb | yoctozepto: did that job run on ovh? | 19:24 |
yoctozepto | seems so | 19:25 |
yoctozepto | clarkb | 19:25 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-tox-output: introduce zuul_use_fetch_output https://review.opendev.org/681864 | 19:25 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-sphinx-tarball: introduce zuul_use_fetch_output https://review.opendev.org/681870 | 19:25 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-subunit-output: introduce zuul_use_fetch_output https://review.opendev.org/681882 | 19:25 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-translation-output: introduce zuul_use_fetch_output https://review.opendev.org/681887 | 19:25 |
clarkb | ya that happens because ovh neutron api says 'here is your ipv6 address' but the instance has no info of that address | 19:25 |
clarkb | there are no router advertisements and the info is not in config drive | 19:25 |
clarkb | we've had a long standing ask from them to expose that info to the instances | 19:25 |
yoctozepto | ah, ok | 19:25 |
yoctozepto | thanks for quick response | 19:26 |
yoctozepto | then we are bound to that vxlan thingy | 19:26 |
openstackgerrit | Bob Fournier proposed openstack/diskimage-builder master: Use x86 architeture specific grub2 packages for RHEL https://review.opendev.org/681889 | 19:26 |
yoctozepto | eh, switching course then | 19:26 |
clarkb | yes, since inap at least doesn't have ipv6, nor does ovh | 19:26 |
yoctozepto | oh, we have inap as provider? | 19:27 |
yoctozepto | never seen | 19:27 |
clarkb | it's there but currently not launching instances | 19:27 |
mgagne | yoctozepto: it's been disabled for a couple months now. =( | 19:27 |
clarkb | we need to sync up with mgagne on turning it back on | 19:27 |
yoctozepto | mgoddard: vxlan is a must - but as promised the job works as soon as the underlay does :-) | 19:28 |
yoctozepto | clarkb, mgagne: roger that | 19:28 |
yoctozepto | mgoddard: so it seems it's vxlan either way | 19:29 |
clarkb | mgagne: any more info on that? even if it will disabled for a while longer? | 19:29 |
yoctozepto | clarkb: one last question before I go to sleep - where to look at that zuul role you made for vxlan (for comparison sake) | 19:29 |
donnyd | how do i update the serial for the opendev zone? what is used to generate that? | 19:29 |
mgagne | clarkb: we are awaiting a network maintenance which takes longer than expected. | 19:29 |
*** factor has quit IRC | 19:30 | |
*** factor has joined #openstack-infra | 19:30 | |
clarkb | yoctozepto: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles the ones with multi-node prefix | 19:30 |
clarkb | donnyd: current unix epoch timestamp | 19:30 |
clarkb | donnyd: I use date +%s output | 19:30 |
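For reference, the serial convention described here expressed in python rather than the shell (trivial sketch):

```python
import time

# The zone serial is just the current unix epoch timestamp,
# i.e. the same number `date +%s` prints.
print(int(time.time()))
```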
mgagne | clarkb: we need to change some network topology and it's unfortunately not a high priority atm. I poked my coworker about it. Lets see where it goes... | 19:31 |
clarkb | mgagne: thank you for the update | 19:31 |
yoctozepto | clarkb: thanks, multi-node-bridge looks like the one | 19:31 |
slaweq | clarkb: hi, can You take a look at https://review.opendev.org/681893 - I hope it will do what You talked with fungi earlier | 19:32 |
openstackgerrit | Donny Davis proposed opendev/zone-opendev.org master: revert "Temporarily remove AAAA RR for fortnebula mirror" https://review.opendev.org/681895 | 19:32 |
clarkb | slaweq: that looks good. to confirm it works check the logs of that job against that change for the -c arguments to pip install in tox | 19:34 |
clarkb | it should be a local file path not an https url | 19:34 |
slaweq | clarkb: sure, I will check that tomorrow morning :) | 19:34 |
*** jcoufal has quit IRC | 19:36 | |
donnyd | To re-enable FN I am thinking we should do https://review.opendev.org/681895 first and then https://review.opendev.org/#/c/681845/ | 19:36 |
clarkb | I can't double check the IP currently but the dns change lgtm otherwise | 19:37 |
clarkb | fungi: ^ can probably verify? | 19:38 |
clarkb | oh actually can you set the ttl to 3600 like with the A record? | 19:38 |
clarkb | donnyd: ^ | 19:38 |
donnyd | https://www.irccloud.com/pastebin/C2BvHcCS/ | 19:39 |
donnyd | yea np | 19:39 |
openstackgerrit | Donny Davis proposed opendev/zone-opendev.org master: revert "Temporarily remove AAAA RR for fortnebula mirror" https://review.opendev.org/681895 | 19:40 |
fungi | can do, just a sec | 19:41 |
*** jamesmcarthur has quit IRC | 19:43 | |
*** jamesmcarthur has joined #openstack-infra | 19:44 | |
fungi | clarkb: donnyd: i've double-checked that locally on the server and via the nova api | 19:46 |
fungi | `ssh 2001:470:e045:2:f816:3eff:fee6:691d hostname -f` also returns "mirror01.regionone.fortnebula.opendev.org" for me | 19:47 |
clarkb | I think you can approve it then | 19:47 |
fungi | done | 19:49 |
*** jamesmcarthur has quit IRC | 19:49 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-tox-output: introduce zuul_use_fetch_output https://review.opendev.org/681864 | 19:51 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-sphinx-tarball: introduce zuul_use_fetch_output https://review.opendev.org/681870 | 19:51 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-subunit-output: introduce zuul_use_fetch_output https://review.opendev.org/681882 | 19:51 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-translation-output: introduce zuul_use_fetch_output https://review.opendev.org/681887 | 19:51 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-javascript-content-tarball: introduce zuul_use_fetch_output https://review.opendev.org/681903 | 19:51 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-coverage-output: introduce zuul_use_fetch_output https://review.opendev.org/681904 | 19:52 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-sphinx-output: introduce zuul_use_fetch_output https://review.opendev.org/681905 | 19:52 |
*** goldyfruit_ has quit IRC | 19:52 | |
*** goldyfruit has joined #openstack-infra | 19:52 | |
*** pkopec has quit IRC | 19:53 | |
openstackgerrit | Merged opendev/zone-opendev.org master: revert "Temporarily remove AAAA RR for fortnebula mirror" https://review.opendev.org/681895 | 19:53 |
*** icarusfactor has joined #openstack-infra | 19:53 | |
*** factor has quit IRC | 19:55 | |
donnyd | fungi: can we move forward on scaling FN back up to 10 test nodes | 19:55 |
donnyd | https://review.opendev.org/#/c/681845/ | 19:56 |
*** jbadiapa has quit IRC | 19:57 | |
openstackgerrit | Jeremy Stanley proposed opendev/git-review master: Support spaces in topic https://review.opendev.org/681906 | 19:57 |
*** jtomasek has quit IRC | 19:58 | |
fungi | donnyd: yep, approved just now | 19:59 |
donnyd | thanks fungi | 19:59 |
*** njohnston|lunch is now known as njohnston | 19:59 | |
donnyd | I will watch the edge and see if we get the same retries. If so I will kill it | 19:59 |
fungi | thanks! | 20:01 |
*** e0ne has quit IRC | 20:03 | |
*** jamesmcarthur has joined #openstack-infra | 20:04 | |
openstackgerrit | Merged openstack/project-config master: Slowly scale FN back up https://review.opendev.org/681845 | 20:08 |
pabelanger | slowly! :) | 20:09 |
*** dolpher has joined #openstack-infra | 20:09 | |
redrobot | clarkb, ok, so I just want to make sure I'm understanding the config correctly. Barbican is an "untrusted-project" which is why Zuul is looking at the config in all branches (including stable/stein) for a change to master? | 20:10 |
redrobot | as in https://zuul-ci.org/docs/zuul/user/config.html#configuration-loading | 20:10 |
*** markvoelker has quit IRC | 20:11 | |
clarkb | it's looking at all branches because the branch matcher tells it to | 20:11 |
*** markvoelker has joined #openstack-infra | 20:11 | |
redrobot | hmm... maybe I'm looking into this too much.... my understanding from reading the above is that zuul is looking at the config in all branches, but because we define that matcher in all branches then all branches apply to master? | 20:12 |
rm_work | oh hey, question -- with the new zuul log display stuff, the old way of colorizing/filtering/linkifying the logs doesn't work? | 20:12 |
clarkb | config on any branch with that matcher applies to all branches the matcher matches | 20:12 |
rm_work | is there a way to make that happen again or is that just dead until more work is done on the zuul side? | 20:12 |
clarkb | redrobot: if you delete the branch matcher then only the config in that branch applies to that branch | 20:13 |
clarkb | but you need to do that for all branches | 20:13 |
fungi | zuul checks all branches of the project for configuration, and when it finds explicit branch matchers (on any branch of the project) it incorporates them. if it doesn't find explicit branch matchers it uses implicit branch matching based on what branch it found the config on | 20:13 |
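A conceptual sketch of that selection logic (illustration only, with made-up branch lists; Zuul's real matching uses regexes and more state):

```python
def applicable_variants(change_branch, variants):
    """Pick which job variants apply to a change on change_branch.

    Each variant is (branch_it_was_found_on, explicit_branches), where
    explicit_branches is None when no `branches:` matcher was written.
    """
    selected = []
    for found_on, explicit in variants:
        if explicit is None:
            # Implicit matcher: only applies to the branch it was found on.
            if found_on == change_branch:
                selected.append((found_on, explicit))
        elif change_branch in explicit:
            # Explicit matcher: applies wherever it matches, no matter
            # which branch the config was found on.
            selected.append((found_on, explicit))
    return selected


# Hypothetical barbican-like situation: the same explicit matcher lives on
# master and stable/stein, so a master change picks up both variants.
variants = [
    ("master", ["master", "stable/stein"]),
    ("stable/stein", ["master", "stable/stein"]),
]
print(applicable_variants("master", variants))
```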
clarkb | rm_work: it should work if you use the zuul rendered logs | 20:13 |
donnyd | Load is low, but there is traffic | 20:13 |
redrobot | clarkb, ok, I think I'm starting to grok this. Thanks for the help! I really appreciate it. :) | 20:13 |
donnyd | https://usercontent.irccloud-cdn.com/file/7vLodPmg/image.png | 20:13 |
rm_work | hmmmmm | 20:13 |
donnyd | fungi: ^^^^ | 20:13 |
clarkb | don't load the raw version | 20:13 |
donnyd | that looks mo betta | 20:13 |
rm_work | which page is the zuul rendered one | 20:13 |
fungi | rm_work: give us an example | 20:14 |
fungi | donnyd: that's a lot of zero | 20:14 |
donnyd | yea, that should be what it looks like | 20:14 |
fungi | good! | 20:14 |
fungi | yep | 20:15 |
donnyd | I will keep watching as load increases... but before every single flow had some issue | 20:15 |
clarkb | rm_work: the zuul.opendev.org/build/uuid/path url | 20:15 |
clarkb | the line numbers are clickable and you can filter severity there | 20:16 |
rm_work | sorry yeah doing like 5 things, sec, looking | 20:16 |
*** xenos76 has quit IRC | 20:17 | |
*** xek has quit IRC | 20:21 | |
fungi | i think it's confused at least a few people that there's more than one way to pull up log files | 20:21 |
rm_work | https://371dc00fd45f0dbfe46c-53453ebbb69cd33a875a83010b4c2a5c.ssl.cf2.rackcdn.com/681144/9/check/octavia-v2-dsvm-py2-scenario-centos-7/b4586ac/controller/logs/screen-n-api.txt.gz | 20:23 |
rm_work | for example | 20:23 |
clarkb | ya that's the raw url, if you open the url returned to gerrit you get those features back | 20:23 |
clarkb | fungi can probably type better than me | 20:23 |
rm_work | errr | 20:24 |
rm_work | where's that? | 20:24 |
rm_work | how can I get to that file via the other way? | 20:24 |
fungi | what's the initial link gerrit gave you? | 20:24 |
rm_work | to get to this I click the Logs tab and navigate to the logs? | 20:24 |
rm_work | so I go here: https://zuul.opendev.org/t/openstack/build/dbc0ecfbc95d40769c536392f05400dc/logs | 20:24 |
rm_work | and click "controller" -> "logs" -> "screen-o-api.txt.gz" | 20:25 |
rm_work | I wasn't aware of another way to get to those log files | 20:25 |
fungi | when i do that it takes me to https://zuul.opendev.org/t/openstack/build/dbc0ecfbc95d40769c536392f05400dc/log/controller/logs/screen-n-api.txt.gz | 20:26 |
fungi | are you clicking on the little linkaway icon to the right of the filename, or on the filename? | 20:26 |
fungi | the former will go to the url you pasted, the latter to the url i pasted | 20:26 |
rm_work | uhhh | 20:27 |
rm_work | clicking on the filename? | 20:27 |
rm_work | ahh you have to click the little *arrows* next to the directory names | 20:27 |
rm_work | if you click on "controller" it takes you out | 20:27 |
rm_work | wtf | 20:27 |
fungi | oh, yeah, to expand the contents | 20:27 |
rm_work | this is not intuitive | 20:28 |
fungi | i concur, it could probably be improved on | 20:28 |
rm_work | I wasn't even aware I could click on those, or that it would do something different, and it is freaking tiny | 20:28 |
rm_work | T_T | 20:28 |
rm_work | but thanks, at least now that I know, I can get there now | 20:29 |
jrosser | i find those controls very difficult on a mobile device | 20:29 |
fungi | maybe the directory names should link to the same thing the > does and then we should add little linkaway icons next to the directory names to go to the file indices like we do to go to the raw log urls | 20:29 |
rm_work | yesplz | 20:29 |
rm_work | there's already a linkaway icon, it just isn't clickable | 20:29 |
*** lpetrut has joined #openstack-infra | 20:30 | |
fungi | oh, indeed, we now link the word "raw" | 20:30 |
rm_work | i bet you like 3 beers that more than 50% of users believe you've "re-added the feature", not just made it more accessible, lol | 20:30 |
fungi | so maybe we add that in parentheses | 20:30 |
fungi | rm_work: you are probably correct | 20:30 |
*** kjackal_v2 has quit IRC | 20:31 | |
rm_work | yeah that's funny, on FILES it shows the (raw) thing | 20:31 |
fungi | i'm going to re-ask in #zuul since we're using the upstream feature as it ships, and see if this is something folks think is easy to change | 20:31 |
rm_work | but on directories it doesn't do anything and it just looks like you'd click on the dirname | 20:31 |
rm_work | also on files I feel like the linkaway image should be part of the hyperlink tag | 20:32 |
rm_work | ¯\_(ツ)_/¯ | 20:32 |
*** pcaruana has quit IRC | 20:32 | |
rm_work | I guess I could always put up a patch, lol | 20:32 |
fungi | yeah, i mean, i *would* put up a patch but my javascript knowledge is just a hair past nonexistent so... | 20:33 |
rm_work | yeah i only know http://vanilla-js.com/ | 20:35 |
*** prometheanfire has quit IRC | 20:36 | |
*** prometheanfire has joined #openstack-infra | 20:36 | |
*** e0ne has joined #openstack-infra | 20:37 | |
*** markvoelker has quit IRC | 20:37 | |
*** trident has quit IRC | 20:39 | |
*** markvoelker has joined #openstack-infra | 20:39 | |
*** mriedem_afk is now known as mriedem | 20:39 | |
*** Goneri has quit IRC | 20:42 | |
fungi | i learned a little javascript back in the mid-1990s to up my webmaster game and do fancy things with hyperlinks | 20:42 |
fungi | i think that's about where it ended for me | 20:42 |
fungi | been meaning to get up to speed on what's happened with it since the old netscape navigator days | 20:43 |
*** lpetrut has quit IRC | 20:46 | |
*** hamzy_ has quit IRC | 20:49 | |
*** trident has joined #openstack-infra | 20:51 | |
openstackgerrit | Donny Davis proposed openstack/project-config master: Retransmit count with IPv6 is still unacceptable on FN https://review.opendev.org/681928 | 20:54 |
donnyd | fungi: yea so it's still doing it... doesn't seem to be affecting the jobs at the current scale, but this needs to get fixed properly | 20:55 |
donnyd | I will rework the ipv6 side of the network and probably cut the edge rtr out of the path for the mirror | 20:57 |
fungi | oh well | 20:58 |
donnyd | right now it goes from tenant -> edge -> other tenant for mirror traffic | 20:58 |
donnyd | i will get it sorted out later tonight | 20:58 |
fungi | no rush, thanks for working on it! | 20:59 |
* donnyd frustrated with ipv6 - switching to overwhelming kinetic force mode | 21:00 | |
*** slaweq has quit IRC | 21:00 | |
*** diablo_rojo has quit IRC | 21:01 | |
fungi | percussive maintenance is often surprisingly effective | 21:08 |
openstackgerrit | Merged openstack/project-config master: Retransmit count with IPv6 is still unacceptable on FN https://review.opendev.org/681928 | 21:21 |
*** ramishra has quit IRC | 21:23 | |
*** dustinc has joined #openstack-infra | 21:23 | |
*** e0ne has quit IRC | 21:32 | |
*** markvoelker has quit IRC | 21:32 | |
openstackgerrit | James E. Blair proposed zuul/zuul master: WIP: Support HTTP-only Gerrit https://review.opendev.org/681936 | 21:34 |
*** bdodd has quit IRC | 21:34 | |
*** hamzy_ has joined #openstack-infra | 21:37 | |
openstackgerrit | Jeremy Stanley proposed opendev/git-review master: Support spaces in topic https://review.opendev.org/681906 | 21:38 |
*** whoami-rajat has quit IRC | 21:40 | |
*** exsdev0 has joined #openstack-infra | 21:45 | |
*** exsdev has quit IRC | 21:45 | |
*** exsdev0 is now known as exsdev | 21:45 | |
*** markvoelker has joined #openstack-infra | 21:50 | |
*** efried has quit IRC | 21:50 | |
*** jcoufal has joined #openstack-infra | 21:51 | |
*** xenos76 has joined #openstack-infra | 21:59 | |
*** weshay is now known as weshay|ruck | 22:04 | |
*** jcoufal has quit IRC | 22:06 | |
*** eharney has quit IRC | 22:19 | |
*** rcernin has joined #openstack-infra | 22:19 | |
*** armax has quit IRC | 22:24 | |
*** markvoelker has quit IRC | 22:32 | |
*** whoami-rajat has joined #openstack-infra | 22:37 | |
*** rf0lc0 has joined #openstack-infra | 22:40 | |
*** diablo_rojo has joined #openstack-infra | 22:41 | |
*** rfolco has quit IRC | 22:41 | |
*** slaweq has joined #openstack-infra | 22:42 | |
*** markvoelker has joined #openstack-infra | 22:46 | |
*** slaweq has quit IRC | 22:48 | |
*** diablo_rojo has quit IRC | 22:51 | |
*** markvoelker has quit IRC | 22:51 | |
*** mattw4 has quit IRC | 22:52 | |
*** aaronsheffield has quit IRC | 22:57 | |
fungi | paladox: i don't suppose you know how to encode spaces in gerrit push options, for example to set a change topic with spaces in it? the webui lets me set a topic with spaces but the docs are unclear on actually setting any values with spaces through git push | 23:01 |
paladox | would enclosing it with quotes work? | 23:02 |
fungi | doesn't seem to, from what i can tell | 23:02 |
paladox | hmm | 23:02 |
fungi | i tried both " and ' | 23:02 |
fungi | also took a shot in the dark with substituting %20 for space, but that also didn't do it | 23:03 |
*** tkajinam has joined #openstack-infra | 23:03 | |
*** mattw4 has joined #openstack-infra | 23:03 | |
*** diablo_rojo has joined #openstack-infra | 23:04 | |
fungi | it's come up because a user unwittingly tried to set a (quoted) topic string with git review -t and at first i was like, "nah gerrit doesn't allow spaces in change topics" but i went looking in the docs and they didn't say, so i tried it via the webui and was surprised to discover it worked | 23:05 |
paladox | fungi yeh, i remember fixing this in polygerrit's ui | 23:06 |
fungi | but if it has a way to interpolate spaces in a topic from git push, i can't for the life of me figure out what it is | 23:06 |
paladox | fungi i guess we could call this a *bug* | 23:07 |
paladox | since the docs don't show me an example of a topic on the command line containing a space. | 23:07 |
paladox | https://github.com/GerritCodeReview/gerrit/commit/efcce2f7ee6f5e50f54fd6371e27d661c392135d is the change i did :) | 23:08 |
*** mriedem is now known as mriedem_afk | 23:08 | |
fungi | huh, neat | 23:08 |
corvus | fungi, paladox: under ssh, sometimes quoting is accomplished with "{}" eg: "ssh review gerrit query message:{commit message with spaces}" | 23:20 |
corvus | variations on that might be worth a try in case that applies here | 23:21 |
paladox | would that work with git push (topic=)? | 23:21 |
corvus | no idea, i'm hoping fungi will try and tell us :) | 23:21 |
paladox | something like git push origin HEAD:refs/for/master%topic={multi space master} | 23:21 |
paladox | heh :) | 23:21 |
fungi | mebbe, worth a try for sure. thanks corvus! | 23:22 |
fungi | going to find out in a sec | 23:22 |
*** jamesmcarthur has quit IRC | 23:26 | |
fungi | error: src refspec spaces} does not match any | 23:26 |
fungi | nope, probably not | 23:27 |
fungi | also worth noting, spaces in a topic (added via the webui) break `git review -d ...` with an error like "fatal: 'review/jeremy_stanley/topic spaces' is not a valid branch name." | 23:28 |
fungi | so likely we need robustness in both directions with it | 23:29 |
fungi | at this stage, unless folks have other ideas on how we might solve it, i think postel's law should be applied | 23:31 |
*** dchen has joined #openstack-infra | 23:31 | |
fungi | add a clear error when a user tries to pass spaces in a topic string, and map them to something else when trying to create local git branches from found gerrit topics | 23:32 |
paladox | heh | 23:32 |
*** icarusfactor has quit IRC | 23:32 | |
*** icarusfactor has joined #openstack-infra | 23:33 | |
fungi | probably remap anything we expect git won't handle in a branch string... there's probably a clear specification for those | 23:33 |
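A rough sketch of what that remapping could look like (hypothetical helper, not actual git-review code; it covers the common offenders rather than every git-check-ref-format rule):

```python
import re


def topic_to_branch_component(topic):
    """Map a Gerrit topic to something safe to use in a local branch name.

    Hypothetical helper: collapses whitespace, control characters and the
    characters git refuses in ref names (space, ~, ^, :, ?, *, [, backslash)
    into dashes, and trims a few other forbidden sequences.
    """
    cleaned = re.sub(r"[\s~^:?*\[\\\x00-\x1f]+", "-", topic)
    cleaned = cleaned.replace("..", "-").replace("@{", "-")
    return cleaned.strip("/.") or "no-topic"


# e.g. a webui-created topic like "topic spaces" becomes "topic-spaces",
# so `git review -d` can build review/<owner>/topic-spaces locally.
print(topic_to_branch_component("topic spaces"))
```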
*** igordc has quit IRC | 23:34 | |
*** armax has joined #openstack-infra | 23:35 | |
*** factor has joined #openstack-infra | 23:37 | |
*** icarusfactor has quit IRC | 23:37 | |
* paladox is excited, we've got a new server to replace our existing gerrit server. | 23:40 | |
paladox | we've outgrown the server too, so this upgrade is well needed! | 23:40 |
*** rlandy has quit IRC | 23:41 | |
clarkb | ok I'm finally back to a computer after a long afternoon out. Is there anything I can help with? anything with eg FN? | 23:42 |
fungi | clarkb: we turned fn back down while donnyd ragebuilds a soft router to shuttle traffic between the nodes and the bridge so they don't trombone in and out of his border router | 23:44 |
clarkb | I guess I should check if my swift logging change has caught anything yet | 23:44 |
*** JorgeFranco has quit IRC | 23:46 | |
donnyd | fungi: rage builds... I about fell out of my chair | 23:46 |
donnyd | LOL | 23:46 |
donnyd | There has been a steady stream of jobs so I haven't tinkered with it yet | 23:47 |
donnyd | http://grafana.openstack.org/d/3Bwpi5SZk/nodepool-fortnebula?orgId=1 | 23:47 |
clarkb | I told my kids that servers are computers that talk to other computers. They now think that servers literally speak vocally to other computers | 23:47 |
clarkb | all that to say we should be nice to the servers and routers :) | 23:47 |
donnyd | https://usercontent.irccloud-cdn.com/file/TUdxgDFC/image.png | 23:48 |
donnyd | fungi: | 23:48 |
paladox | lol | 23:48 |
donnyd | this is no good | 23:48 |
fungi | clarkb: postel's law is all about polite network communication, so they might be closer to the truth than you realize | 23:52 |
clarkb | I've not caught any new swift errors | 23:52 |
fungi | donnyd: indeed, that's rather a big number | 23:52 |
clarkb | fungi: I almost pulled out a photo of putting a phone receiver in the modem then realized they don't know that that is a phone either | 23:52 |
fungi | i mean, i've seen some big numbers in my time, but that one's up there | 23:53 |
fungi | clarkb: funny-sad | 23:53 |
clarkb | fungi: the best part to me is how similar that stuff is to what we do today. Layer 1 has changed but the things above aren't all that different | 23:57 |
paladox | clarkb bt used to have that type of modem/router. | 23:59 |
fungi | indeed | 23:59 |
paladox | that's long been dead :) | 23:59 |
paladox | (discontinued with the bt home hub 3, when bt infinity was launched) | 23:59 |
fungi | *i* used to have an acoustic coupler modem | 23:59 |