*** rlandy|ruck has quit IRC | 00:24 | |
*** mattw4 has quit IRC | 00:37 | |
*** jamesmcarthur has joined #zuul | 01:37 | |
*** bhavikdbavishi has joined #zuul | 01:45 | |
*** threestrands has joined #zuul | 01:47 | |
openstackgerrit | Merged openstack-infra/zuul master: Fix dynamic loading of trusted layouts https://review.openstack.org/652787 | 01:53 |
pabelanger | clarkb: ^ | 01:55 |
clarkb | yup I'll be restarting our scheduler soon | 01:56 |
pabelanger | other 2 look good so far | 01:56 |
*** jamesmcarthur has quit IRC | 02:08 | |
openstackgerrit | Merged openstack-infra/zuul master: Config errors should not affect config-projects https://review.openstack.org/652788 | 02:10 |
clarkb | just one to go now | 02:10 |
*** bhavikdbavishi1 has joined #zuul | 02:13 | |
*** bhavikdbavishi has quit IRC | 02:14 | |
clarkb | I think the release note change will need a recheck | 02:14 |
pabelanger | looks like quickstart is waiting for node | 02:16 |
clarkb | it is waiting in the image build job | 02:17 |
*** bhavikdbavishi1 has quit IRC | 02:18 | |
clarkb | which I think must be in limestone and failing because docker can't do ipv6 | 02:18 |
pabelanger | clarkb: is there no way to force ipv4 on docker jobs atm? | 02:19 |
pabelanger | looks like rax | 02:20 |
pabelanger | started now | 02:20 |
*** bhavikdbavishi has joined #zuul | 02:20 | |
clarkb | no, we have ipv6-only clouds and don't have ipv4 labels | 02:21 |
clarkb | it's frustrating because docker shouldn't break on this, it's just a regex input check they do | 02:21 |
pabelanger | ah, right | 02:22 |
pabelanger | forgot about ipv6 only | 02:22 |
*** bhavikdbavishi1 has joined #zuul | 02:24 | |
*** bhavikdbavishi has quit IRC | 02:24 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 02:24 | |
clarkb | oh cool so ya now that it is in rax it appears to be finishing | 02:26 |
clarkb | I don't know if it will merge before puppet runs but I'm not worried about restarting on just the release note change | 02:26 |
pabelanger | ++ | 02:27 |
clarkb | yup puppet just ran | 02:29 |
openstackgerrit | Merged openstack-infra/zuul master: Add release note for broken trusted config loading fix https://review.openstack.org/652793 | 02:29 |
*** jamesmcarthur has joined #zuul | 02:39 | |
*** jamesmcarthur has quit IRC | 02:46 | |
*** weshay_pto is now known as weshay | 02:49 | |
*** jamesmcarthur has joined #zuul | 03:02 | |
*** jamesmcarthur has quit IRC | 03:39 | |
*** jamesmcarthur has joined #zuul | 03:40 | |
*** jamesmcarthur has quit IRC | 03:45 | |
*** jamesmcarthur has joined #zuul | 04:11 | |
*** jamesmcarthur has quit IRC | 04:18 | |
openstackgerrit | Merged openstack-infra/nodepool master: Fix for orphaned DELETED nodes https://review.openstack.org/652729 | 04:35 |
*** swest has joined #zuul | 05:21 | |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul master: Support fail-fast in project pipelines https://review.openstack.org/652764 | 05:22 |
*** quiquell|off is now known as quiquell|rover | 05:49 | |
*** bjackman has joined #zuul | 05:56 | |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul master: Support fail-fast in project pipelines https://review.openstack.org/652764 | 05:59 |
*** saneax has joined #zuul | 06:21 | |
*** saneax has quit IRC | 06:25 | |
*** pcaruana has joined #zuul | 06:26 | |
*** saneax has joined #zuul | 06:26 | |
*** kmalloc has quit IRC | 06:28 | |
*** kmalloc has joined #zuul | 06:30 | |
*** hogepodge has quit IRC | 06:32 | |
*** kmalloc has quit IRC | 06:40 | |
bjackman | Would be great to get a review of https://review.openstack.org/#/c/649900/ if anyone has time - it fixes a bug in the Zuul/Gerrit interface | 06:45 |
*** gtema has joined #zuul | 06:51 | |
*** kmalloc has joined #zuul | 06:53 | |
*** quiquell|rover is now known as quique|rover|brb | 06:55 | |
*** hogepodge has joined #zuul | 06:56 | |
*** hashar has joined #zuul | 06:57 | |
*** quique|rover|brb is now known as quiquell|rover | 07:09 | |
*** bhavikdbavishi has quit IRC | 07:28 | |
*** bhavikdbavishi has joined #zuul | 07:29 | |
openstackgerrit | Simon Westphahl proposed openstack-infra/zuul master: Ensure correct state in MQTT connection https://review.openstack.org/652932 | 07:29 |
*** gtema has quit IRC | 07:45 | |
*** gtema has joined #zuul | 07:46 | |
*** dkehn has quit IRC | 07:55 | |
*** threestrands has quit IRC | 07:58 | |
*** bhavikdbavishi1 has joined #zuul | 08:37 | |
*** bhavikdbavishi has quit IRC | 08:38 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 08:38 | |
openstackgerrit | Brendan proposed openstack-infra/zuul master: gerrit: Add support for 'oldValue' comment-added field https://review.openstack.org/649900 | 09:31 |
*** saneax has quit IRC | 09:39 | |
openstackgerrit | Brendan proposed openstack-infra/zuul master: gerrit: Add support for 'oldValue' comment-added field https://review.openstack.org/649900 | 09:49 |
*** hashar has quit IRC | 09:57 | |
openstackgerrit | Brendan proposed openstack-infra/zuul master: gerrit: Add support for 'oldValue' comment-added field https://review.openstack.org/649900 | 10:27 |
*** panda is now known as panda|lunch | 11:05 | |
*** pcaruana has quit IRC | 11:17 | |
*** sshnaidm has quit IRC | 11:17 | |
openstackgerrit | Brendan proposed openstack-infra/zuul master: gerrit: Add support for 'oldValue' comment-added field https://review.openstack.org/649900 | 11:18 |
*** quiquell|rover is now known as quique|rover|eat | 11:25 | |
*** sshnaidm has joined #zuul | 11:26 | |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul master: Support fail-fast in project pipelines https://review.openstack.org/652764 | 11:37 |
*** gtema has quit IRC | 11:40 | |
*** gtema has joined #zuul | 11:43 | |
*** pcaruana has joined #zuul | 12:05 | |
Shrews | corvus: tobiash: the cache is probably a decent guess. November is when we added I4834a7ea722cf2ac7df79455ce077832ae966e63 (Add second level cache of nodes) | 12:05 |
*** quique|rover|eat is now known as quiquell|rover | 12:07 | |
*** rlandy has joined #zuul | 12:19 | |
*** rlandy is now known as rlandy|ruck | 12:20 | |
*** jamesmcarthur has joined #zuul | 12:21 | |
*** panda|lunch is now known as panda | 12:22 | |
*** jamesmcarthur has quit IRC | 12:30 | |
*** bhavikdbavishi has quit IRC | 12:31 | |
*** bhavikdbavishi has joined #zuul | 12:35 | |
Shrews | pabelanger: 652778 lgtm. probably would have been cleaner to use in ('ssh', 'network_cli') in the connection_type comparisons, but still ok | 12:36 |
pabelanger | Shrews: ah, didn't think of it | 12:40 |
pabelanger | Shrews: also thanks! | 12:40 |
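The review comment above suggests collapsing the chained `connection_type` equality checks into a single membership test. A minimal sketch of that idiom (the function name and surrounding logic are hypothetical, not the actual nodepool code):

```python
# Hedged sketch of the review suggestion from 652778: use a membership
# test instead of repeated equality comparisons. needs_host_keys() is a
# made-up name for illustration only.
def needs_host_keys(connection_type):
    # instead of: connection_type == 'ssh' or connection_type == 'network_cli'
    return connection_type in ('ssh', 'network_cli')
```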
pabelanger | Shrews: do you think there will be a release of nodepool this week? | 12:46 |
*** jamesmcarthur has joined #zuul | 12:48 | |
Shrews | pabelanger: i don't think one is planned. normal procedure is to run it in production infra for a while before a release, so we'd probably have to schedule a restart first and run for a while before releasing with your change. | 12:51 |
pabelanger | +1 | 12:52 |
pabelanger | maybe after we stand down from zuul release :) | 12:52 |
Shrews | yeah. i think the task manager removal stuff has merged, but we aren't running that yet. so that may or may not be a big deal to contend with. | 12:54 |
*** gtema has quit IRC | 12:54 | |
Shrews | mordred may want to be around to help watch that when we do | 12:54 |
pabelanger | ack | 12:54 |
Shrews | then again, nl01 was restarted yesterday, so may not be an issue | 12:55 |
Shrews | or were all of them restarted? | 12:56 |
Shrews | "2019-04-15 23:20:07 UTC restarted nodepool-launcher on nl01 and nl02 due to OOM; restarted n-l on nl03 due to limited memory" | 12:56 |
* Shrews keeps fingers crossed | 12:57 | |
pabelanger | oh right | 12:57 |
pabelanger | we did it last night | 12:57 |
Shrews | launcher is not running on nl04 | 12:58 |
Shrews | moving to #-infra | 12:58 |
openstackgerrit | Merged openstack-infra/nodepool master: Gather host keys for connection-type network_cli https://review.openstack.org/652778 | 12:58 |
*** jamesmcarthur has quit IRC | 12:58 | |
*** bjackman has quit IRC | 13:00 | |
*** bjackman has joined #zuul | 13:00 | |
tristanC | mordred: so, could we get the zuul-operator project created? | 13:03 |
tristanC | And has anyone here used the executor zone feature yet? | 13:05 |
*** pcaruana has quit IRC | 13:07 | |
mordred | tristanC: sorry - I still need to write the spec - I've been slammed with RH internal things recently. I promise I'll get that done today or tomorrow | 13:10 |
tobiash | Shrews: so your launchers are already running the taskmanager changes? | 13:11 |
Shrews | tobiash: they are now | 13:11 |
tobiash | I wasn't sure if I want to be the first to try ;) | 13:11 |
pabelanger | tristanC: not yet, I've tested it locally a few weeks ago, but hoping to try it out in the next month or so | 13:12 |
tristanC | pabelanger: iiuc, the zone scope only happens for the execute method, wouldn't it make sense to disable merge job for zoned executor? | 13:13 |
mordred | Shrews: and they seem to be not crashing? that's great! | 13:14 |
pabelanger | tristanC: maybe, I cannot remember if we discussed that or not on IRC as improvement. It was more to make sure executors were closer to testing, but could see why you'd want that for mergers too | 13:15 |
pabelanger | and use private IPs only | 13:15 |
tristanC | pabelanger: i mean, not do merge job for zoned executors because they are not related to most of our git sources | 13:17 |
mordred | Shrews: speaking of ... there is a (very large) patch up for review in sdk that's about getting the image code to use the underlying resource primimtives: https://review.openstack.org/#/c/651534 - that should probably get decently deep review | 13:18 |
*** bhavikdbavishi has quit IRC | 13:23 | |
Shrews | mordred: i think there might be an issue with openstacksdk... or maybe nodepool | 13:26 |
Shrews | mordred: we've been trying to delete a server from ovh-gra1 for a long time that does not exist (10a646d6-fd03-4cca-ab97-ca137fafa745) | 13:26 |
*** bjackman has quit IRC | 13:27 | |
Shrews | mordred: it doesn't seem that nodepool is getting what it expects from get_server: http://paste.openstack.org/show/749354/ | 13:27 |
pabelanger | tristanC: yah, could see that, however in the use case i was thinking about, I didn't mind all mergers merging all git repos, was more about the push from executor to test node | 13:28 |
Shrews | mordred: we aren't throwing exceptions.NotFound() in cleanupServer there | 13:28 |
pabelanger | tristanC: I think you could propose a patch and see what others say | 13:28 |
Shrews | so we are always trying to delete it | 13:28 |
mordred | Shrews: hrm. what ARE we getting back I wonder? | 13:29 |
Shrews | mordred: yeah, that's the question | 13:29 |
mordred | blerg. we're throwing openstack.exceptions.ResourceNotFound | 13:30 |
mordred | lemme go look why | 13:30 |
*** tobiash has quit IRC | 13:31 | |
Shrews | mordred: oh yay. we don't handle that in nodepool | 13:31 |
mordred | well - we shouldn't be throwing it | 13:31 |
pabelanger | tristanC: I think there is a topic, say if the repo was marked private. Today we don't really support that in a multi-tenant setup. All mergers can see the content of all tenants, but there might be a use case when you want to limit the scope a merger / executor could see. Thankfully for now, that isn't the case for us. | 13:31 |
*** tobiash has joined #zuul | 13:32 | |
*** lennyb has quit IRC | 13:32 | |
*** pcaruana has joined #zuul | 13:33 | |
Shrews | mordred: i don't think we are actually | 13:33 |
mordred | Shrews: I just ran get_server_by_id against ovh gra1 with that id and it threw that exception at me | 13:34 |
Shrews | mordred: what i see in the log is nodepool.exceptions.ServerDeleteException, which happens when we wait for server deletion | 13:34 |
mordred | Shrews: if we're not throwing that in the nodepool code it's even weirder :) | 13:34 |
mordred | Shrews: WEIRD | 13:34 |
Shrews | i'm confused | 13:34 |
Shrews | mordred: what happens if you call get_server(server_id) | 13:36 |
mordred | checking | 13:37 |
mordred | Shrews: I properly get None | 13:37 |
Shrews | wow. now i'm more confused | 13:37 |
mordred | Shrews: remote: https://review.openstack.org/652995 Return None from get_server_by_id | 13:38 |
mordred | Shrews: taht should fix the get_server_by_id bug | 13:38 |
mordred | hrm. that probably wants a release note | 13:38 |
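The shape of the fix being discussed is to catch the not-found error inside `get_server_by_id` and return `None`, matching what `get_server` already does. A minimal sketch, with stand-in names (the real openstacksdk classes and call signatures differ):

```python
# Hedged sketch of the 652995 fix: return None for a missing server rather
# than letting the not-found exception escape to callers. ResourceNotFound
# and _fetch_server here are stand-ins, not the real openstacksdk API.
class ResourceNotFound(Exception):
    pass

def _fetch_server(server_id):
    # stand-in for the SDK's GET /servers/{id}; raises when the server
    # does not exist
    raise ResourceNotFound(server_id)

def get_server_by_id(server_id):
    try:
        return _fetch_server(server_id)
    except ResourceNotFound:
        # callers like nodepool's cleanup treat None as "already gone"
        return None
```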
Shrews | mordred: can you share your test code with me so i don't have to hack that up real quick | 13:42 |
Shrews | want to run it on nl04 | 13:42 |
*** bjackman has joined #zuul | 13:43 | |
*** lennyb has joined #zuul | 13:45 | |
*** bjackman_ has joined #zuul | 13:45 | |
*** bjackman has quit IRC | 13:48 | |
Shrews | nm. hacked one up | 13:50 |
Shrews | wow, this code is really not acting the way i'm expecting | 13:54 |
Shrews | cleanupNode() should be throwing a NotFound() exception, which would prevent waitForNodeCleanup() from being called. That doesn't appear to be the case. | 13:56 |
mordred | Shrews: yeah - i just ran c = openstack.connect() ; c.get_server_by_id() in a repl | 13:56 |
Shrews | mordred: nodepool clearly thinks that server exists for some reason | 14:01 |
*** bjackman_ has quit IRC | 14:13 | |
*** bjackman_ has joined #zuul | 14:14 | |
mordred | Shrews: is it that it's in zk - and since the get_server_by_id is misbehaving it never gets cleaned out of zk? | 14:22 |
*** bjackman_ has quit IRC | 14:23 | |
Shrews | mordred: it's in zk, and calling get_server on it doesn't appear to return None there, though it does in my test | 14:24 |
Shrews | I have to afk for a bit | 14:25 |
mordred | Shrews: eek. get_server not returning None seems extra weird | 14:26 |
*** gtema has joined #zuul | 14:29 | |
*** quiquell|rover is now known as quiquell|off | 14:41 | |
fungi | yikes, so i finally got tox -e py37 running on my workstation for zuul last night (by deleting the web/build symlink shipped in the git repo) and after running many, many tests it finally dies because of a too long subunit packet, or so i think based on the traceback... | 14:42 |
fungi | File "/home/fungi/src/openstack/openstack-infra/zuul/.tox/py37/lib/python3.7/site-packages/subunit/v2.py", line 208, in _write_packet | 14:42 |
fungi | raise ValueError("Length too long: %r" % base_length) | 14:42 |
fungi | ValueError: Length too long: 5375957 | 14:42 |
fungi | that's one massive subunit packet | 14:43 |
clarkb | I had those too until I got zk, mysql, and postgres running | 14:43 |
fungi | it starts its own services for those as fixtures though, right? | 14:44 |
fungi | or maybe i just dreamt that | 14:44 |
SpamapS | you did | 14:44 |
fungi | so... to run zuul unit tests on my workstation i need to have already running database and zookeeper services (with some predetermined configuration)? | 14:45 |
SpamapS | fungi:tools/test-setup.sh ftw ;) | 14:45 |
fungi | that script is frightening unless i run it on a throwaway vm | 14:45 |
SpamapS | fungi:actually if you docker, `cd tools && docker-compose up` | 14:45 |
fungi | it adds packages from random places which are not my distro | 14:45 |
fungi | heh... "docker-compose: command not found" | 14:46 |
fungi | time for me to install and learn about docker | 14:46 |
SpamapS | right, it's an add-on | 14:47 |
SpamapS | fungi:docker doesn't come with compose by default, it's a separate tool, but this does at least avoid polluting your dev box. | 14:49 |
corvus | i think it's a good fit for this case | 14:50 |
fungi | yep, noted | 14:50 |
clarkb | I run zk out of its tarball but I should convert it to a docker container since I have mysql and postgres running out of docker containers now too | 14:50 |
corvus | i will tag dc9347c1223e3c7eb0399889d03c5de9e854a836 as zuul 3.8.0 -- does that look good? | 14:50 |
corvus | clarkb, pabelanger, mordred, tobiash: ^ | 14:50 |
corvus | (i believe clarkb restarted opendev's zuul last night one commit behind that, and it seems to work) | 14:51 |
pabelanger | corvus: wfm | 14:51 |
clarkb | corvus: I did but only the scheduler | 14:51 |
tobiash | corvus: ++ | 14:51 |
clarkb | and web | 14:51 |
mordred | corvus: ++ | 14:52 |
corvus | i don't think there are any changes which require restarts of other components, so under the circumstances (urgent release required) i think we're good | 14:52 |
clarkb | and ya sha1 lgtm so ++ from me | 14:52 |
corvus | pushed | 14:53 |
clarkb | I'll send the email to the announce list. I expect I'll get moderated so you can let that through when the release is on pypi? | 14:54 |
*** bjackman_ has joined #zuul | 14:54 | |
corvus | clarkb: yep | 14:54 |
clarkb | sent | 14:56 |
*** jamesmcarthur has joined #zuul | 14:56 | |
*** bjackman_ has quit IRC | 14:59 | |
fungi | and we should switch the story to public at ~ the same time | 15:05 |
fungi | oh, looks like that's already done. cool | 15:06 |
fungi | i guess that was the same time the fixes were pushed to gerrit, so makes sense | 15:07 |
clarkb | https://pypi.org/project/zuul/#history has 3.8.0. Now just waiting for release notes to update? | 15:10 |
clarkb | https://zuul-ci.org/docs/zuul/releasenotes.html#relnotes-3-8-0 that just updated | 15:10 |
clarkb | corvus: ^ I think we are good to send the email? | 15:10 |
corvus | clarkb: done! | 15:12 |
corvus | clarkb, pabelanger: thanks! | 15:13 |
pabelanger | thanks all. starting upgrade now | 15:13 |
*** pcaruana has quit IRC | 15:27 | |
*** pcaruana has joined #zuul | 15:30 | |
*** pcaruana has quit IRC | 15:40 | |
pabelanger | Zuul version: 3.8.0 :) | 15:43 |
pabelanger | <3 xterm.js | 15:43 |
Shrews | mordred: umm, wow. this is what nodepool is getting back: http://paste.openstack.org/raw/749375/ | 15:44 |
Shrews | 'status': 'BUILD' | 15:44 |
Shrews | 'created': '2018-11-04T06:15:47Z' | 15:45 |
Shrews | why is nodepool getting that but our scripts are not? | 15:45 |
*** pcaruana has joined #zuul | 15:46 | |
openstackgerrit | Fabien Boucher proposed openstack-infra/zuul master: WIP - Pagure driver - https://pagure.io/pagure/ https://review.openstack.org/604404 | 15:46 |
clarkb | Shrews: could it be an instance that is in a building state that has effectively leaked in the cloud? | 15:48 |
clarkb | or are your scripts listing the same instance? | 15:48 |
clarkb | if ^ then maybe double check you are using the correct account | 15:48 |
Shrews | http://paste.openstack.org/show/749377/ | 15:49 |
Shrews | that prints None | 15:49 |
*** gtema has quit IRC | 15:49 | |
Shrews | using clouds.yaml from the nodepool home directory | 15:49 |
clarkb | cache maybe? though you restarted the process and the cache isn't persistent unless that changed | 15:50 |
Shrews | oh! the correct param is 'region-name', not 'region' | 15:52 |
Shrews | now i get it returned | 15:52 |
Shrews | so apparently deleting an instance in the BUILD state does not ever work | 15:53 |
Shrews | at least in this case | 15:53 |
pabelanger | Shrews: yah, you likely need to toggle the state, but need to be openstack admin to do that | 15:53 |
pabelanger | which usually means, something cloud side messed up | 15:54 |
pabelanger | this is the exact issue rdocloud has, but they've given the nodepool user admin access to the tenant | 15:54 |
Shrews | pabelanger: clarkb: so we need to contact someone at ovh to clean that up for us? | 15:54 |
pabelanger | Shrews: Yup, I've done that in the past | 15:55 |
*** pcaruana has quit IRC | 15:55 | |
pabelanger | usually email them, and they are very good at helping | 15:55 |
Shrews | what's the contact email? does it have to come from someone they know? | 15:56 |
pabelanger | Shrews: I can send you it, 1 sec | 15:57 |
clarkb | amorin in IRC is who we've been pinging lately. I'll PM you an email addr | 15:57 |
pabelanger | ah, clarkb is doing it | 15:58 |
*** bhavikdbavishi has joined #zuul | 16:00 | |
*** pcaruana has joined #zuul | 16:02 | |
mordred | Shrews: wow. we just don't know how to type region_name? | 16:03 |
mordred | Shrews: we're awesome | 16:04 |
Shrews | i haz shame | 16:04 |
*** jamesmcarthur has quit IRC | 16:04 | |
* SpamapS is excited for 3.8.0 as well | 16:04 | |
SpamapS | xterm.js is so nice | 16:04 |
mordred | Shrews: MAYBE we should consider updating sdk to accept both region and region_name ... I get this wrong and I'm core on the project | 16:04 |
mordred | Shrews: especially since passing region *works* but doesn't do what the user thinks it does | 16:05 |
Shrews | mordred: i would unobject to such a thing so hard | 16:05 |
Shrews | mordred: it's also a bit odd that 'openstack server list' does not return it | 16:06 |
Shrews | maybe they filter based on state | 16:06 |
*** pcaruana has quit IRC | 16:08 | |
Shrews | oh, it does return it. helps to use the right credentials | 16:12 |
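mordred's suggestion above — accepting both `region` and `region_name` — could be done by normalizing keyword arguments before building the connection. A toy sketch, where `connect()` is a stand-in and not the real `openstack.connect()` signature:

```python
# Hedged sketch of accepting either 'region' or 'region_name' and
# normalizing to the spelling the SDK actually uses. This connect() is
# illustrative only; the real openstack.connect() differs.
def connect(**kwargs):
    if 'region' in kwargs and 'region_name' not in kwargs:
        # silently passing 'region' today "works" but is ignored, which is
        # the confusing behavior described above
        kwargs['region_name'] = kwargs.pop('region')
    return kwargs  # a real implementation would build a Connection from these
```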
clarkb | yesterday when trying to make a symlink I kept running ls -s and wondering why it wasn't working | 16:13 |
corvus | clarkb: you probably meant "ln -s" | 16:13 |
*** jamesmcarthur has joined #zuul | 16:13 | |
clarkb | ha | 16:14 |
*** pcaruana has joined #zuul | 16:36 | |
*** mattw4 has joined #zuul | 16:39 | |
*** jamesmcarthur_ has joined #zuul | 16:48 | |
*** jamesmcarthur has quit IRC | 16:52 | |
*** bhavikdbavishi1 has joined #zuul | 16:55 | |
*** bhavikdbavishi has quit IRC | 16:56 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 16:56 | |
*** yolanda_ has quit IRC | 16:57 | |
*** bjackman_ has joined #zuul | 17:22 | |
*** bhavikdbavishi has quit IRC | 17:23 | |
*** bhavikdbavishi has joined #zuul | 17:25 | |
*** bhavikdbavishi has quit IRC | 17:32 | |
*** bjackman_ has quit IRC | 17:40 | |
*** mattw4 has quit IRC | 18:00 | |
clarkb | https://review.openstack.org/#/c/652727/ is a simple fix for the tox cover target in zuul | 18:04 |
*** electrofelix has quit IRC | 18:13 | |
*** jamesmcarthur_ has quit IRC | 18:25 | |
*** jamesmcarthur has joined #zuul | 18:26 | |
*** jamesmcarthur has quit IRC | 18:35 | |
*** irclogbot_0 has quit IRC | 18:39 | |
*** jamesmcarthur has joined #zuul | 18:40 | |
*** irclogbot_1 has joined #zuul | 18:42 | |
clarkb | hrm that failed on the py35 issues | 18:57 |
clarkb | we probably need to dig into why it is so unstable (and either pin it down to $cloud(s) or something else) | 18:58 |
clarkb | http://logs.openstack.org/27/652727/1/gate/tox-py35/3852b93/testr_results.html.gz that doesn't seem to be zk or db related | 18:58 |
pabelanger | guess we need py35 still for xenial | 18:59 |
clarkb | I'm not sure the errors are python versions specific | 18:59 |
clarkb | we just happen to roll the dice ineffectively on those nodes more often? | 18:59 |
pabelanger | yah, we hit a good streak last night | 19:06 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul-jobs master: Add test_setup_reset_connection setting https://review.openstack.org/653130 | 19:38 |
pabelanger | I am not sure if we want to do^, but that is something I do in local jobs to pickup group changes when using tools/setup.sh | 19:39 |
*** jamesmcarthur has quit IRC | 20:00 | |
*** weshay has quit IRC | 20:14 | |
*** weshay has joined #zuul | 20:15 | |
*** pcaruana has quit IRC | 20:19 | |
*** jamesmcarthur has joined #zuul | 21:01 | |
SpamapS | Hm, zuul upgrade broke us | 21:19 |
SpamapS | not necessarily 3.8's fault | 21:19 |
SpamapS | But the multi-ansible is clearly not working | 21:19 |
SpamapS | http://paste.openstack.org/show/749395/ | 21:19 |
SpamapS | 2.5 works, but 2.6 and 2.7 throw this weird optparse sys.exit problem | 21:21 |
mordred | SpamapS: yeah ... that's ... what? | 21:21 |
SpamapS | I can't figure it out | 21:22 |
mordred | SpamapS: I don't see any smoking guns in that paste WRT bad paths or anything | 21:22 |
SpamapS | They were installed with zuul-manage-ansible during our docker build | 21:22 |
SpamapS | I'm setting default ansible version to 2.5 | 21:23 |
SpamapS | Oh that's annoying... so... | 21:24 |
SpamapS | ansible seems to still use system python when doing localhost tasks | 21:25 |
SpamapS | so my boto3 installed in all the venvs does nothing :-P | 21:25 |
mordred | SpamapS: *ugh* | 21:25 |
mordred | SpamapS: fwiw- I can't find DEFAULT_LOCAL_TMP in ansible/constants.py | 21:26 |
SpamapS | mordred:yeah I think there's some weird interactions going on | 21:27 |
mordred | yeah | 21:27 |
SpamapS | Also it seems that while it did print that out | 21:27 |
SpamapS | it went ahead and used 2.7 just fine | 21:27 |
mordred | well - also - it's not in 2.5 - so that might not be a thing | 21:27 |
mordred | ok | 21:27 |
mordred | yeah | 21:27 |
mordred | I think magic somewhere might set that | 21:27 |
mordred | SpamapS: I'm trying to think of a way of setting ansible_python_interpreter to the venv python for localhost - without creating a localhost entry in the inventory | 21:29 |
mordred | SpamapS: so far I am not succeeding in having such an idea | 21:29 |
mnaser | SpamapS: use vars in the task/play? | 21:29 |
mnaser | or you can use a block too | 21:30 |
mordred | SpamapS: terrible idea ... what if we were to bind-mount $venv/bin/python in as /usr/bin/python to the bubblewrap | 21:30 |
mnaser | https://github.com/openstack/openstack-ansible-os_nova/blob/master/tasks/nova_service_setup.yml we delegate operations to the cloud to a specific host (mostly localhost) | 21:30 |
mnaser | nova_service_setup_host_python_interpreter: "{{ openstack_service_setup_host_python_interpreter | default((nova_service_setup_host == 'localhost') | ternary(ansible_playbook_python, ansible_python['executable'])) }}" | 21:30 |
mnaser | its funky but it works. | 21:30 |
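The Jinja ternary mnaser pasted picks the playbook's own interpreter when the setup host is localhost, and the remote host's discovered python otherwise, with an optional override. The same decision rendered as plain Python (all names here are illustrative, not Ansible API):

```python
# Hedged Python rendering of the interpreter-selection ternary above.
# pick_interpreter() and its parameters are made-up names for illustration.
def pick_interpreter(setup_host, playbook_python, remote_python,
                     override=None):
    if override is not None:
        # mirrors the explicit *_python_interpreter override variable
        return override
    # localhost tasks should use the python running the playbook (e.g. a
    # venv with boto3), not the system python ansible would otherwise pick
    return playbook_python if setup_host == 'localhost' else remote_python
```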
mordred | mnaser: I think the thing is that the allocation of which python to use in localhost tasks with multi-ansible in zuul is stuff that really shouldn't be in playbooks or roles | 21:31 |
mnaser | ah I see | 21:31 |
mnaser | that's more of a job definition thing? | 21:31 |
SpamapS | I already set ansible_python_interpreter=/usr/bin/python3 | 21:31 |
SpamapS | as a site var | 21:31 |
SpamapS | because I don't use any images that have a /usr/bin/python | 21:31 |
mnaser | that's a nice feeling | 21:32 |
mordred | yah - but that's not the python that has the boto installed in it | 21:32 |
SpamapS | It is now. ;) | 21:32 |
* SpamapS just rebuilt images w/ it | 21:32 | |
mordred | hahaha | 21:32 |
mordred | SpamapS: fair | 21:32 |
SpamapS | It was actually before too | 21:32 |
SpamapS | I removed it thinking they'd somehow magically use their own python | 21:32 |
mordred | I'm still stumped by your traceback though | 21:32 |
SpamapS | I always forget that Ansible doesn't do that | 21:32 |
corvus | that's what we do for gear, etc -- just install them systemwide | 21:32 |
corvus | but yeah, there's some weird dichotomy here since the executor has a bunch of ansible/python stuff it uses, yet also happens to serve dual-duty as a "remote" system for ansible which receives ansiballz | 21:34 |
SpamapS | Seems to only cause problems in the --version check | 21:35 |
SpamapS | so that was a red herring | 21:35 |
SpamapS | my real problem was lack of system-wide boto3 | 21:35 |
SpamapS | still.. nice to see colors in my log viewer | 21:38 |
SpamapS | I wish I had something that would make the static logs colorized too | 21:38 |
SpamapS | lots of folks complain about the color codes while looking for errors | 21:38 |
SpamapS | would be nice if there was some de-duping in the venvs. My zuul-executor image is now 700+MB | 21:48 |
SpamapS | oh, I see, that's because I'm doing it wrong and re-running zuul-manage-ansible | 21:49 |
SpamapS | the ones built right are only 312MB | 21:49 |
fungi | corvus et al: we want zuul deliverables moved to a zuul namespace during the opendev maintenance on friday, right? | 22:40 |
corvus | fungi: yes! want me to make a list? | 22:41 |
corvus | i think http://git.zuul-ci.org/cgit is it | 22:41 |
fungi | corvus: please do! i can hard-code it into my script | 22:41 |
fungi | oh, right, i can pull it that way too | 22:41 |
fungi | it's just one more file to ingest | 22:42 |
fungi | i'll do it that way | 22:42 |
corvus | ok probably just as easy :) | 22:42 |
fungi | yep! | 22:42 |
corvus | fungi: fyi there's an existing zuul/project-config which does not need to move | 22:43 |
fungi | yep, that's covered just fine. the only things forced to move if they *aren't* listed anywhere are things in the openstack namespace | 22:44 |
corvus | but that's the only thing that we pre-emptively created into the zuul namespace | 22:44 |
SpamapS | Hm, the Buildsets tab seems kind of.. | 22:57 |
SpamapS | pointless? | 22:57 |
pabelanger | I've been using it to quickly see which projects are pushing up PRs | 22:59 |
pabelanger | could I get a review on https://review.openstack.org/650880/ works around some python2.6 issues with validate-host role | 23:04 |
pabelanger | https://review.openstack.org/648815/ also fixes some stale files with prepare-workspace on static nodes | 23:05 |
*** jamesmcarthur has quit IRC | 23:06 | |
*** jamesmcarthur has joined #zuul | 23:07 | |
*** jamesmcarthur has quit IRC | 23:11 | |
*** rlandy|ruck is now known as rlandy|ruck|biab | 23:22 | |
*** sshnaidm is now known as sshnaidm|afk | 23:50 | |
*** rlandy|ruck|biab is now known as rlandy|ruck | 23:55 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!