openstackgerrit | Rui Chen proposed openstack-infra/nodepool feature/zuulv3: Apply floating ip for node according to configuration https://review.openstack.org/518875 | 11:52 |
openstackgerrit | Rui Chen proposed openstack-infra/nodepool feature/zuulv3: Fix nodepool cmd TypeError when no arguemnts https://review.openstack.org/519582 | 11:53 |
openstackgerrit | Andrea Frittoli proposed openstack-infra/zuul-jobs master: Add compress capabilities to stage artifacts https://review.openstack.org/509234 | 12:02 |
openstackgerrit | Andrea Frittoli proposed openstack-infra/zuul-jobs master: Add a generic process-test-results role https://review.openstack.org/509459 | 12:02 |
Shrews | mordred: I found one more instance of override-branch in that file. | 12:25 |
rcarrillocruz | so question | 13:17 |
rcarrillocruz | i see zuul-executor defaults to 'zuul' as connecting user | 13:17 |
rcarrillocruz | that can be changed with default-username | 13:18 |
rcarrillocruz | does this mean we don't allow remote_user in playbooks, or will it just be ignored? | 13:18 |
dmsimard | rcarrillocruz: You can set inventory-wide vars inside the job parameters, but I don't think you can set host-level vars (yet). The way I see it, this is something you would define inside a nodeset, not through the executor config | 13:20 |
dmsimard | rcarrillocruz: Can try defining "ansible_user: foo" in a job's vars and see if that works | 13:20 |
dmsimard | It'd be a good improvement to be able to define hostvars inside a nodeset IMO | 13:21 |
rcarrillocruz | some will be plumbed from nodepool i assume | 13:21 |
rcarrillocruz | so, there are changes in nodepool for having username and port | 13:21 |
rcarrillocruz | i assume zuul will consume those if defined, and in the end those become ansible_port / ansible_user | 13:22 |
rcarrillocruz | otoh, haven't seen anything for ansible_connection | 13:22 |
rcarrillocruz | tobias was working on windows support | 13:22 |
rcarrillocruz | we must somehow pass that up | 13:22 |
rcarrillocruz | as windows is mostly a winrm game | 13:23 |
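The plumbing discussed above — nodepool-provided username/port becoming Ansible connection variables, with winrm still missing — can be sketched roughly as below. This is an illustrative Python sketch, not Zuul's actual executor code; the node field names (`username`, `connection_port`, `connection_type`) and the `zuul` default user are assumptions taken from the conversation.

```python
# Hedged sketch: translate hypothetical nodepool node data into the
# Ansible connection variables the thread assumes zuul-executor would set.
# Field names here are illustrative, not Zuul's actual schema.

def node_to_ansible_vars(node, default_username="zuul"):
    """Build per-host Ansible vars from a nodepool node record."""
    host_vars = {
        "ansible_host": node["interface_ip"],
        # nodepool-provided username wins; otherwise the executor default
        "ansible_user": node.get("username") or default_username,
        "ansible_port": node.get("connection_port", 22),
    }
    # Windows nodes would need a different transport (winrm), which the
    # thread notes is not plumbed through yet.
    if node.get("connection_type") == "winrm":
        host_vars["ansible_connection"] = "winrm"
    return host_vars
```

For example, `node_to_ansible_vars({"interface_ip": "10.0.0.5"})` falls back to user `zuul` on port 22.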
odyssey4me | I've picked up what one might consider a bug for nodepool - not a serious one though. The min-ready state check will launch builds if it's not met, even if builds are already in progress to fulfill the min-ready state. | 13:24 |
odyssey4me | It's not much of an issue because nodes will get consumed at some point, so it'll rectify back to the min-ready state - and perhaps this is working as designed to more quickly fulfill tests if they're coming in thick and fast. | 13:25 |
dmsimard | rcarrillocruz: ah, yeah.. it's perhaps more of a nodepool thing, you're right. Sometimes the lines blur a bit between zuul and nodepool :) | 13:25 |
dmsimard | odyssey4me: nodepoolv3 ? | 13:26 |
odyssey4me | dmsimard yep | 13:26 |
dmsimard | Haven't had the chance to play with it much yet. In v2 the behavior seems fine, maybe they've changed the logic | 13:26 |
odyssey4me | it's only something you'd see on a quiet system, because you can deliberately hold nodes and manage the usage/depletion of the pool | 13:27 |
odyssey4me | the side-effect of the behaviour is that if you have 5 launchers, then when the min-ready quota is not met you'll have 5 launchers kick off builds to meet the min-ready quota, resulting in a bunch of nodes | 13:28 |
dmsimard | ohhhh, I see | 13:29 |
dmsimard | I guess we haven't scaled beyond one launcher yet :p | 13:30 |
dmsimard | We're getting rid of jenkins soon with zuul v3.. didn't want to needlessly migrate to zuul-launcher knowing v3 was coming | 13:30 |
dmsimard | we're actually reaching the limits of a single jenkins master, seeing a couple issues | 13:30 |
odyssey4me | yeah, I'm building a multi-region setup so that if one region fails the others can still do the work needed | 13:35 |
dmsimard | sounds fun | 13:41 |
Shrews | odyssey4me: yeah, the min-ready mechanism is not an exact thing. we try to guarantee "at least" min-ready nodes, but we may actually end up building more. especially noticeable the more launchers are in use | 13:51 |
dmsimard | Shrews: could that be handled through zookeeper ? like, if a launcher claims to be launching a node, the other launchers take it into account ? | 13:51 |
Shrews | dmsimard: possibly. we'd have to examine the request queue along with the current node count (we don't consider the req queue right now). it's still not going to be an exact thing | 13:52 |
odyssey4me | dmsimard given that this is desirable behaviour in a busy environment, and perhaps less desirable behaviour in some environments, it might be nice to have two algorithms... something like optimistic and conservative | 13:52 |
Shrews | improvements to it are welcome | 13:52 |
odyssey4me | conservative takes the queue into account, and optimistic doesn't :) | 13:53 |
Shrews | it's difficult to capture an exact state of the system without locking the entire system | 13:53 |
odyssey4me | yeah, that's true | 13:55 |
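The two policies odyssey4me proposes can be sketched as a simple per-launcher calculation. A hedged sketch — the function and the optimistic/conservative split are hypothetical, not Nodepool's actual logic:

```python
def nodes_to_launch(min_ready, ready, building, policy="optimistic"):
    """How many nodes a launcher should start to satisfy min-ready.

    'optimistic' ignores nodes other launchers are already building
    (the current behaviour described above, which over-builds on quiet
    systems with several launchers); 'conservative' counts them.
    """
    deficit = min_ready - ready
    if policy == "conservative":
        deficit -= building
    return max(0, deficit)
```

With min-ready 5, 2 ready and 3 already building elsewhere, an optimistic launcher still starts 3 more, while a conservative one starts none — which is exactly the over-build the thread describes.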
odyssey4me | hmm, it looks like the image list cleans itself, which is neat - but it doesn't clean up the cloudfiles uploads for rax... just the saved image list | 14:16 |
odyssey4me | and the files on disk | 14:16 |
SpamapS | Shrews: all one needs is a stat cache that is locked for incr/decr/calc. | 14:29 |
SpamapS | But I'm kind of with you.. patches accepted, but for the most part it's not that terrible if one ends up with a few extra nodes on a busy system. | 14:29 |
Shrews | keeping a stat cache in sync with reality could be... interesting | 14:31 |
mordred | odyssey4me: oh - thanks for the reminder on that ... | 14:32 |
mordred | odyssey4me: I verified with the glance team in sydney that glance imports from swift copy the data, so the swift objects are not needed once the import is complete | 14:32 |
SpamapS | Shrews: not too many entry points to incr/decr ready/building, and you can always cross-tab check it for bugs occasionally (requiring a broader lock) | 14:33 |
mordred | odyssey4me: I think there are two missing things ... one is cleaning up objects when an image is deleted, if the objects were created for the user by create_image ... | 14:33 |
mordred | odyssey4me: the second is that I *actually* think we should delete the objects once create_image has succeeded, since once the image is imported they are no longer needed by anything | 14:34 |
mordred | odyssey4me: one is purely a shade patch, one is a shade patch to add a shade api call and then a nodepool patch using it from nodepool | 14:35 |
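The ordering mordred describes — delete the swift staging objects only after `create_image` has succeeded, because glance copies rather than references the data — can be sketched as below. The callables stand in for cloud API calls; this is an illustration of the ordering, not shade's real signatures.

```python
def import_then_cleanup(create_image, delete_object, name, objects):
    """Order matters: delete the swift staging objects only after
    create_image has succeeded, since glance copies (not references)
    the uploaded data during import."""
    image = create_image(name)                     # may raise; objects kept on failure
    deleted = [delete_object(obj) or obj for obj in objects]
    return image, deleted
```

If `create_image` raises, the staging objects are left in place, so a failed import can still be retried or inspected.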
odyssey4me | mordred where's an appropriate place to register a bug for that as a reminder? | 14:37 |
Shrews | SpamapS: there are actually more entry points than that (e.g., sub-statuses for assigning/unassigning ready nodes). but i don't think it's worth the effort anyway | 14:38 |
mordred | odyssey4me: no need - I'm writing them right now | 14:48 |
odyssey4me | mordred awesome, thanks :) | 14:55 |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack-infra/nodepool feature/zuulv3: [docs] Correct default image name https://review.openstack.org/520628 | 15:50 |
rcarrillocruz | hey folks, how does the log push work? does the executor pull logs from nodepool nodes locally, then push to the log server? or is it maybe an indirect rsync, where the executor drives the copy from the nodepool node to the log server? | 16:46 |
Shrews | mordred: with the native devstack job, what's the proper way to run the post_test_hook? | 16:50 |
Shrews | i'm not finding any roles to do that | 16:50 |
mordred | Shrews: I just put the content of the post_test_hook into the run playbook for shade's devstack jobs | 16:52 |
mordred | Shrews: post_test_hook itself is a devstack-gate thing | 16:52 |
Shrews | mordred: i'm not finding that | 16:53 |
mordred | Shrews: so - basically - after the run-devstack role - just kind of do whatever | 16:53 |
mordred | Shrews: playbooks/devstack/pre.yaml and playbooks/devstack/run.yaml | 16:53 |
mordred | Shrews: for shade, I had it run devstack in pre-run, since we're not testing devstack with shade, we just need a devstack so that we can run shade's tests | 16:53 |
mordred | Shrews: I'd imagine the same pattern could hold for nodepool | 16:54 |
mordred | Shrews: the existing nodepool devstack-gate job installs nodepool via a devstack plugin though - so if we keep that model, you'd want to do the run-devstack role in the run playbook, as a patch to nodepool could cause the install to stop working | 16:55 |
Shrews | mordred: oh, our plugin for shade doesn't really do much other than install shade. nodepool's does a bit more. | 16:55 |
mordred | Shrews: BUT - we could also change approaches and just use ansible to install nodepool after devstack is done rather than as a devstack plugin | 16:55 |
mordred | Shrews: in fact, we could even consider using openstack/ansible-role-nodepool to do the nodepool install | 16:57 |
mordred | it has support for installing from git already - if we did that, we could add the nodepool devstack job to openstack/ansible-role-nodepool too and we'd have good validation of both things | 16:58 |
mordred | AND - we could also consider making jobs for non-devstack installs - like an OSA-nodepool job - to get a little more coverage of different types of clouds | 16:59 |
mordred | anyway - just thoughts | 16:59 |
* Shrews wants to just start simple here for now | 16:59 | |
Shrews | i think the thing to do is just call check_devstack_plugin.sh using 'command' | 17:00 |
mordred | nod. then I think just doing run-devstack role in a run playbook, and then a playbook/role to run post_test_hook is your best bet | 17:00 |
odyssey4me | windmill does a single node with zookeeper and all that | 17:00 |
mordred | yah | 17:00 |
odyssey4me | it's built for testing, so it could work well | 17:01 |
mordred | odyssey4me: yah - I think that'll be a good followup once Shrews has the simple conversion working | 17:01 |
odyssey4me | for production the ansible-role-zookeeper lacks cluster support, so we're using a different one to cover the zookeeper setup | 17:01 |
mordred | odyssey4me: is that our/windmill's ansible-role-zookeeper? should we maybe shift to the role you're using? | 17:02 |
mordred | odyssey4me, Shrews: btw ... | 17:02 |
mordred | remote: https://review.openstack.org/520652 Cleanup objects that we create on behalf of images | 17:02 |
mordred | remote: https://review.openstack.org/520653 Add method to cleanup autocreated image objects | 17:02 |
odyssey4me | yeah, after doing some inspection, https://github.com/AnsibleShipyard/ansible-zookeeper meets our needs at this stage - it's pretty straightforward to setup | 17:03 |
odyssey4me | had to set two vars https://gist.github.com/odyssey4me/d1a202d6e340d165513f9cec1d19d5f0#file-setup-nodepool-yml-L105-L106 and pre-install a jre https://gist.github.com/odyssey4me/d1a202d6e340d165513f9cec1d19d5f0#file-setup-nodepool-yml-L128 | 17:03 |
odyssey4me | oh, and make sure the nodes can resolve each other: https://gist.github.com/odyssey4me/d1a202d6e340d165513f9cec1d19d5f0#file-setup-nodepool-yml-L189-L210 | 17:04 |
openstackgerrit | Monty Taylor proposed openstack-infra/nodepool feature/zuulv3: Run image object autocleanup after uploading images https://review.openstack.org/520657 | 17:05 |
mordred | and ^^ that consumes it | 17:05 |
mordred | odyssey4me: the resolve each other step should be skippable if DNS yeah? | 17:06 |
odyssey4me | yep | 17:06 |
mordred | odyssey4me: also - your paste reminds me I need to follow up on seeing if qemu can make working vhd images now .. | 17:06 |
odyssey4me | although you might still have to correct the local /etc/hosts file if something added a private IP as the first entry for the host name | 17:06 |
odyssey4me | mordred I tried without vhd-util from the infra ppa and it failed | 17:07 |
mordred | I think I made a patch to qemu to support it - but never finished validating that it all worked. sigh | 17:07 |
odyssey4me | I then tried the distro file, but it still has no 'convert' action | 17:07 |
mordred | odyssey4me: yah - it definitely needs a patched/newer qemu ... but I honestly don't remember if I even got as far as submitting the patch ... | 17:08 |
odyssey4me | that's why the long treatise comment to explain the task, otherwise others will waste their time too | 17:08 |
mordred | ++ | 17:08 |
odyssey4me | I have another gist somewhere giving the procedure for compiling that toolset too. IIRC it needs a physical host. :/ | 17:08 |
odyssey4me | so I was very happy to find the infra ppa :) | 17:09 |
odyssey4me | tyvm :) | 17:09 |
mordred | you're welcome! I'm sad it's needed, but ... yeah- it's a real mess | 17:09 |
clarkb | at this point even aws is giving up on it so ya | 17:11 |
mordred | odyssey4me: in case you get bored ... https://github.com/emonty/do-not-use-patched-qemu | 17:11 |
odyssey4me | hahaha | 17:11 |
mordred | odyssey4me: that has a repo with a debian package patch that should theoretically add support to qemu based on conversations with BobBall about what xenserver is looking for in the image metadata header thing | 17:12 |
odyssey4me | starred, so that when I find myself struggling to sleep I can give it a whirl | 17:12 |
mordred | (qemu can make vhd images, but xenserver expects creator_app to be 'tap') | 17:12 |
odyssey4me | srsly - that's it? | 17:13 |
mordred | odyssey4me: yah - also something about blanking out a batmap flag | 17:13 |
mordred | https://github.com/emonty/do-not-use-patched-qemu/blob/master/debian/patches/xenserver-support.patch is the patch itself | 17:14 |
odyssey4me | ja, reading it now | 17:14 |
mordred | it's ... it's really sad | 17:14 |
odyssey4me | not sure which is worse - having to patch vhd-util or having to patch qemu | 17:15 |
odyssey4me | I'm thinking the latter, actually. | 17:15 |
mordred | yah. well - the vhd-util patch is unacceptably bad and won't ever get upstreamed | 17:15 |
mordred | if we can verify the qemu patch works, I'm pretty sure we can get it landed upstream | 17:15 |
mordred | I just keep getting distracted from that task | 17:16 |
mordred | so maybe just maybe we could get lucky enough to have the next ubuntu LTS have a qemu that can make vhd images | 17:16 |
odyssey4me | it's probably already too late for that unless you push for it now until the release | 17:18 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: [docs] Correct default image name https://review.openstack.org/520628 | 17:28 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: DNM: Convert from legacy to native devstack job https://review.openstack.org/520664 | 17:39 |
Shrews | heh. i expected ^^^ to not work, but i at least expected some sort of error from zuul | 18:11 |
* Shrews makes tea to ponder this | 18:12 | |
Shrews | oh, i guess no playbooks, no run. | 18:14 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: WIP: Convert from legacy to native devstack job https://review.openstack.org/520664 | 18:20 |
* jlk waves | 19:06 | |
jlk | sorry I've not been around lately, I've had to go do some other tasks for a bit. But I think I have some time to throw at Zuul again. Cranking up the gertty. | 19:07 |
jlk | One thing I'm very curious about is whether there is appetite yet to think/talk/prototype Nodepool drivers, specifically a k8s driver. | 19:08 |
jlk | I know SpamapS would be very interested in such a thing. And I might be able to throw time into it | 19:08 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: WIP: Convert from legacy to native devstack job https://review.openstack.org/520664 | 19:18 |
* jlk hopes he doesn't step on toes by giving out some +3s | 19:19 | |
jlk | jeblair: mordred: is there anything I should be aware of before doing (more) +3s? | 19:20 |
SpamapS | jlk: yes I want a k8s driver like yesterday ;) | 19:20 |
SpamapS | I mean.. honestly.. I have the cloud capacity to run 20x1GB VMs, no problem | 19:21 |
SpamapS | but feels a bit silly to use a 1GB VM for what takes about 64MB of RAM (run a ruby markdown linter.. or a yaml syntax checker) | 19:21 |
jlk | yeah, you at least have an OpenStack to play with. I may not. | 19:22 |
jlk | What does Zuul do when there is a set of jobs running for a given change:patchset and a new patchset for the change comes in? Does it cancel the running jobs? I know it'll not run a NEW change:jobset if, when it starts, it finds that the patchset isn't the newest one | 19:24 |
dmsimard | rcarrillocruz: the logs are pulled from the nodepool VM by the executor and then pushed to the logserver | 19:25 |
dmsimard | we were discussing that just yesterday in fact | 19:25 |
SpamapS | jlk: it aborts yes | 19:25 |
dmsimard | rcarrillocruz: http://eavesdrop.openstack.org/irclogs/%23zuul/%23zuul.2017-11-15.log.html#t2017-11-15T21:06:33 | 19:25 |
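The two-stage copy dmsimard describes — executor pulls logs from the node to a local staging dir, then pushes them to the log server — can be sketched as the pair of rsync invocations involved. Hosts and paths here are illustrative assumptions, not Zuul's actual layout:

```python
def log_sync_commands(node_host, logserver, build_uuid):
    """Return the two rsync invocations for the two-stage log copy:
    first pull from the test node to the executor's staging dir, then
    push from staging to the log server. Paths are illustrative."""
    staging = f"/var/lib/zuul/builds/{build_uuid}/logs/"
    pull = ["rsync", "-a", f"{node_host}:~/zuul-output/logs/", staging]
    push = ["rsync", "-a", staging, f"{logserver}:/srv/static/logs/{build_uuid}/"]
    return pull, push
```

The executor being in the middle means the log server never needs credentials for (or connectivity to) the ephemeral nodepool nodes.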
SpamapS | jlk: well, cancels them yes | 19:25 |
jlk | okay | 19:25 |
SpamapS | jlk: it's kind of a nice feature actually. :) | 19:26 |
jlk | indeed. | 19:26 |
jlk | keeps the queue small | 19:26 |
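The cancel-on-new-patchset behaviour jlk and SpamapS discuss can be sketched minimally. The data structures here are hypothetical, not Zuul's scheduler internals:

```python
def on_new_patchset(running_builds, change, new_patchset):
    """When a new patchset arrives, cancel builds for older patchsets
    of the same change. 'running_builds' maps (change, patchset) to a
    list of job names; returns the cancelled job names."""
    cancelled = []
    for (c, ps), jobs in list(running_builds.items()):
        if c == change and ps < new_patchset:
            cancelled.extend(jobs)
            del running_builds[(c, ps)]   # stop wasting nodes on stale code
    return cancelled
```

Builds for other changes are untouched, which is what keeps the queue small without affecting unrelated work.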
SpamapS | jlk: if you don't have an openstack, you can play by doing trusted jobs on the executor | 19:27 |
pabelanger | Shrews: Hmm, do you mind helping me see why an autohold didn't work? | 19:27 |
SpamapS | no nodes required ;) | 19:27 |
pabelanger | sudo zuul autohold --tenant openstack --project openstack/requirements --job build-wheel-mirror-centos-7 --reason pabelanger | 19:27 |
pabelanger | was the command | 19:27 |
pabelanger | and trying to hold a job failure in periodic pipeline | 19:27 |
pabelanger | | openstack | git.openstack.org/openstack/requirements | build-wheel-mirror-centos-7 | 1 | pabelanger | | 19:27 |
pabelanger | that is what I see in autohold-list but nothing in nodepool | 19:27 |
Shrews | pabelanger: has that job run and failed since you set it? | 19:30 |
pabelanger | Shrews: yup | 19:30 |
pabelanger | let me get job | 19:30 |
pabelanger | http://logs.openstack.org/78/520178/5/periodic/build-wheel-mirror-centos-7/7b44080 | 19:31 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: web: add /tenants route https://review.openstack.org/503268 | 19:31 |
Shrews | pabelanger: i have no idea. maybe has something to do with being periodic? I don't see it being de-registered in zuul logs. | 19:38 |
Shrews | pabelanger: but the ara logs show this error: tar (child): bzip2: Cannot exec: No such file or directory | 19:38 |
Shrews | pabelanger: i don't have enough knowledge of inner zuul workings to understand if there is a different code path for periodic jobs | 19:41 |
pabelanger | Shrews: yah, there is a build failure, and wanted to get into node and see why | 19:41 |
pabelanger | kk | 19:41 |
pabelanger | no rush, I have a patch up to collect logs | 19:41 |
pabelanger | figured I ask | 19:41 |
Shrews | first i've seen of autohold not working | 19:42 |
Shrews | pabelanger: let's ask jeblair when he returns (i'll be out next week so won't be able to do so) | 19:43 |
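How autohold matching might work, as a hedged sketch: requests are keyed on (tenant, project, job), mirroring the CLI invocation above, and a matching failed build consumes one hold. This is a simplification, not Zuul's actual code path — whether that path even fires for periodic pipelines is exactly what's in question above.

```python
def apply_autohold(requests, tenant, project, job):
    """On a failed build, consume one hold from a matching autohold
    request (if any) and report whether the node should be kept.
    'requests' maps (tenant, project, job) -> {"count": remaining}."""
    req = requests.get((tenant, project, job))
    if not req or req["count"] <= 0:
        return False          # no hold: node is cleaned up as usual
    req["count"] -= 1
    return True               # keep the node for debugging
```

If the failed build's key differs from the registered one in any component (e.g. a differently qualified project name), the lookup misses and no node is held — consistent with the symptom pabelanger reports.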
Shrews | why does our bindep role default to 'test' profile and not the default profile? | 19:44 |
Shrews | that seems counterintuitive | 19:45 |
Shrews | i'd think that the default would be, ya know, the default | 19:46 |
pabelanger | think the idea was to have 'bindep test' pull in things needed for testing | 19:52 |
pabelanger | and 'bindep production' for deployment | 19:53 |
pabelanger | but, it hasn't gone that way yet | 19:53 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: web: add /{tenant}/status route https://review.openstack.org/503269 | 20:01 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Switch to threading model of socketserver https://review.openstack.org/517437 | 20:15 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Remove zuul-migrate job https://review.openstack.org/516028 | 20:15 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Improve error handling in webapp /keys https://review.openstack.org/517053 | 20:16 |
SpamapS | oh nice, web improvements. :-D | 20:27 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: More documentation for enqueue-ref https://review.openstack.org/518662 | 20:29 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Make enqueue-ref <new|old>rev optional https://review.openstack.org/518663 | 20:29 |
openstackgerrit | Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Override HOME environment variable in bubblewrap https://review.openstack.org/519654 | 20:29 |
tobiash | jlk: I left a comment on your comment at https://review.openstack.org/#/c/517078/ | 20:45 |
mordred | woot! web things landing | 20:51 |
mordred | pabelanger: I think we need to update our puppet to add the new js lib the dashboard uses | 20:51 |
* mordred apologizes for being not super around today - day of calls ... | 20:51 | |
rcarrillocruz | Cool, thx dmsimard | 21:01 |
pabelanger | mordred: kk, I can look into that | 21:33 |
jlk | tobiash: oh weird, gertty is showing them in a weird order | 23:29 |
jlk | doesn't list one as a parent of the other | 23:29 |