rcarrillocruz | folks, won't attend tomorrow meeting, i'm in Munich in the new hire orientation thingy for redhat | 00:07 |
rcarrillocruz | jeblair: ^ | 00:07 |
*** jamielennox is now known as jamielennox|away | 00:24 | |
*** jamielennox|away is now known as jamielennox | 00:50 | |
*** saneax-_-|AFK is now known as saneax | 06:02 | |
*** Cibo_ has joined #zuul | 07:28 | |
*** abregman has joined #zuul | 08:34 | |
*** hashar has joined #zuul | 08:50 | |
*** abregman has quit IRC | 08:56 | |
*** markmcd has quit IRC | 09:17 | |
*** jamielennox is now known as jamielennox|away | 09:48 | |
*** jamielennox|away is now known as jamielennox | 10:24 | |
*** hashar has quit IRC | 10:37 | |
*** hashar has joined #zuul | 10:38 | |
*** herlo has quit IRC | 12:08 | |
*** herlo has joined #zuul | 12:08 | |
*** isaacb has joined #zuul | 12:20 | |
*** isaacb_ has joined #zuul | 12:20 | |
*** isaacb_ has left #zuul | 12:21 | |
jkt | hi there, I've upgraded and rebooted my zuul-server, and I notice that it's now trying to access my private projects over https without proper credentials, https://paste.kde.org/pjm52hxec | 13:17 |
jkt | what confuses me is that this code apparently hasn't been touched since 2015, and I only installed this service maybe a month ago | 13:18 |
jkt | ah well, so according to the logs, this has always been the case, interesting | 13:24 |
*** tristanC_ has joined #zuul | 13:30 | |
*** herlo_ has joined #zuul | 13:31 | |
*** tflink has quit IRC | 13:32 | |
*** hashar has quit IRC | 13:32 | |
*** tristanC has quit IRC | 13:32 | |
*** herlo has quit IRC | 13:32 | |
*** tflink has joined #zuul | 13:32 | |
*** hashar has joined #zuul | 13:32 | |
Shrews | pabelanger: So, I retract my retraction. I think the failure you're seeing for min-ready=2/max-servers=1 is valid (and we should change the config). | 14:27 |
Shrews | hrm... we might be able to make it work, though, by not requesting more than 1 server at a time for min-ready | 14:30 |
Shrews | that would also solve the "splitting up min-ready requests" issue | 14:33 |
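A minimal sketch of the approach Shrews describes (function and field names are hypothetical, not nodepool's actual API): instead of one request asking for min-ready nodes all at once, issue one single-node request per needed node, so a provider capped at max-servers=1 can still satisfy min-ready=2 one node at a time.

```python
def split_min_ready(label, min_ready):
    """Return one single-node request per node needed for min-ready."""
    return [{"node_types": [label], "count": 1} for _ in range(min_ready)]

# min-ready=2 becomes two independent requests instead of one request for 2.
requests = split_min_ready("ubuntu-xenial", 2)
```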
*** jamielennox is now known as jamielennox|away | 14:39 | |
*** abregman has joined #zuul | 14:41 | |
*** openstackgerrit has joined #zuul | 15:18 | |
Shrews | pabelanger: https://review.openstack.org/433127 | 15:19 |
pabelanger | Shrews: will look shortly | 15:20 |
*** saneax is now known as saneax-_-|AFK | 15:21 | |
Shrews | pabelanger: no rush. solves the issue you found and fixes something jeblair wanted, too | 15:22 |
pabelanger | yay | 15:22 |
Shrews | grr, why did that fail | 15:43 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Split up min-ready requests to 1 node per request https://review.openstack.org/433127 | 15:45 |
Shrews | oooh, it's a race | 15:49 |
isaacb | pabelanger: is there a way to temporarily disable a job in zuul without editing the layout file and reloading the zuul services? | 15:51 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Split up min-ready requests to 1 node per request https://review.openstack.org/433127 | 16:00 |
*** abregman has quit IRC | 16:00 | |
pabelanger | isaacb: no, we don't have anything via the CLI today | 16:08 |
isaacb | pabelanger: ok, thanks | 16:08 |
*** abregman has joined #zuul | 16:13 | |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Split up min-ready requests to 1 node per request https://review.openstack.org/433127 | 16:27 |
pabelanger | Shrews: ya, I prefer the split approach over updating our config file. | 16:37 |
Shrews | pabelanger: there's a problem with it i have to track down | 16:37 |
*** abregman is now known as abregman|afk | 16:38 | |
pabelanger | k | 16:38 |
*** isaacb has quit IRC | 16:42 | |
*** saneax-_-|AFK is now known as saneax | 16:52 | |
jeblair | mordred: in 428798 i pointed out the error that's causing tests to fail; you want to update? | 16:56 |
mordred | jeblair: nope. I don't believe in tests | 17:05 |
mordred | jeblair: oh holy hell. thank you | 17:13 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Add action plugins to restrict untrusted execution https://review.openstack.org/428798 | 17:13 |
*** abregman|afk has quit IRC | 17:13 | |
jeblair | mordred: green! | 17:20 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Split up min-ready requests to 1 node per request https://review.openstack.org/433127 | 17:23 |
Shrews | jeblair: fyi, before i forget (again), integration test is failing b/c of provider authentication issues, if you're curious. That should be a simple .yaml change, yeah? | 17:28 |
jeblair | mordred: +2; who else should review it? | 17:28 |
jeblair | Shrews: i think so | 17:29 |
jeblair | Shrews: you want me to poke at it or do you want to? | 17:29 |
Shrews | k. i'll try to look at it again later. i tried adding a 'cloud' line to it last week, but that didn't work | 17:29 |
Shrews | jeblair: https://review.openstack.org/431727 ... not sure what the correct solution is | 17:31 |
jeblair | Shrews: it's intended to use the fakes which are triggered by "auth-url: 'fake'" i believe | 17:32 |
jeblair | Shrews: have a link to a failing job? | 17:33 |
Shrews | jeblair: http://logs.openstack.org/27/433127/1/experimental/gate-zuul-nodepool/0a0f934/testr_results.html.gz | 17:33 |
Shrews | err, actual log: http://logs.openstack.org/27/433127/1/experimental/gate-zuul-nodepool/0a0f934/logs/logs/nodepool-launcher.log | 17:34 |
jeblair | Shrews: ah, yeah, so it's trying to use an actual provider not a fake one | 17:34 |
mordred | jeblair: might be nice for jlk and/or SpamapS to take a peek | 17:34 |
jeblair | (i really need to add a test that tests that we don't break running the fake providers in a real daemon, though, i guess if we make this one voting, we will have one :) | 17:35 |
* SpamapS rouses | 17:37 | |
Shrews | pabelanger: ok, think i fixed the issue in 433127. we had a race on re-using existing READY nodes. | 17:37 |
jeblair | Shrews: the problem is that _updateProvider is instantiating a ProviderManager directly instead of using the helper function get_provider_manager | 17:38 |
Shrews | jeblair: ugh | 17:38 |
jeblair | Shrews: i will patch | 17:38 |
Shrews | jeblair: i love that there are 3 ways to instantiate a ProviderManager, btw | 17:39 |
pabelanger | Shrews: looking | 17:39 |
Shrews | still need to remove the static methods at some point | 17:39 |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Use helper function to instantiate ProviderManager https://review.openstack.org/433212 | 17:39 |
Shrews | jeblair: thx | 17:41 |
jeblair | Shrews: yep. i'd be happy to see it simplified. to be fair, there used to only be one -- the static method. you're changing things, which is fine, but if you do that, you can expect the transition period to be confusing. :) | 17:41 |
Shrews | to be fair, i am always confused | 17:41 |
jeblair | Shrews: i left 'check experimental' on that, we'll see if i'm confused :) | 17:42 |
Shrews | \o/ | 17:42 |
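An illustrative sketch of the bug jeblair describes and the fix in 433212 (class and function names are assumptions modeled on the log, not nodepool source): if every caller goes through one factory helper, tests can rely on the sentinel `auth-url: 'fake'` to get a fake manager, whereas instantiating `ProviderManager` directly bypasses that and hits a real provider.

```python
class ProviderManager:
    def __init__(self, provider):
        self.provider = provider
        self.fake = False

class FakeProviderManager(ProviderManager):
    def __init__(self, provider):
        super().__init__(provider)
        self.fake = True

def get_provider_manager(provider):
    # Tests mark fake providers with a sentinel auth URL; only the
    # helper knows this, so direct instantiation misses the fakes.
    if provider.get("auth-url") == "fake":
        return FakeProviderManager(provider)
    return ProviderManager(provider)

mgr = get_provider_manager({"auth-url": "fake"})
```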
pabelanger | Shrews: left a question on 433127. | 17:43 |
pabelanger | Shrews: jeblair: with nodepool, will we have 2 daemons or 3? | 17:44 |
pabelanger | nodepool, nodepool-builder, nodepool-launcher? | 17:44 |
jeblair | pabelanger: 2: launcher and builder | 17:45 |
Shrews | pabelanger: replied! | 17:45 |
pabelanger | jeblair: k | 17:45 |
jeblair | no more split daemon nonsense :) | 17:45 |
jeblair | (we can split by provider if needed for performance reasons) | 17:45 |
pabelanger | ya, wasn't sure if we needed a 3rd for scheduling | 17:45 |
pabelanger | but sounds like each launcher will have something | 17:45 |
jeblair | pabelanger: nope, they're self-sufficient | 17:45 |
pabelanger | very cool | 17:46 |
mordred | SpamapS, jlk: if you do look at https://review.openstack.org/428798 keep in mind roles aren't handled/managed yet, so are not covered in this | 17:51 |
mordred | jlk: do playbooks have an implicit role lookup path relative to playbook location? | 17:52 |
Shrews | jeblair: that seemed to work, though it will need https://review.openstack.org/431719 to pass now | 17:57 |
jeblair | Shrews: 719 depends on an abandoned change? 431727 | 18:02 |
Shrews | jeblair: oops. lemme remove that depends-on | 18:03 |
*** hashar has quit IRC | 18:04 | |
openstackgerrit | David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Supply label name to Nodes https://review.openstack.org/431719 | 18:04 |
Shrews | jeblair: thx for pointing that out ^^^ | 18:05 |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Use helper function to instantiate ProviderManager https://review.openstack.org/433212 | 18:08 |
jeblair | Shrews: i approved your label change, but also depended the helper function on it, will recheck that now | 18:08 |
Shrews | jeblair: w00t! success | 18:15 |
jeblair | Shrews: yay! | 18:15 |
jeblair | i'm going through the outstanding nodepool stack now | 18:16 |
jeblair | Shrews: can you give your +1 or +2 to 431647? | 18:16 |
Shrews | jeblair: done | 18:17 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Supply label name to Nodes https://review.openstack.org/431719 | 18:17 |
jeblair | pabelanger: do you want to review 433127? | 18:20 |
pabelanger | sure | 18:20 |
jeblair | pabelanger: that and 433212 are ready for your +3 if you want to give it | 18:21 |
pabelanger | +3 | 18:22 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Update nodepool 'list' command https://review.openstack.org/431647 | 18:23 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Update nodepool hold to use zookeeper https://review.openstack.org/431756 | 18:24 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Re-enable test_dib_image_pause / test_dib_image_upload_pause https://review.openstack.org/432447 | 18:24 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Re-enable test_dib_image_delete test https://review.openstack.org/432606 | 18:24 |
Shrews | pabelanger: If you want to focus on test re-enablement, I can get to work on a node deleter thread | 18:25 |
pabelanger | Shrews: WFM | 18:25 |
Shrews | awesome. getting delete working will help avoid bash scripts at the ptg :) | 18:26 |
jeblair | ++ | 18:26 |
jeblair | spurious test failure? http://logs.openstack.org/07/432607/1/gate/gate-nodepool-python27-ubuntu-xenial/b0b24dc/testr_results.html.gz | 18:26 |
pabelanger | ya, just looking at it | 18:27 |
jeblair | looks like maybe the provider is still running and creating requests while the test is shutting down and trying to delete the zookeeper tree? | 18:28 |
jlk | mordred: the implicit role path is roles/ subdir from where the playbook is. | 18:29 |
jlk | mordred: beyond that, it's explicit via the config file, with a default of /etc/ansible/roles | 18:30 |
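jlk's answer can be sketched as a search order (the helper name is illustrative, not Ansible's internal API): a `roles/` directory beside the playbook is checked first, then the configured roles path, which defaults to `/etc/ansible/roles`.

```python
import os

def candidate_role_paths(playbook_path, role_name,
                         roles_path="/etc/ansible/roles"):
    """Directories searched for a role, in order, per jlk's description."""
    playbook_dir = os.path.dirname(os.path.abspath(playbook_path))
    return [
        os.path.join(playbook_dir, "roles", role_name),  # implicit, playbook-relative
        os.path.join(roles_path, role_name),             # configured default
    ]

paths = candidate_role_paths("/srv/ci/site.yml", "deploy")
```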
jeblair | jlk: are we talking roles or plugins? | 18:30 |
jlk | roles, but I may have misunderstood the question | 18:30 |
jlk | [17:52:12] <mordred> jlk: do playbooks have an implicit role lookup path relative to playbook location? | 18:31 |
mordred | jlk: k. thanks | 18:31 |
jeblair | oh hey i missed that, sorry :) | 18:31 |
mordred | when we add scrubbing of role content for safety | 18:31 |
jeblair | cool | 18:31 |
mordred | we should also apply that logic to anything found in a roles dir adjacent to the playbook when we find it | 18:31 |
mordred | sort of like how we're throwing an error if we find plugin dirs adjacent to the playbook | 18:32 |
jeblair | mordred: so we'll need to match "<playbook>/roles/plugins" or something? | 18:32 |
mordred | jeblair: yah. | 18:32 |
jeblair | mordred: because we *do* want to allow roles, right? just not roles with plugins? | 18:32 |
mordred | right | 18:32 |
jeblair | k | 18:32 |
mordred | jeblair: and to be fair, we do want to allow roles with plugins if they are in trusted repos | 18:33 |
jeblair | right | 18:34 |
jlk | this is all just around automatically running a playbook in-repo right? | 18:35 |
jlk | Totally avoided if the "test" calls ansible from one of the nodepool nodes? | 18:35 |
jeblair | jlk: correct, but that's not the way we want most jobs to be written. i can imagine that tests *of* ansible would want to be written that way; however, something's gone wrong if that's required in a more general case. | 18:36 |
jlk | right, I was thinking of the use case where the code being tested is ansible code, which would require custom plugins or modules. | 18:37 |
Shrews | jeblair: pabelanger: I think jeblair is correct. Since we don't wait on the NodeLaunch threads, they could still access ZK. | 18:37 |
Shrews | during shutdown, that is | 18:38 |
jeblair | jlk: yeah, in that case, if it isn't in a secure repo, it would need to do what you say. | 18:38 |
jeblair | Shrews: it looks like that test created another min-ready request, so the issue may be that we don't wait for the main loop to stop | 18:40 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable test_node_net_name test https://review.openstack.org/433227 | 18:41 |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Wait for main loop when stopping https://review.openstack.org/433230 | 18:42 |
jeblair | Shrews: ^ | 18:42 |
jeblair | actually, lemme rebase that | 18:43 |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Wait for main loop when stopping https://review.openstack.org/433230 | 18:43 |
jeblair | there | 18:43 |
jeblair | [alternatively, we could put that join in the test cleanup (but we'd also have to make sure to do so in any other callers)] | 18:44 |
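A minimal sketch of the fix discussed for 433230 (structure assumed from the log, not copied from nodepool): `stop()` sets a flag and then joins the main-loop thread, so the test's zookeeper teardown cannot race with a loop iteration that is still creating requests.

```python
import threading
import time

class Launcher(threading.Thread):
    def __init__(self):
        super().__init__()
        self._stopped = threading.Event()
        self.iterations = 0

    def run(self):
        while not self._stopped.is_set():
            self.iterations += 1      # stand-in for one main-loop pass
            self._stopped.wait(0.01)  # sleep, but wake early on stop()

    def stop(self):
        self._stopped.set()
        self.join()  # wait for the main loop to exit before returning

launcher = Launcher()
launcher.start()
time.sleep(0.05)
launcher.stop()  # after this, no further iterations can run
```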
Shrews | nah, that looks good to me | 18:45 |
pabelanger | Shrews: what do you think will be the best way to validate a node is not launched? That is the current thing we are missing from waitForNodes() | 18:47 |
pabelanger | I need to assert we don't launch a disabled label | 18:47 |
pabelanger | and don't have a good way to do that ATM | 18:47 |
mordred | jeblair, jlk: yah - that's the use-case where I think we'll want to provide some sort of syntactic sugar so that the person in question can write things the 'normal' way but have things executed on a remote node - but I'm pretty sure I don't know exactly what that sugar looks like yet | 18:48 |
jlk | agreed, will take some use casing | 18:49 |
Shrews | pabelanger: the request for that node fails? | 18:49 |
jlk | mordred: jeblair: to that end, we're testing hoist, our Ansible to install zuul/nodepool et al, in Zuul 2.5, with the ansible launcher, using nodepool subnodes. SO we'll have SOME experience with how it kind of works | 18:49 |
Shrews | pabelanger: oh, disabled label | 18:50 |
jlk | mordred: looking at your change, I'm wondering if we can hack into the various strategies, or use some plugin, to maintain a whitelist or blacklist of modules. Instead of having to create symlinks to a blacklisted module, maybe we can capture which module is in use and reject/approve accordingly? | 18:51 |
jlk | might be a lot easier to maintain than a pile of symlinks that has to be examined each new Ansible release for possible bad actors. | 18:51 |
pabelanger | Shrews: ya, disabled | 18:52 |
jlk | also this feels like an excellent topic to speak about at AnsibleFest in London :D | 18:52 |
pabelanger | test_disabled_label for example | 18:52 |
Shrews | pabelanger: hrm... iterate through all requests and make sure there are none, or at least none with that label in request.node_types. depends on the test | 18:52 |
mordred | jlk: ++ | 18:53 |
mordred | jlk: this is definitely the sledgehammer approach atm | 18:53 |
*** saneax is now known as saneax-_-|AFK | 18:54 | |
Shrews | pabelanger: gee, there's really no way to wait for a label that's disabled. | 18:57 |
Shrews | not sure of the best solution there | 18:57 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable test_node_net_name test https://review.openstack.org/433227 | 18:58 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable test_node_vhd_image test https://review.openstack.org/433233 | 18:58 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable test_dib_upload_fail test https://review.openstack.org/433235 | 19:14 |
SpamapS | I have a name for our locked down Ansible: HOSTansIbLE | 19:17 |
SpamapS | Or if you want to veil it a bit.. HostAnsible | 19:17 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Wait for main loop when stopping https://review.openstack.org/433230 | 19:26 |
jlk | hah | 19:31 |
jlk | Ansiblocked | 19:31 |
*** hashar has joined #zuul | 19:32 | |
jeblair | nice :) | 19:36 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Set Node image_id and launcher attributes https://review.openstack.org/433242 | 19:48 |
Shrews | jeblair: I'm assuming you wanted Node.image_id to be the external image ID there ^^^ and not a ZK id | 19:49 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Re-enable test_node test https://review.openstack.org/432607 | 19:54 |
hashar | good morning! | 20:02 |
hashar | do you still accept minor changes to the Zuul master branch or is that closed / deprecated in favor of Zuul v3? | 20:03 |
hashar | I have a few tiny patches pending which could use landing. I got them cherry picked on my instance but feel they might benefit others :D | 20:03 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Split up min-ready requests to 1 node per request https://review.openstack.org/433127 | 20:18 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Use helper function to instantiate ProviderManager https://review.openstack.org/433212 | 20:18 |
mordred | hashar: we've landed a few bugfixes/small changes to master recently | 20:27 |
mordred | hashar: it sort of depends on which bits they touch | 20:27 |
hashar | mordred: yes I noticed hence my question :] | 20:27 |
hashar | maybe I should rebase mine and list them somewhere | 20:27 |
hashar | then you guys can triage them and accept/deny them | 20:28 |
hashar | and if deny I will just abandon them | 20:28 |
mordred | hashar: I think that sounds like a great plan | 20:28 |
hashar | mordred: should i use the openstack-infra list for that or is zuul blessed with a mailing list nowadays ? :] | 20:31 |
mordred | hashar: nope - still on infra list :) | 20:31 |
hashar | :) | 20:32 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Add generator API method for node iteration https://review.openstack.org/433252 | 20:32 |
Shrews | jeblair: Current node deletion seems to launch a thread per node-to-delete. Given your concerns about thread contention, should we still follow that model? | 20:44 |
jeblair | Shrews: i think we can probably go ahead and drop that -- | 20:52 |
jeblair | Shrews: the reason it's a thread is due to all the things it (historically) needs to do -- each of which are asynchronous and could fail (much like launching) | 20:52 |
jeblair | Shrews: well, hrm, that may not be right... | 20:53 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Add generator API method for node iteration https://review.openstack.org/433252 | 20:54 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Set Node image_id and launcher attributes https://review.openstack.org/433242 | 20:54 |
jeblair | the steps are: remove from jenkins (or revoke from zuul v2.5), clean up subnodes, and then actually delete the node | 20:54 |
Shrews | pabelanger: added the requested testing in 433242 | 20:54 |
jeblair | in the new world, we're just left with 'delete the node' which is a shade call | 20:54 |
jeblair | but that shade call is, i think, a blocking call which itself has lots of individual tasks | 20:54 |
pabelanger | Shrews: thanks, looking | 20:55 |
jeblair | Shrews: so it may need to remain a thread for now, until we find a way to rework that. | 20:55 |
jeblair | (tasks like floating ip cleanup, etc) | 20:55 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable working test_builder.py tests https://review.openstack.org/433262 | 20:57 |
jeblair | Shrews: re earlier question -- i think the image_id was intended to be the zk path | 20:57 |
jeblair | Shrews: image_id: /nodepool/image/ubuntu-trusty/builds/123/provider/rax/images/456 | 20:57 |
Shrews | jeblair: ok. right now, i'm thinking a new child thread of the main NodePool thread that periodically iterates through the nodes (with a long'ish sleep period) which itself spawns the deleter thread | 20:57 |
Shrews | jeblair: ack on image_id | 20:58 |
jeblair | Shrews: (i think the idea there was to give us an easy way to say "this node was launched from this image / build") | 20:58 |
jeblair | Shrews: that sounds like it would work, but i think we'll want deletes to happen quickly. to accomplish that, we may either want a short sleep or to start using watches... | 20:59 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable test_diskimage_build_only test https://review.openstack.org/433265 | 21:00 |
jeblair | Shrews: (deletes happen nearly instantaneously now, and it's important for our throughput) | 21:00 |
jeblair | Shrews: (setting a watch on every node that is allocated might be an approach) | 21:01 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable test_image_upload_fail test https://review.openstack.org/433270 | 21:01 |
Shrews | jeblair: ok. i was mostly concerned with querying ALL nodes too often (and the load that might cause), but that seems unavoidable as watches will not work on things like abnormally terminated launches | 21:02 |
*** jamielennox|away is now known as jamielennox | 21:02 | |
Shrews | i suppose we could mix the two approaches | 21:02 |
jeblair | Shrews: agreed -- we would probably need to do long poll + watches or short poll. | 21:03 |
Shrews | Hrm, not likely to get this done this week w/o major effort. Ugh | 21:03 |
jeblair | Shrews: good news is, polling is a good starting point and we can build on it :) | 21:03 |
Shrews | yeah. maybe i can get that part done | 21:03 |
jeblair | Shrews: yeah, and we can set a short poll interval and see how it works and add watches and reduce the polling later | 21:04 |
jeblair | if needed | 21:04 |
Shrews | jeblair: one last thing on image_id... the full path is a hidden detail for our ImageUpload model. How about "<image>-<build_id>-<provider>-<upload_id>"? | 21:09 |
Shrews | or some variant | 21:09 |
Shrews | meh, i could access the "private" _imageUploadPath() ZK API method to build it, I guess | 21:10 |
jeblair | Shrews: i don't have strong feelings on that (i figured it's in the zk database so it didn't need to be hidden, and we would present something more useful to the user in CLI utilities). i think really this is another point in favor of making globally unique images rather than the deep tree approach [this is something i think i'd like to change in a later revision]. fortunately, it's the only thing like this. | 21:13 |
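The two image_id shapes under discussion, side by side (format strings reconstructed from the log, not from nodepool source): jeblair's ZK path versus the flat variant Shrews floats.

```python
def zk_image_path(image, build, provider, upload):
    """Deep-tree form: the ZK path identifying a specific upload."""
    return ("/nodepool/image/%s/builds/%s/provider/%s/images/%s"
            % (image, build, provider, upload))

def flat_image_id(image, build, provider, upload):
    """Flat form: same fields joined into one identifier."""
    return "%s-%s-%s-%s" % (image, build, provider, upload)

path = zk_image_path("ubuntu-trusty", 123, "rax", 456)
```

Either form lets a node answer "which image/build was I launched from"; a globally unique image id, as jeblair suggests, would make both unnecessary.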
Shrews | guid++ | 21:14 |
jeblair | we said we were going to learn things. we did. :) | 21:14 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Set Node image_id and launcher attributes https://review.openstack.org/433242 | 21:21 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Add generator API method for node iteration https://review.openstack.org/433252 | 21:21 |
Shrews | ok, i think image_id is correct now in 433242. going to take a quick walk before the meeting | 21:23 |
*** herlo_ is now known as herlo | 21:24 | |
*** herlo has quit IRC | 21:24 | |
*** herlo has joined #zuul | 21:24 | |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable test_diskimage_build_only test https://review.openstack.org/433265 | 21:33 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable test_dib_upload_fail test https://review.openstack.org/433235 | 21:33 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable test_node_vhd_image test https://review.openstack.org/433233 | 21:33 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable test_image_upload_fail test https://review.openstack.org/433270 | 21:33 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable test_node_net_name test https://review.openstack.org/433227 | 21:33 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Re-enable working test_builder.py tests https://review.openstack.org/433262 | 21:33 |
jhesketh | Morning | 21:48 |
jeblair | it's meeting time in #openstack-meeting-alt | 22:00 |
*** hashar has quit IRC | 22:06 | |
*** lespaul has joined #zuul | 22:34 | |
*** saneax-_-|AFK is now known as saneax | 23:02 | |
*** jamielennox is now known as jamielennox|away | 23:15 | |
*** jamielennox|away is now known as jamielennox | 23:20 | |
*** lespaul has quit IRC | 23:34 | |
jhesketh | jeblair: so thinking about the variants and whatnot in v3 (from your email), have you given any thought to storing branch variants in the branches themselves? It'd be very complicated because of the way jobs sprawl across repos, but it'd allow changes to jobs to be tested when proposed | 23:40 |
jhesketh | I think we may have chatted about this before | 23:40 |
* jhesketh 's memory is poor | 23:40 | |
jhesketh | I suppose for testing branch changes you just need to use a depends-on, but I'm not sure if that'd work in the same repo but across branches | 23:41 |
jhesketh | the other thing is that stable branches will still contain a .zuul.yaml file. Perhaps we need a policy of cleaning them up if they are otherwise completely ignored | 23:42 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!