*** rlandy has quit IRC | 00:35 | |
*** zer0c00l has joined #zuul | 01:08 | |
zer0c00l | Any idea how I can pass a --property (metadata) when nodepool launches an instance? | 01:08 |
zer0c00l | My nodepool.yaml looks like http://paste.openstack.org/show/747434/ | 01:11 |
zer0c00l | I want nodepool to launch an instance with metadata (--property key=value) | 01:11 |
pabelanger | zer0c00l: i believe you want to use https://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.[openstack].pools.labels.instance-properties | 01:14 |
zer0c00l | pabelanger: so it would look like | 01:17 |
zer0c00l | instance-properties: | 01:17 |
zer0c00l | <tab> key: value | 01:18 |
zer0c00l | Under pool/labels | 01:18 |
zer0c00l | Thanks | 01:18 |
pabelanger | zer0c00l: yes: http://git.zuul-ci.org/cgit/nodepool/tree/nodepool/tests/fixtures/config_validate/good.yaml#n57 | 01:19 |
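(For reference: a minimal sketch of where instance-properties sits in nodepool.yaml, assuming the OpenStack driver; the provider, label, and key names here are illustrative, not taken from the linked paste or fixture.)

    providers:
      - name: my-cloud              # hypothetical provider
        cloud: my-cloud
        pools:
          - name: main
            labels:
              - name: my-label
                diskimage: my-image
                min-ram: 8192
                # arbitrary server metadata, like `--property key=value`
                instance-properties:
                  key: value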
zer0c00l | pabelanger: Thanks. That file is a great example. | 01:20 |
zer0c00l | Wish it were part of the documentation | 01:20 |
pabelanger | good idea | 01:22 |
SpamapS | zer0c00l: tell Cr45h0v3rr1d3 I said hi | 01:42 |
zer0c00l | SpamapS: :) | 01:46 |
zer0c00l | Hackers.avi | 01:46 |
*** bhavikdbavishi has joined #zuul | 02:11 | |
*** jamesmcarthur has joined #zuul | 02:30 | |
*** jhesketh has quit IRC | 02:45 | |
*** jhesketh has joined #zuul | 02:47 | |
*** bhavikdbavishi has quit IRC | 02:58 | |
*** ianw_pto has quit IRC | 03:23 | |
*** ianw has joined #zuul | 03:23 | |
*** jamesmcarthur has quit IRC | 03:27 | |
*** daniel2 has quit IRC | 03:42 | |
*** bhavikdbavishi has joined #zuul | 03:48 | |
*** bhavikdbavishi1 has joined #zuul | 03:48 | |
*** bhavikdbavishi has quit IRC | 03:52 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 03:52 | |
*** daniel2 has joined #zuul | 04:59 | |
*** bjackman has joined #zuul | 05:46 | |
*** saneax has joined #zuul | 06:43 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul master: tests: remove debugging prints https://review.openstack.org/641942 | 07:00 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul master: web: switch jobs list to a tree view https://review.openstack.org/633437 | 07:20 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul master: web: add jobs list filter https://review.openstack.org/633652 | 07:23 |
*** bjackman has quit IRC | 07:24 | |
*** pcaruana has joined #zuul | 07:26 | |
*** gtema has joined #zuul | 08:04 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul master: web: switch jobs list to a tree view https://review.openstack.org/633437 | 08:11 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul master: web: add jobs list filter https://review.openstack.org/633652 | 08:11 |
*** bjackman has joined #zuul | 08:15 | |
bjackman | I'm finding that when my jobs are aborted, the processes started by ansible tasks on my test node are not killed. This is a problem because (unfortunately) I am using the static nodepool driver | 08:22 |
bjackman | Does anyone know what I can do about that? | 08:22 |
tristanC | zuul jobs tree view with filter search preview available here: http://logs.openstack.org/52/633652/6/check/zuul-build-dashboard/91c54d3/npm/html/ | 08:23 |
tobiash | bjackman: that's because zuul sends a sigkill to ansible: https://opendev.org/openstack-infra/zuul/src/branch/master/zuul/executor/server.py#L1811 | 08:32 |
tobiash | bjackman: maybe it makes sense to change that and first send sigterm and after a few seconds sigkill | 08:32 |
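(A rough sketch of the SIGTERM-then-SIGKILL escalation suggested above, assuming the Ansible run is held as a subprocess.Popen handle; this is illustrative, not Zuul's actual executor code.)

    import signal
    import subprocess
    import time

    def stop_gracefully(proc: subprocess.Popen, grace: float = 10.0) -> None:
        """SIGTERM first so ssh can tear down its connections (taking the
        remote processes with them); escalate to SIGKILL after a grace period."""
        proc.send_signal(signal.SIGTERM)
        deadline = time.monotonic() + grace
        while time.monotonic() < deadline:
            if proc.poll() is not None:
                return  # exited cleanly
            time.sleep(0.1)
        proc.kill()  # still running: fall back to SIGKILL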
AJaeger | tristanC: you mean http://logs.openstack.org/52/633652/6/check/zuul-build-dashboard/91c54d3/npm/html/jobs ? Nice! | 08:34 |
AJaeger | tristanC: clicking on a job name gives a 404 - is that to be expected? | 08:35 |
AJaeger | tristanC: oh, direct click works - but open in new tab gives 404 | 08:36 |
*** rfolco has joined #zuul | 08:36 | |
tristanC | AJaeger: yes, zuul preview dashboard needs to be accessed from '/', and tested through the zuul-preview service | 08:36 |
tristanC | AJaeger: that's why i shared the html/ link, html/jobs doesn't work from the logserver | 08:37 |
*** rfolco|ruck has quit IRC | 08:38 | |
tristanC | AJaeger: thanks for checking it out, it took more than a year to land the necessary zuul-web bits :) | 08:40 |
AJaeger | tristanC: ah, I see. A year? Wow. Thanks for persisting and getting all bits together! | 08:40 |
tristanC | the initial implementation was https://review.openstack.org/537869, but it was missing multi-parent representation | 08:42 |
bjackman | tobiash, I would have expected that when Ansible is killed, its SSH processes would die too, and so would the remote processes | 08:42 |
tobiash | bjackman: the ssh processes are also killed with -9 and won't have a chance to tell the remote that they're gone now, so the remote process will notice that with some kind of delay | 08:43 |
bjackman | tobiash, Ah right, so the remote process is going to carry on until the socket times out or something I guess? Then yeah I guess SIGTERM is the best solution | 08:48 |
tobiash | yes | 08:48 |
bjackman | Exposing my network noobiness here but I also would have thought the network stack would send a FIN when the process owning the socket got killed? | 08:49 |
*** hashar has joined #zuul | 09:07 | |
*** pcaruana has quit IRC | 09:12 | |
*** pcaruana has joined #zuul | 09:27 | |
*** fdegir has quit IRC | 09:34 | |
*** fdegir has joined #zuul | 09:48 | |
*** electrofelix has joined #zuul | 10:36 | |
*** jkt has quit IRC | 10:50 | |
*** bhavikdbavishi has quit IRC | 10:51 | |
bjackman | Is there a way to do a reconfigure of Zuul without going through all the merger<->scheduler init stuff? I'm not sure exactly what it is, but it seems to be asking the merger to check out every tag in every repo | 10:53 |
bjackman | I have a couple of repos with a lot of tags | 10:54 |
bjackman | So it takes a very long time | 10:54 |
*** jkt has joined #zuul | 10:57 | |
*** gtema has quit IRC | 11:28 | |
*** gtema has joined #zuul | 11:28 | |
*** hashar is now known as hasharLunch | 11:33 | |
*** bhavikdbavishi has joined #zuul | 11:43 | |
*** gtema has quit IRC | 11:46 | |
*** gtema has joined #zuul | 11:49 | |
*** panda|rover is now known as panda|rover|lunc | 11:54 | |
tobiash | bjackman: zuul should only check branches, not tags afaik | 12:07 |
*** EmilienM is now known as EvilienM | 12:08 | |
*** hasharLunch is now known as hashar | 12:15 | |
*** gtema has quit IRC | 12:33 | |
*** zbr|ssbarnea has quit IRC | 12:46 | |
*** zbr has joined #zuul | 12:47 | |
*** gtema has joined #zuul | 12:52 | |
*** bhavikdbavishi has quit IRC | 13:07 | |
*** frickler has quit IRC | 13:17 | |
*** frickler has joined #zuul | 13:18 | |
*** panda|rover|lunc is now known as panda|rover | 13:25 | |
*** pcaruana has quit IRC | 13:30 | |
*** rlandy has joined #zuul | 13:32 | |
*** bhavikdbavishi has joined #zuul | 13:55 | |
*** rfolco is now known as rfolco|ruck | 14:08 | |
*** bhavikdbavishi has quit IRC | 14:10 | |
mordred | tristanC: I like that tree view a lot! how hard would it be, do you think, to add a toggle to switch between list and tree view? I ask because I went to search for "openstacksdk" - and the results are really cool and informative, but if I were looking for a specific job that I knew happened to have openstacksdk in it, it's a bit harder to find. I think the tree view is a great default though - so maybe a | 14:21 |
mordred | 'flatten' checkbox or something? I think it could be a followup - mostly just asking while looking | 14:21 |
fungi | in the project browser, what's the cause of/distinction between the multiple master branches listed like i can see at http://zuul.opendev.org/t/openstack/project/openstack/ironic | 14:24 |
tristanC | mordred: yes sure, there could be a checkbox in the jobs view | 14:26 |
mordred | tristanC: cool - I left a +2 with a comment saying something like that on the change | 14:27 |
tristanC | fungi: that's because the zuul api returns each variant, that is, each applied project-template and each distinct project definition | 14:27 |
tristanC | fungi: i meant to look at freezing those per branch, either at the api level or in the webui | 14:27 |
fungi | ahh, so those are different project variants? or groups of job variants? (and what determines which tab they're grouped in?) | 14:28 |
tristanC | fungi: each tab is either a project-template, or a project config (e.g. applied to a project from a config project) | 14:30 |
fungi | i was going to point someone there to show how to tell what jobs a project is running, but after looking at it for a moment figured i would just confuse them even more | 14:30 |
tristanC | fungi: this change shows the api "issue" for project-template: https://review.openstack.org/604264 | 14:30 |
fungi | ahh, thanks! | 14:31 |
fungi | seems like fixing that ought to help | 14:31 |
tristanC | fungi: indeed, that ironic page is confusing, and there may be other issues when multiple project-templates are applied | 14:33 |
*** pcaruana has joined #zuul | 14:37 | |
tristanC | fungi: it seems like branch matchers are not taken into account, only the default branch. Thus that ironic page is confusing the master tab with some other branch. | 14:46 |
tristanC | fungi: thus, either the api returns the existing branch matchers in https://git.zuul-ci.org/cgit/zuul/tree/zuul/model.py#n3159 | 14:46 |
tristanC | fungi: or the api merges/freezes the different project configs based on the branch matchers | 14:47 |
tristanC | corvus: what's your thought on that ^ ? | 14:48 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul master: web: add flatten checkbox https://review.openstack.org/642047 | 15:04 |
tristanC | mordred: ^ :) | 15:04 |
mordred | tristanC: wow, you're quick :) | 15:05 |
mordred | tristanC: oh - and that's a way simpler patch than I was thinking it would be | 15:05 |
tristanC | mordred: though the "processNode" may be too simple... but if we optimize it that patch may get more complex | 15:12 |
tristanC | with the number of branches and jobs in the openstack tenant it's quite slow, but it works, so... | 15:13 |
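(For illustration, the flattening itself can be a simple depth-first walk; the real processNode in the change is JavaScript, and the node shape below is assumed.)

    def flatten(node, out=None):
        """Collect every job in the tree into a flat list, depth-first."""
        if out is None:
            out = []
        out.append(node["name"])
        for child in node.get("children", []):
            flatten(child, out)
        return out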
*** jamesmcarthur has joined #zuul | 15:16 | |
*** gtema has quit IRC | 15:30 | |
corvus | tristanC: is the branch tab in http://zuul.opendev.org/t/openstack/project/openstack/ironic from the default_branch variable? | 15:45 |
corvus | yes it is | 15:47 |
*** hashar has quit IRC | 15:48 | |
corvus | tristanC, fungi: i think that's part of the confusion; that's not the right value to render there. the default_branch is a property of the project as a whole -- it doesn't change in different contexts. it's almost always 'master'. but for ansible, it's 'devel'. | 15:48 |
*** TheJulia is now known as needssleep | 15:50 | |
corvus | it *happens* to sometimes have different values in the internal representation, just because of how it's defined, but zuul always coalesces it to the same value for a particular project before doing anything with it. it might be useful to display it as a piece of data inside the tab, so that if it's defined twice in two different ways (which is usually an error), that's visible to the user. | 15:51 |
corvus | but it probably shouldn't be the tab at the top. | 15:51 |
corvus | tristanC, fungi: as to how to improve that page to make it more useful -- i think there are a couple of options | 15:56 |
corvus | the first is to keep the current approach of data representation but resolve the usability issues that make it confusing | 15:58 |
corvus | the first thing we should do is make sure that we put the source_context in the info for each tab -- that way we can see exactly where each of those config stanzas is. that would help people understand that each of those tabs is a config stanza. | 15:58 |
corvus | then we should use some other value for the tab -- either the implied branch matcher, or source_context.branch | 15:58 |
*** irclogbot_3 has quit IRC | 15:59 | |
corvus | the problem with using the implied branch matcher is that these can (just like jobs) have explicit branch matchers too, so we'd have to account for that (it may not always say "stable/queens", though it usually will. explicit branch matchers get used a lot less often in project stanzas) | 16:00 |
corvus | we can handle it just like the job branch variant tabs | 16:00 |
corvus | we should also display the default branch and merge mode above the tabs, since it's global project info. we should also display them inside the tabs too so users know where they are defined. | 16:01 |
corvus | i think if we do that, then we'll have a more comprehensible page that answers the question "show me all of the configuration stanzas for this project" | 16:02 |
corvus | the other approach, which tristanC alluded to, would be to try to answer the question "what does this project do on this branch?" | 16:03 |
corvus | we're at a slight advantage with project variants compared to job variants -- we *do* know all the branches of a project. so we can freeze the job graph for each of those branches and present the resulting configuration to the user. | 16:04 |
corvus | i suspect that might be a slightly more common question that users ask, and perhaps displaying that as the project page may be more intuitive. | 16:05 |
corvus | i think we can use the new freeze api to accomplish that. so the javascript would say "get me all the branches for this project" then loop over those and "get me the configuration for this branch". we could probably make a single convenience method which does all of that in one api call. | 16:06 |
corvus | if we update the project page like that, i'm not sure whether we should *also* keep the current project page. maybe we should, in case it's helpful for someone digging more deeply into the configuration. but maybe we should move it off of the main project page. so if you click on a project you get the new page with the frozen graph, and then from there you can click to see all the project variants. | 16:08 |
corvus | regardless, the current api endpoint for returning all the project config stanzas should remain -- it's important for automated tools which need to understand the underlying configuration | 16:09 |
corvus | so, in summary, i think we should make a new page using the freeze api, then link it to the current page, which we should update to correct the tab labels, but if no one thinks that's useful, we could drop it. | 16:10 |
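(A hedged sketch of the convenience flow described above -- list a project's branches, then fetch the frozen configuration per branch. The endpoint paths are assumptions for illustration; the actual freeze API may be shaped differently.)

    import requests

    API = "https://zuul.example.com/api/tenant/openstack"  # hypothetical base URL

    def frozen_config(project: str) -> dict:
        """Map each branch of a project to its frozen configuration."""
        branches = requests.get(f"{API}/project/{project}").json().get("branches", [])
        return {
            b: requests.get(f"{API}/project/{project}/branch/{b}/freeze").json()
            for b in branches
        }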
*** bhavikdbavishi has joined #zuul | 16:12 | |
*** dkehn has joined #zuul | 16:15 | |
*** irclogbot_3 has joined #zuul | 16:17 | |
*** irclogbot_3 has quit IRC | 16:18 | |
*** irclogbot_3 has joined #zuul | 16:22 | |
corvus | tobiash: i'm around if you want to chat :) | 16:25 |
tobiash | corvus: good timing, just arrived at home :) | 16:28 |
*** dkehn has quit IRC | 16:28 | |
tobiash | so the thing is that I'd like to spread the executors over multiple regions with each having an openshift | 16:28 |
tobiash | the restriction is that I can route into an openshift via a load balancer so I can route from the executors to gearman, but not from zuul-web to a specific executor | 16:29 |
tobiash | which would break live log streaming | 16:29 |
*** dkehn has joined #zuul | 16:30 | |
tobiash | I had two ideas to solve that so far. One idea would be to do the streaming to zuul-web via gearman instead of finger. | 16:30 |
corvus | tobiash: hang on | 16:31 |
corvus | tobiash: you said you can route into openshift via a lb, so why can't you establish a lb for each executor so that you can access the streaming port? | 16:31 |
tobiash | I'd have to open a new port for each executor and would need to configure a route to each executor | 16:32 |
corvus | tobiash: yeah, but that's the right thing to do in the container world | 16:32 |
tobiash | that makes the executor deployment much more complicated and would need a firewall rule change on the openshift nodes each time an executor is added | 16:33 |
tobiash | I have another hand-wavy idea of making the finger-gw the 'load balancer' or dispatcher in the other openshift. | 16:34 |
corvus | tobiash: i almost hesitate to ask this, but why do you want to spread the executors out? that's not the right architecture for zuul. it's designed so that all the zuul components run together, and the executors reach out to external resources. | 16:35 |
tobiash | that would expand the executor zone concept a bit and could make it possible to route the stream from zuul-web to the finger-gw in the other openshift which then would dispatch the connection to the correct executor | 16:35 |
tobiash | corvus: our executors transfer a lot of data to and from the nodes so I anticipate that this will eventually become a problem | 16:36 |
tobiash | also that would spread the io load of the executors over multiple regions | 16:37 |
tobiash | but you're right, it's not a direct requirement so far | 16:37 |
corvus | tobiash: ok. i'll accept that as a given for the moment. :) tell me more about your fingergw idea | 16:38 |
tobiash | I'm just trying to think ahead so I have ideas how to solve such bottlenecks when we hit them | 16:38 |
tobiash | so the zuul-web, zuul-executor and fingergw (we would have one in each cluster) would know their 'zone' | 16:38 |
tobiash | when zuul-web asks for the finger-address of a build, it would send its zone, and would get back the executor if it's in the same zone, or the other fingergw if it's in another zone | 16:39 |
tobiash | and the other fingergw would do the same resolution and dispatch further to the correct local executor | 16:40 |
tobiash | that way the finger-gw would be the equivalent component to the router/ingress when using http dispatching | 16:41 |
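(A sketch of that zone-aware resolution; the function name and data shapes are illustrative, not an actual Zuul API.)

    def resolve_stream_address(build: dict, my_zone: str) -> str:
        """Where should this component connect to stream a build's log?"""
        if build["zone"] == my_zone:
            # same zone: connect directly to the executor running the build
            return build["executor_address"]
        # other zone: hand off to that zone's fingergw, which repeats this
        # resolution locally and dispatches to the correct executor
        return build["fingergw_address"]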
corvus | how does a user end up at a particular zuul-web? | 16:42 |
tobiash | zuul-web could be in just one location or even reachable in multiple locations that are chosen via dns/geo ip or similar mechanisms | 16:43 |
tobiash | if each zuul-web knows its zone that wouldn't make a difference | 16:43 |
corvus | oh i thought you said zuul-web in each cluster | 16:43 |
tobiash | well, that would be possible too | 16:43 |
tobiash | I meant that you need at least the fingergw in all clusters with executors | 16:44 |
corvus | tobiash: let's try this: https://awwapp.com/b/u1cxnco8b/ | 16:46 |
corvus | never used it before, it's just the first google result for shared whiteboard | 16:46 |
tobiash | wow, never saw this :) | 16:47 |
corvus | your screen is a lot larger than mine :) | 16:47 |
tobiash | it's a 4k; ok, I'll try to make it more compact ;) | 16:49 |
corvus | don't worry, i zoomed out | 16:49 |
tobiash | how can I move stuff? | 16:50 |
corvus | tobiash: arrow cursor at the top, drag a box around it, then move | 16:51 |
tobiash | somehow that didn't work on the first try | 16:51 |
corvus | okay, i think i get it now | 16:52 |
corvus | i wasn't imagining multiple executors per region earlier, only one | 16:52 |
corvus | (probably because i'm more used to thinking about zones as one executor per zone) | 16:52 |
tobiash | ah ok | 16:53 |
corvus | but now i see what you mean, using the fingergw as an aggregator lets you set up a region with one ingress point, and then scale out to N executors in each region | 16:53 |
tobiash | exactly | 16:53 |
tobiash | I think that could even be relatively simple to support | 16:53 |
tobiash | zuul-web already asks the scheduler for the executor executing job xyz | 16:54 |
tobiash | I think we'd just need to add zone information to that call | 16:54 |
tobiash | and that would magically work | 16:55 |
tobiash | or maybe call it region to not confuse it with the nodepool zone information | 16:55 |
corvus | i like that idea -- i don't think it's going to present problems scaling this in the future. i think the only tricky bit will be updating the finger protocol request syntax to support routing, and maybe making sure this doesn't open some kind of DOS or vulnerability. | 16:55 |
corvus | well, since the executor knows its "zone" i think it makes sense to keep the same idea. | 16:56 |
tobiash | yes | 16:56 |
corvus | oh, apparently https://www.draw.io/ is open source | 16:59 |
corvus | https://github.com/jgraph/drawio | 16:59 |
tobiash | I think we'll start with all executors in the same cluster anyway, but I'm feeling good that we have a solution in mind that can solve those problems if we hit them | 16:59 |
corvus | tobiash: yeah, i think that will work | 16:59 |
tobiash | cool, that makes me happy :) | 16:59 |
corvus | \o/ | 17:00 |
*** jamesmcarthur_ has joined #zuul | 17:00 | |
*** jamesmcarthur has quit IRC | 17:03 | |
tobiash | corvus: just curious after reading your mail, will zuul get its own zuul-tenant on opendev or will it stay in openstack? | 17:05 |
corvus | tobiash: its own tenant. we could actually do that now. | 17:06 |
corvus | tobiash: ttx and clarkb pushed through the initial changes to make us multi-tenant: http://zuul.opendev.org/tenants | 17:07 |
pabelanger | +1 to do now :) | 17:07 |
tobiash | yeah, just saw that | 17:07 |
tobiash | so zuul will vanish then from zuul.o.o web ui? | 17:08 |
corvus | i've had too much on my plate to start that right now, but if other folks have time, i'm happy to help :) | 17:08 |
tobiash | I guess the scheduler is the same | 17:08 |
clarkb | tobiash: yes I think that is the mid to long term plan | 17:08 |
pabelanger | sounds like a swell friday afternoon project | 17:08 |
clarkb | not sure anyone has it on their immediate plan to do | 17:08 |
corvus | heh, o.o is ambiguous now :) but, if you mean that zuul and nodepool projects won't appear in zuul.openstack.org, correct :) | 17:08 |
tobiash | oh right | 17:09 |
tobiash | friday evening... | 17:09 |
corvus | i'm especially looking forward to having our own tenant because we can drop "clean check" and go back to enqueueing changes directly into gate :) | 17:09 |
pabelanger | where do we see our config-project for zuul tenant? zuul-config I assume? | 17:10 |
corvus | sounds reasonable, unless we wanted to reserve that name for a future tool called "zuul-config". things get tricky in our namespace :) | 17:11 |
pabelanger | zuul-project-config might also work | 17:11 |
tobiash | maybe just zuul/conf? | 17:11 |
pabelanger | zuul/tenant-config? | 17:11 |
tobiash | tenant-config should be really the zuul tenant configuration | 17:12 |
pabelanger | true | 17:12 |
*** rlandy is now known as rlandy|brb | 17:12 | |
corvus | i like zuul/config zuul/project-config | 17:12 |
tobiash | ++ | 17:12 |
*** pcaruana has quit IRC | 17:12 | |
*** gtema has joined #zuul | 17:13 | |
*** panda|rover is now known as panda|rover|baby | 17:48 | |
*** rlandy|brb is now known as rlandy | 17:51 | |
dmsimard | btw I found this yesterday, a sphinx lexer for yaml/yaml+jinja: https://github.com/ansible/ansible/blob/devel/docs/docsite/_extensions/pygments_lexer.py | 17:57 |
dmsimard | sharing in case it might be useful for zuul docs :) | 17:58 |
dmsimard | I asked the author if it was possible to split it out of Ansible so it can be re-used more easily | 17:59 |
mordred | dmsimard: seems like the sort of thing that should just be submitted upstream to pygments, no? | 18:00 |
mordred | dmsimard: http://pygments.org/docs/lexers/#pygments.lexers.data.YamlLexer <-- pygments has one - perhaps that ansible one is just improving it - or maybe adding support for ansible-specific yaml things? | 18:01 |
dmsimard | mordred: I dunno. I found it by accident last night when searching for issues preventing me from lexing yaml | 18:01 |
mordred | dmsimard: yes - that is a fork of the upstream yaml lexer | 18:01 |
dmsimard | mordred: in any case, if you're asking me if it should be upstreamed -- you're preaching to the choir :p | 18:03 |
* mordred now imagines a choir composed of only dmsimard | 18:03 | |
*** electrofelix has quit IRC | 18:04 | |
dmsimard | wouldn't that be something | 18:16 |
*** irclogbot_3 has quit IRC | 18:17 | |
*** irclogbot_3 has joined #zuul | 18:20 | |
*** saneax has quit IRC | 18:22 | |
SpamapS | Hrm, kubernetes pod node type didn't work for me. :-/ | 18:31 |
*** jamesmcarthur_ has quit IRC | 18:32 | |
corvus | mordred: you said " put in an entry to the Zuul CR" in your most recent email. what does that mean? what's a zuul CR? | 18:34 |
mordred | corvus: custom resource | 18:34 |
mordred | corvus: or maybe I should just say resource there | 18:34 |
corvus | mordred: is that the same or different than a CRD? | 18:35 |
mordred | corvus: I was saying CRD before - but CRD is "custom resource definition" - so maybe that's still the right term to use here... shrug? | 18:35 |
*** gtema has quit IRC | 18:35 | |
corvus | in my mental model, a CRD is a bit of yaml that causes k8s to create a CR which is a mess of containers, etc. | 18:36 |
corvus | mordred: anyway, i now parse the sentence, thx :) | 18:36 |
openstackgerrit | Merged openstack-infra/nodepool master: docker: don't daemonize when starting images https://review.openstack.org/635584 | 18:37 |
mordred | corvus: ah - in my mental model the CRD is the bit of yaml that tells k8s what the spec is for a type of custom resource - and then a user defines a resource using yaml that matches that spec - but it's entirely possible that I'm overthinking those words and the invocation of a custom resource is the thing referred to by CRD | 18:38 |
corvus | mordred: maybe you're right. but if i just s/CRD?/yaml/ i can understand your point well enough for this conversation. :) | 18:39 |
mordred | corvus: yaml all the yamls with the docker in the dockers for your yamls | 18:39 |
corvus | someone is going to have to understand that difference eventually, but i don't think it's now :) | 18:39 |
corvus | mordred: here, let me send you a github to yaml your docker | 18:40 |
SpamapS | I yaml my kubernetes with istio, docker and calico, and a side of fluentd. | 18:46 |
openstackgerrit | Merged openstack-infra/zuul master: tests: remove debugging prints https://review.openstack.org/641942 | 18:48 |
SpamapS | Hrm, is there a way to bypass the default base job entirely? | 18:52 |
SpamapS | My kubernetes pod issue is failing on the fact gathering in the first pre playbook of the base job. | 18:53 |
SpamapS | So I can't like, get ahead of the fail with a - pause: minutes=60 and peek inside the executor | 18:53 |
clarkb | SpamapS: you can disable fact gathering iirc | 18:54 |
*** bhavikdbavishi has quit IRC | 18:55 | |
pabelanger | SpamapS: does autohold work? | 18:56 |
pabelanger | oh, you said executor | 18:57 |
SpamapS | clarkb: at the play level, yes... I don't control the play of the first base job. | 18:57 |
SpamapS | pabelanger: yeah I can autohold the pod. The problem is the executor can't talk to kubernetes to exec in. | 18:57 |
pabelanger | at one point, I think we discussed autohold not deleting executor build directory | 18:58 |
pabelanger | only after the hold was removed | 18:58 |
tobiash | SpamapS: facts gathering is done before the first base job | 18:59 |
mordred | SpamapS: we do a few instances of dropping special fact files into the fact cache so that fact gathering doesn't try to gather facts - and I think we skip fact gathering for certain types | 18:59 |
mordred | maybe we're missing this for k8s pod type | 18:59 |
SpamapS | I could turn on keep actually | 19:00 |
tobiash | SpamapS: https://opendev.org/openstack-infra/zuul/src/branch/master/zuul/executor/server.py#L1122 | 19:00 |
tobiash | SpamapS: so maybe the ansible setup might need to be skippable? | 19:01 |
pabelanger | tobiash: yah, still would like to find a way to make that optional. Very soon it will be an issue for us, ansible-network, when we bring online network vendors that don't work properly with fact gathering | 19:01 |
mordred | tobiash, SpamapS: https://opendev.org/openstack-infra/zuul/src/branch/master/zuul/executor/server.py#L396-L401 | 19:01 |
tobiash | SpamapS: and regarding skipping the base job, you can just define your own base job in any config-repo | 19:01 |
mordred | we skip it for localhost | 19:01 |
mordred | but plopping that in | 19:01 |
mordred | but yeah - maybe we want to do some things more generally to allow skipping fact gathering (and putting in stub files) on some hosts or types of hosts | 19:02 |
tobiash | SpamapS: a base job, btw, doesn't need to be called 'base'; just set parent: null | 19:03 |
SpamapS | parent: null was what I was looking for :) | 19:03 |
tobiash | but that won't solve your problem if the problem is facts gathering | 19:03 |
SpamapS | that's not where it's failing | 19:04 |
SpamapS | so something else is going on | 19:04 |
tobiash | ah ok, then it might solve your issue :) | 19:04 |
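(For reference, a minimal custom base job along those lines; the job name and playbook paths are hypothetical.)

    - job:
        name: my-base
        parent: null            # makes this a base job, regardless of its name
        pre-run: playbooks/my-base/pre.yaml
        post-run: playbooks/my-base/post.yaml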
SpamapS | fact gathering in the base job is specifically where it is failing with a 401 from kubernetes. | 19:04 |
SpamapS | Also I just realized kubectl exec doesn't work for any pods on that cluster.. so.. might not be zuul's fault | 19:08 |
*** rfolco|ruck is now known as rfolco|ruck|brb | 19:11 | |
*** jamesmcarthur has joined #zuul | 19:11 | |
openstackgerrit | Merged openstack-infra/zuul master: SQL: only create tables in scheduler https://review.openstack.org/610696 | 19:12 |
*** jamesmcarthur has quit IRC | 19:30 | |
*** jamesmcarthur has joined #zuul | 19:43 | |
*** irclogbot_3 has quit IRC | 20:13 | |
*** rfolco|ruck|brb is now known as rfolco|ruck | 20:21 | |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool master: Remove prelude from AWS release note https://review.openstack.org/642148 | 20:42 |
corvus | pabelanger, SpamapS, tobiash, mordred: ^ i think we should merge that before release | 20:42 |
corvus | the release notes look really weird with that in place: https://zuul-ci.org/docs/nodepool/releasenotes.html#in-development | 20:42 |
tobiash | +2 | 20:43 |
corvus | the prelude would typically be used for introductory text. like "the theme of this release is improving stability, therefore the following changes blah blah blah" :) | 20:44 |
tobiash | I wonder why we added that | 20:44 |
corvus | i think zuul is ready though -- so shall i tag 603ce6f474ef70439bfa3adcaa27d806c23511f7 as 3.6.0 ? | 20:46 |
pabelanger | wfm +3 | 20:47 |
pabelanger | corvus: +1 | 20:47 |
tobiash | +1 | 20:49 |
corvus | pushed | 20:50 |
pabelanger | \o/ | 20:51 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul-jobs master: Revert "Docker: use the buildset registry if defined" https://review.openstack.org/642150 | 20:56 |
pabelanger | 3.6.0 looks to be on pypi, giving it a try locally now! | 21:00 |
*** daniel2 has quit IRC | 21:12 | |
*** bjackman has quit IRC | 21:13 | |
*** jamesmcarthur has quit IRC | 21:13 | |
*** jamesmcarthur has joined #zuul | 21:14 | |
*** daniel2 has joined #zuul | 21:16 | |
corvus | okay that's one release down :) | 21:25 |
openstackgerrit | Merged openstack-infra/nodepool master: Remove prelude from AWS release note https://review.openstack.org/642148 | 21:25 |
corvus | and time to start the next! :) | 21:25 |
*** jamesmcarthur has quit IRC | 21:27 | |
*** jamesmcarthur has joined #zuul | 21:28 | |
corvus | nodepool 3.5.0 pushed | 21:35 |
tobiash | \o/ | 21:35 |
corvus | i just realized -- what are we going to do when zuul revs to v4? would we rev nodepool too? even if there are no operator-visible changes? :) | 21:37 |
corvus | semver vs ocd! | 21:37 |
openstackgerrit | David Shrewsbury proposed openstack-infra/zuul-preview master: Port in changes from the c++ version https://review.openstack.org/641089 | 21:37 |
Shrews | mordred: i think ^^ grabs your suggested changes too | 21:38 |
corvus | okay, both releases done :) | 21:46 |
*** jamesmcarthur has quit IRC | 21:50 | |
*** jamesmcarthur has joined #zuul | 21:53 | |
SpamapS | \o/ | 21:53 |
*** jamesmcarthur has quit IRC | 21:54 | |
clarkb | I think I would prefer to stick to semver. If we uprev nodepool, people might not upgrade it at that point, thinking there is work required to do so | 21:55 |
Shrews | SpamapS: is the "&response.json::<Value>()" syntax you used in zuul-preview an old way to specify a generic type? I can't seem to find that documented anywhere. | 21:56 |
clarkb | Shrews: it is part of the serde json lib | 21:56 |
clarkb | Value is a type defined there so it is casting the json response into the serde library value type aiui | 21:56 |
SpamapS | Yeah it's just explicit typing because Serde's model makes it impossible to infer. | 21:57 |
mordred | SpamapS: we couldn't find docs on the explicit typing syntax for functions like that ... | 21:58 |
Shrews | func::<Type>() is the thing i'm asking about | 21:59 |
mordred | yah. that | 21:59 |
*** jamesmcarthur has joined #zuul | 22:08 | |
openstackgerrit | Merged openstack-infra/zuul-jobs master: Revert "Docker: use the buildset registry if defined" https://review.openstack.org/642150 | 22:09 |
*** EvilienM is now known as EmilienM | 22:16 | |
*** rlandy has quit IRC | 22:19 | |
*** jamesmcarthur has quit IRC | 22:19 | |
pabelanger | cool, zuul 3.6.0 installed and working | 22:22 |
pabelanger | now to bump nodepool | 22:22 |
pabelanger | with the zuul dashboard, if you don't have a database set up, both Builds / Buildsets don't work; they just show 'Loading...'. I know adding a database is the fix, but until that is required, is there any way to have the web first check for a database? | 22:32 |
pabelanger | and not render in that case? | 22:32 |
corvus | pabelanger: it's, er, supposed to do that | 22:45 |
corvus | but, tbh, if that's broken, we'll probably just fix it by requiring the db. we're really close to that point. | 22:45 |
pabelanger | yah, I've just been too lazy to set up a db in this local test environment, I think my fix is to do that and be done with it :) | 22:47 |
pabelanger | so, +1 to require db | 22:47 |
pabelanger | also, it is pretty cool that zuul 3.5.0 upgraded another zuul to 3.6.0! hehe | 22:48 |
corvus | Shrews, mordred: i think https://matematikaadit.github.io/posts/rust-turbofish.html is what you're looking for (cc SpamapS) | 22:52 |
corvus | clarkb: ^ | 22:52 |
*** irclogbot_3 has joined #zuul | 23:02 | |
pabelanger | and nodepool 3.5.0 looks to be working properly | 23:08 |
pabelanger | Yay | 23:09 |
tristanC | tobiash: SpamapS: executor should skip the setup phase for kubectl connection because of https://opendev.org/openstack-infra/zuul/src/branch/master/zuul/executor/server.py#L57 | 23:34 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul master: web: add flatten checkbox https://review.openstack.org/642047 | 23:45 |
SpamapS | mordred: Shrews that's called "turbo fish" type annotation. | 23:59 |
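(A minimal example of the turbofish with serde_json's Value, matching the syntax asked about above; a sketch assuming serde_json as a dependency.)

    use serde_json::Value;

    fn main() {
        let text = r#"{"name": "zuul"}"#;
        // the ::<Value> "turbofish" supplies the generic type parameter
        // explicitly, since nothing else here lets the compiler infer it
        let parsed = serde_json::from_str::<Value>(text).unwrap();
        println!("{}", parsed["name"]);
    }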