*** jamesmcarthur has joined #zuul | 00:04 | |
*** sshnaidm|afk has quit IRC | 00:04 | |
*** sshnaidm has joined #zuul | 00:06 | |
*** panda|rover has quit IRC | 00:10 | |
*** panda has joined #zuul | 00:11 | |
openstackgerrit | Jimmy McArthur proposed zuul/zuul-website master: Revert "Revert promotional message banner and event list" https://review.opendev.org/660232 | 00:19 |
pabelanger | SpamapS: thanks, testing that out too | 00:22 |
tristanC | mordred: http://8a4226c6403443a892ae9a1e00578e69.openstack.zuul-preview.opendev.org/ seems to be stuck, so | 00:24 |
mordred | tristanC: one sec | 00:25 |
tristanC | if you go to http://logs.openstack.org/31/660231/2/check/zuul-build-dashboard-multi-tenant/7c90924/npm/html/ then click rdoproject's list and DLRN, you'll have a "Something went wrong." error | 00:25 |
tristanC | and in the console.log, there should be a: TypeError: "n is undefined" stacktrace with a link to the source file: ProjectVariant.jsx:31 | 00:25 |
mordred | tristanC: yes - I agree. | 00:26 |
mordred | tristanC: this is what I would expect to happen on an error - and is much nicer to deal with :) | 00:26 |
tristanC | mordred: (not sure if zuul-preview can work with regular log_url when the build has no artifact) | 00:27 |
mordred | tristanC: so - maybe what we should do is make sure you're online next time we're going to deploy the react2 update | 00:27 |
mordred | and deploy it - and then if it falls over we can capture the issue better | 00:27 |
tristanC | also, the yarn.lock update was pretty old, i'll submit a new generation that hopefully contains more fixes than regressions :-) | 00:27 |
clarkb | http://paste.openstack.org/show/751488/ is what pabelanger got from the console log fwiw | 00:28 |
tristanC | that's kind of helpful, the "value at" line points at the actual source code file | 00:31 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: Revert "Revert "web: upgrade react and react-scripts to ^2.0.0"" https://review.opendev.org/659991 | 00:32 |
clarkb | tristanC: mordred in that stacktrace I see things about config errors | 00:32 |
clarkb | iirc we did have config errors back then | 00:32 |
clarkb | maybe related to that? we could induce a config error (maybe?) and check if the code works? | 00:33 |
tristanC | clarkb: indeed, the stack trace also refers to App.jsx:152 which is where the config errors are fetched | 00:34 |
*** jamesmcarthur has quit IRC | 00:35 | |
clarkb | btw don't try to open the yarn.lock diff in gerrit web ui | 00:35 |
clarkb | it makes gerrit stop working in your browser for a bit | 00:35 |
clarkb | I've got to run now, but that seems to be a thread we can pull on | 00:37 |
fungi | gertty (on a 1gb vm with a bunch of other stuff also running) pauses for several seconds when you request that diff | 00:37 |
fungi | not an exaggeration either. it was maybe 3 seconds after hitting the d key before i could start scrolling full-speed through it | 00:38 |
*** mattw4 has quit IRC | 00:46 | |
tristanC | hmm, i can't reproduce with either PS on a setup with config-errors... | 00:50 |
*** jamesmcarthur has joined #zuul | 01:05 | |
*** jamesmcarthur_ has joined #zuul | 01:10 | |
*** jamesmcarthur has quit IRC | 01:11 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: Revert "Revert "web: upgrade react and react-scripts to ^2.0.0"" https://review.opendev.org/659991 | 01:53 |
tristanC | new PS adds a react test to ensure config errors are rendered properly (it even simulates a mouse click event :) | 01:54 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: Revert "Revert "web: upgrade react and react-scripts to ^2.0.0"" https://review.opendev.org/659991 | 01:58 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: DNM: this should cause a React exception https://review.opendev.org/660231 | 02:51 |
*** saneax has quit IRC | 03:09 | |
*** jamesmcarthur_ has quit IRC | 03:23 | |
*** bhavikdbavishi has joined #zuul | 03:42 | |
*** yolanda_ has quit IRC | 03:49 | |
*** jamesmcarthur has joined #zuul | 03:54 | |
*** jamesmcarthur has quit IRC | 03:59 | |
*** jamesmcarthur has joined #zuul | 05:01 | |
*** pcaruana has joined #zuul | 05:02 | |
*** raukadah is now known as chandankumar | 05:06 | |
*** jamesmcarthur has quit IRC | 05:07 | |
*** pcaruana has quit IRC | 05:11 | |
*** bhavikdbavishi1 has joined #zuul | 05:28 | |
*** bhavikdbavishi has quit IRC | 05:28 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 05:28 | |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: WIP: Fix broken setRefs whith missing objects https://review.opendev.org/621667 | 05:43 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Match tag items against containing branches https://review.opendev.org/578557 | 05:46 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Use implied branch matcher for implied branches https://review.opendev.org/640272 | 05:47 |
*** saneax has joined #zuul | 05:51 | |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Update cached repo during job startup only if needed https://review.opendev.org/648229 | 05:53 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Add support for smart reconfigurations https://review.opendev.org/652114 | 05:57 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Add --check-config option to zuul scheduler https://review.opendev.org/542160 | 05:57 |
*** zbr_ has quit IRC | 06:05 | |
*** bhavikdbavishi has quit IRC | 06:10 | |
*** bhavikdbavishi has joined #zuul | 06:12 | |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Match tag items against containing branches https://review.opendev.org/578557 | 06:20 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Use implied branch matcher for implied branches https://review.opendev.org/640272 | 06:20 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Update cached repo during job startup only if needed https://review.opendev.org/648229 | 06:26 |
*** jamesmcarthur has joined #zuul | 06:35 | |
*** jamesmcarthur has quit IRC | 06:39 | |
*** gtema has joined #zuul | 07:00 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: config: add tenant.toDict() method and REST endpoint https://review.opendev.org/621344 | 07:18 |
*** pcaruana has joined #zuul | 07:21 | |
*** jamesmcarthur has joined #zuul | 07:25 | |
*** jamesmcarthur has quit IRC | 07:30 | |
*** pcaruana has quit IRC | 07:39 | |
*** jangutter has joined #zuul | 07:44 | |
*** bhavikdbavishi has quit IRC | 07:51 | |
*** themroc has joined #zuul | 07:52 | |
*** jpena|off is now known as jpena | 07:56 | |
*** hashar has joined #zuul | 08:07 | |
*** bhavikdbavishi has joined #zuul | 08:56 | |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: web: add tenant and project scoped, JWT-protected actions https://review.opendev.org/576907 | 09:02 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: Allow operator to generate auth tokens through the CLI https://review.opendev.org/636197 | 09:02 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: Zuul CLI: allow access via REST https://review.opendev.org/636315 | 09:02 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: Add Authorization Rules configuration https://review.opendev.org/639855 | 09:02 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: Web: plug the authorization engine https://review.opendev.org/640884 | 09:02 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: Zuul Web: add /api/user/authorizations endpoint https://review.opendev.org/641099 | 09:03 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: authentication config: add optional token_expiry https://review.opendev.org/642408 | 09:03 |
*** openstackstatus has quit IRC | 09:13 | |
*** openstackstatus has joined #zuul | 09:15 | |
*** ChanServ sets mode: +v openstackstatus | 09:15 | |
*** jamesmcarthur has joined #zuul | 09:26 | |
*** electrofelix has joined #zuul | 09:30 | |
*** jamesmcarthur has quit IRC | 09:31 | |
*** gtema has quit IRC | 11:03 | |
*** gtema has joined #zuul | 11:03 | |
*** snapiri has quit IRC | 11:03 | |
*** panda is now known as panda|rover|eat | 11:13 | |
*** markwork has joined #zuul | 11:15 | |
markwork | Woohoo, our demo went well, and we've got some feedback from the wider organization :) | 11:15 |
*** jamesmcarthur has joined #zuul | 11:27 | |
*** jamesmcarthur has quit IRC | 11:33 | |
*** jamesmcarthur has joined #zuul | 11:43 | |
*** jamesmcarthur has quit IRC | 11:55 | |
*** jamesmcarthur has joined #zuul | 12:00 | |
mordred | markwork: congrats, that's exciting! | 12:00 |
*** jpena is now known as jpena|lunch | 12:00 | |
*** themroc has quit IRC | 12:15 | |
*** jamesmcarthur has quit IRC | 12:16 | |
*** jamesmcarthur has joined #zuul | 12:16 | |
*** bhavikdbavishi has quit IRC | 12:23 | |
*** panda|rover|eat is now known as panda|rover | 12:27 | |
*** themroc has joined #zuul | 12:30 | |
*** jamesmcarthur has quit IRC | 12:32 | |
*** rlandy has joined #zuul | 12:37 | |
*** flepied has joined #zuul | 12:38 | |
*** jamesmcarthur has joined #zuul | 12:42 | |
*** jpena|lunch is now known as jpena | 12:52 | |
*** saneax has quit IRC | 12:53 | |
fungi | markwork: we're happy to hear any feedback you'd like to pass along too | 13:26 |
*** jamesmcarthur has quit IRC | 13:39 | |
*** themroc has quit IRC | 13:42 | |
*** bhavikdbavishi has joined #zuul | 13:45 | |
*** jamesmcarthur has joined #zuul | 13:48 | |
*** themroc has joined #zuul | 13:48 | |
*** rf0lc0 has joined #zuul | 13:57 | |
*** jangutter_ has joined #zuul | 13:57 | |
*** jamesmcarthur_ has joined #zuul | 13:59 | |
*** tobiash_ has joined #zuul | 14:01 | |
*** jangutter has quit IRC | 14:06 | |
*** SpamapS has quit IRC | 14:06 | |
*** rfolco has quit IRC | 14:06 | |
*** jamesmcarthur has quit IRC | 14:06 | |
*** electrofelix has quit IRC | 14:06 | |
*** sshnaidm has quit IRC | 14:06 | |
*** toabctl has quit IRC | 14:06 | |
*** swest has quit IRC | 14:06 | |
*** aspiers has quit IRC | 14:06 | |
*** smcginnis has quit IRC | 14:06 | |
*** nickx-intel has quit IRC | 14:06 | |
*** edmondsw_ has quit IRC | 14:06 | |
*** tobiash has quit IRC | 14:06 | |
*** electrofelix has joined #zuul | 14:07 | |
*** rf0lc0 is now known as rfolco | 14:12 | |
*** mattw4 has joined #zuul | 14:12 | |
*** sshnaidm has joined #zuul | 14:12 | |
*** SpamapS has joined #zuul | 14:13 | |
*** swest has joined #zuul | 14:14 | |
*** swest has quit IRC | 14:15 | |
*** aspiers has joined #zuul | 14:15 | |
*** themr0c has joined #zuul | 14:27 | |
*** bhavikdbavishi has quit IRC | 14:28 | |
*** hashar is now known as hasharAway | 14:29 | |
*** themroc has quit IRC | 14:30 | |
*** mattw4 has quit IRC | 14:35 | |
*** hasharAway is now known as hashar | 14:41 | |
*** chandankumar is now known as raukadah | 14:46 | |
*** markwork has quit IRC | 14:55 | |
*** markwork has joined #zuul | 15:02 | |
markwork | Hey ho, one of our Python core dev guys gave the code in my changeset a cosmetic makeover, can I just push that as an additional commit? | 15:02 |
markwork | Like, sorting hashes into alphabetical order, re-ordering imports, that kind of stuff. | 15:03 |
fungi | if your change hasn't merged yet, you can just push a revision of it with those improvements included | 15:09 |
fungi | gerrit will show reviewers the difference between the previous patch and the new one | 15:09 |
fungi | git commit --amend | 15:09 |
clarkb | but you can also have a series of changes in gerrit with dependency chains if it makes more sense to split things out logically or if you'd rather not reset reviews on existing changes | 15:13 |
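A minimal sketch of the amend-and-repush flow fungi describes, assuming the git-review helper commonly used with Gerrit; the Change-Id footer in the commit message is what ties the new patchset to the existing change:

    # apply the cosmetic edits on top of the existing commit
    git commit --amend   # keep the Change-Id footer intact
    git review           # repush; gerrit records a new patchset of the same change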
*** themr0c has quit IRC | 15:13 | |
*** gtema has quit IRC | 15:22 | |
clarkb | as a heads up, the skopeo PPA appears to have rebuilt its packages with tristanC's fix, so fungi is going to remove our pin of that package. Please say something if you see oddities from the zuul docker image related jobs | 15:28 |
*** hashar has quit IRC | 15:47 | |
*** smcginnis has joined #zuul | 15:57 | |
*** panda|rover has quit IRC | 16:09 | |
*** panda has joined #zuul | 16:10 | |
openstackgerrit | David Shrewsbury proposed zuul/nodepool master: Use py3 pathlib in DibImageFile https://review.opendev.org/660191 | 16:17 |
openstackgerrit | David Shrewsbury proposed zuul/nodepool master: Use py3 pathlib in DibImageFile https://review.opendev.org/660191 | 16:19 |
openstackgerrit | David Shrewsbury proposed zuul/nodepool master: Use py3 pathlib in DibImageFile https://review.opendev.org/660191 | 16:21 |
*** mgoddard has quit IRC | 16:25 | |
*** mgoddard has joined #zuul | 16:28 | |
*** mattw4 has joined #zuul | 16:36 | |
*** mgoddard has quit IRC | 16:42 | |
*** mgoddard has joined #zuul | 16:43 | |
*** mattw4 has quit IRC | 17:07 | |
*** markwork has quit IRC | 17:10 | |
*** hashar has joined #zuul | 17:12 | |
*** jangutter_ has quit IRC | 17:14 | |
mordred | Shrews: if you get bored ... the admin experience when setting an autohold on a multi-node job isn't stellar | 17:20 |
mordred | Shrews: (although it's possible there's no good way to fix it) ... | 17:20 |
clarkb | mordred: how is multinode different from single node? | 17:20 |
*** mattw4 has joined #zuul | 17:20 | |
mordred | Shrews: main issues are - when you go to nodepool list, you just get a list of nodes with no indication of which member of the nodeset each one is | 17:21 |
mordred | for instance, when doing a hold on system-config-run-gitea - one of the nodes is gitea01.opendev.org and the others are not - so to debug things on that node, I need to trial-and-error to find the right one | 17:21 |
clarkb | ah yup you have to cross check against the job inventory | 17:22 |
mordred | then - when I'm done debugging, I need to nodepool delete each of them - there is no way to say "thanks, I'm done with all of the held nodes for this job" | 17:22 |
Shrews | mordred: does --detail not show a comment field with the hold reason? | 17:22 |
Shrews | I’m afk for lunch atm | 17:22 |
mordred | Shrews: it does - but the comment is the same for all nodes in a nodeset | 17:22 |
Shrews | I do not have ideas for improving that experience atm | 17:23 |
mordred | Shrews: so for the second, I can do a grep for all the things with that comment - but then I have to do 4 individual deletes | 17:23 |
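A one-liner sketch of the grep-then-delete step mordred describes, assuming nodepool's usual table output where the node id is the second |-delimited field; the hold reason is an illustrative value:

    nodepool list --detail | grep 'my hold reason' | awk -F'|' '{print $2}' | xargs -r -n1 nodepool delete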
mordred | Shrews: me neither | 17:23 |
*** jpena is now known as jpena|off | 17:24 | |
mordred | mostly just wanted to mention it to shove it into your brainhole | 17:24 |
clarkb | mordred: for the first one, I've had people take the inventory from the job that resulted in the hold; then they can look at names/IPs to sort out what is what | 17:24 |
mordred | clarkb: yeah - fair enough - and it's probably not a terrible idea to just look at the inventory instead of doing the nodepool list to find the node | 17:25 |
fungi | that's been the way i've tackled it | 17:26 |
mordred | however - that means I've got to leave shell land and go to browser land - so it's still not *ideal* - but is workable | 17:26 |
mordred | (also, if I wasn't tying the autohold to a specific change, I'd then have to figure out which change has the failing job - which I can do from the nodepool list --detail output since it mentions refs/changes/{change}) | 17:28 |
fungi | mordred: many command-line tools are able to fetch and display yaml served over http ;) | 17:31 |
*** saneax has joined #zuul | 17:32 | |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Add example statsd_exporter mapping https://review.opendev.org/660472 | 17:33 |
mordred | fungi: sure - but wow that would be a lot of typing - and I'd have to figure out the URL to fetch :) | 17:33 |
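A hedged sketch of fetching the inventory from the shell instead of a browser; the URL is a made-up example, on the assumption that the job publishes zuul-info/inventory.yaml alongside its logs (as the opendev base job does):

    curl -s https://logs.example.org/31/660231/2/check/my-job/abc1234/zuul-info/inventory.yaml | less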
tobiash_ | mordred: ^ | 17:34 |
mordred | tobiash_: awesome, thanks! | 17:34 |
tobiash_ | I'll upload the nodepool mapping in a sec | 17:34 |
*** tobiash_ is now known as tobiash | 17:34 | |
*** electrofelix has quit IRC | 17:36 | |
fungi | mordred: ...zuul builds api? | 17:36 |
fungi | but yes, i agree a browser is probably easier | 17:36 |
openstackgerrit | Tobias Henkel proposed zuul/nodepool master: Add statsd_exporter mapping https://review.opendev.org/660473 | 17:37 |
*** saneax has quit IRC | 17:41 | |
*** jamesmcarthur_ has quit IRC | 17:42 | |
*** saneax has joined #zuul | 17:42 | |
mordred | fungi: yeah - mostly - my workflow for this is "ssh to zuul01.openstack.org ; set autohold ; wait ; check to see if autohold on zuul goes away ; logout ; ssh to nl01.openstack.org ; nodepool list --detail | grep $comment ; try to ssh into nodes" | 17:45 |
mordred | which otherwise doesn't involve api calls or browsers | 17:45 |
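A concrete sketch of that hold-and-find flow; the tenant, project, job, and reason are illustrative values, not taken from the log:

    # on the scheduler: hold the nodes of the next failing build of the job
    zuul autohold --tenant openstack --project opendev/system-config \
        --job system-config-run-gitea --reason 'mordred: debugging gitea' --count 1
    # on the launcher, once the build has failed and the nodes are held:
    nodepool list --detail | grep 'debugging gitea'
    # then ssh to the held nodes using the IPs from that output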
fungi | instead of waiting for the autohold to go away i just switch to checking nodepool | 17:49 |
fungi | but otherwise yes | 17:49 |
mordred | fungi: you're frequently smarter than me | 17:53 |
*** jamesmcarthur has joined #zuul | 17:58 | |
fungi | nah, checking zuul until the autohold goes away is probably smarter since querying nodepool is a slower operation | 17:58 |
fungi | i'm just lazy | 17:58 |
fungi | and also tend to not start checking for held nodes immediately | 17:58 |
Shrews | mordred: perhaps what is needed is for zuul to begin storing autohold info in zk rather than in memory, with all actions, including node release/delete, done through the zuul client. | 18:01 |
Shrews | (as one possible bad/good/otherwise solution) | 18:02 |
*** jamesmcarthur has quit IRC | 18:02 | |
Shrews | zuul could then go crazy with updating the hold request with whatever info it has and a user may need | 18:04 |
fungi | also that would allow it to maintain autohold states across scheduler restarts | 18:06 |
fungi | right now you have to re-add them on restart if you still want them | 18:06 |
fungi | so seems like a generally good idea | 18:06 |
Shrews | zuul image builds seem b0rked again? | 18:07 |
clarkb | Shrews: I just looked and we had successfully built a trusty image within about the last hour | 18:08 |
clarkb | Shrews: do you have more info? | 18:08 |
Shrews | docker images | 18:08 |
Shrews | Service 'node' failed to build: Get https://registry-1.docker.io/v2/rastasheep/ubuntu-sshd/manifests/latest: unauthorized: incorrect username or password | 18:08 |
clarkb | oh, those image builds, sorry. That's the known issue | 18:08 |
clarkb | chances are that job ran in limestone, where we have ipv6 as primary ip connectivity and docker hub is ipv4-only, so we have to go over shared NAT with the buildset registry, and that is flaky | 18:09 |
Shrews | clarkb: so many images :) | 18:09 |
clarkb | logan-: ^ it might be good to figure out why that is flaky if you have time | 18:09 |
clarkb | logan-: maybe the table is too small or retention too long | 18:09 |
Shrews | it was limestone | 18:09 |
clarkb | Shrews: ianw has been working to spin up new opendev mirror nodes with tls which will allow those jobs to talk to our tls'd docker hub proxy with a dedicated ipv4 floating IP instead of through the flaky buildset registry proxy | 18:10 |
logan- | i was just seeing some weirdness with git.openstack.org https: http://paste.openstack.org/raw/751897/ | 18:10 |
clarkb | fungi: ^ your bad host is hitting files.o.o fwiw | 18:12 |
clarkb | but load average there is still low, ~1, and that is over ipv4 not v6 | 18:12 |
logan- | guess that's unrelated to the docker.io issue there | 18:12 |
*** jamesmcarthur has joined #zuul | 18:12 | |
clarkb | logan-: the docker issue in the job is from a limestone test node to docker hub via a registry proxy. This fails, so it then tries to access docker hub directly, which also fails because the creds are only valid for the registry proxy, not docker hub itself | 18:13 |
clarkb | logan-: I believe that the shared NAT for the test nodes may be to blame | 18:13 |
logan- | i can add some monitoring for registry-1.docker.io externally from the nodepool cloud to see if it is an issue of the nodepool cloud failing to hit that endpoint, or our network overall failing to hit that | 18:13 |
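A sketch of such an external probe; a healthy registry endpoint answers 401 (auth required) rather than timing out, so any other result suggests a connectivity problem:

    curl -s -o /dev/null -w '%{http_code}\n' --max-time 10 https://registry-1.docker.io/v2/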
logan- | yeah NAT port contention might be the case | 18:13 |
clarkb | it is also possible there is an ipv4 routing issue between those cloud nodes (though i would expect to see other failures, like for multinode testing, if that were the case) | 18:14 |
logan- | but it also says "unauthorized", so it did connect and get a response, right? | 18:14 |
*** panda has quit IRC | 18:14 | |
clarkb | logan-: yes. This is due to a quirk of how docker works. If you tell docker that a mirror is a mirror of dockerhub then it will always try to talk to dockerhub directly if the mirror fails | 18:15 |
logan- | ah ok | 18:15 |
clarkb | logan-: so in this case it first talks to the in-cloud registry proxy for dockerhub. That fails (possibly due to nat), then it connects directly and that succeeds | 18:15 |
*** panda has joined #zuul | 18:15 | |
clarkb | but it then fails at layer 7, doing auth that is only valid on the in-cloud registry proxy | 18:15 |
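The fallback clarkb describes comes from docker's registry-mirrors setting; a minimal sketch of how such a mirror is configured (the mirror URL is a made-up example):

    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://buildset-registry.example.org:5000"]
    }
    EOF
    sudo systemctl restart docker
    # when a pull through the mirror fails, dockerd silently retries against
    # registry-1.docker.io, where credentials issued for the mirror are rejected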
logan- | where is the docker registry proxy located? on the mirror? or on a different nodepool node? | 18:16 |
fungi | clarkb: hah, well at least it's not really a vulnerable system and can absorb the load. the problem with them hitting wiki.o.o was all the php scripts they were calling into blowing up cpu utilization | 18:16 |
clarkb | logan-: it is located on a single-use test node; in zuul's case, a different test node in the same cloud region | 18:17 |
logan- | got it | 18:17 |
clarkb | logan-: this is why we think replacing that functionality with the in cloud mirror which has a fip will help | 18:17 |
clarkb | but before we do that we want tls on those mirror nodes | 18:17 |
clarkb | (because we end up publishing the result of these image builds back to dockerhub) | 18:18 |
clarkb | one thing that makes this extra frustrating is docker hub seems to be hosted by an aws elb | 18:18 |
clarkb | which supports ipv6 termination | 18:18 |
logan- | yeah NAT port contention at the gateway is one possibility, ML2 population issues causing node-to-node connectivity issues is another possibility. but iirc nodepool tests multi-node jobs for that type of working connectivity before running the job so the latter is less likely I think. | 18:18 |
clarkb | if they just terminated an ipv6 addr on their load balancer we'd be golden | 18:19 |
*** jamesmcarthur_ has joined #zuul | 18:22 | |
fungi | ipv6: it's not for containering | 18:22 |
*** jamesmcarthur has quit IRC | 18:24 | |
*** mattw4 has quit IRC | 18:24 | |
*** mattw4 has joined #zuul | 18:28 | |
*** bhavikdbavishi has joined #zuul | 18:42 | |
*** sshnaidm is now known as sshnaidm|afk | 18:56 | |
*** bhavikdbavishi has quit IRC | 19:27 | |
*** jamesmcarthur_ has quit IRC | 19:44 | |
*** saneax has quit IRC | 19:51 | |
*** logan- has quit IRC | 19:55 | |
*** jamesmcarthur has joined #zuul | 20:02 | |
*** rlandy is now known as rlandy|biab | 20:06 | |
*** rlandy|biab is now known as rlandy | 20:39 | |
SpamapS | I will forgive random websites and even some web applications for not doing ipv6. But infrastructure like docker? >:| | 21:06 |
*** flepied has quit IRC | 21:07 | |
fungi | see also: ipv6 in kubernetes | 21:07 |
fungi | https://github.com/kubernetes/enhancements/issues/563 | 21:10 |
fungi | [spoiler: it's coming... real soon now] | 21:10 |
*** jamesmcarthur has quit IRC | 21:11 | |
*** logan- has joined #zuul | 21:16 | |
*** hashar has quit IRC | 21:20 | |
*** flepied has joined #zuul | 21:26 | |
*** flepied has quit IRC | 21:33 | |
*** tjgresha has joined #zuul | 22:51 | |
*** mattw4 has quit IRC | 23:11 | |
*** mattw4 has joined #zuul | 23:12 | |
*** rlandy has quit IRC | 23:58 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!