ianw | so how are we resolving the circular reference with the skopeo job? | 00:03 |
---|---|---|
ianw | i'm happy to babysit a force for 659296 if that's what we decided | 00:03 |
ianw | i feel like that clears the plate for 659221 | 00:05 |
clarkb | I think it may not be required and I've misunderstood some subtlety there | 00:05 |
openstackgerrit | Merged openstack/openstack-zuul-jobs master: Remove infra trusty puppet jobs https://review.opendev.org/659395 | 00:06 |
clarkb | that said the zuul project would probably appreciate fixing skopeo sooner than later so force merge may not be the worst idea | 00:06 |
clarkb | ianw I'm happy for that to be force merged to unblock them while we sort puppet | 00:06 |
ianw | ahh, right, so 659221 is green just with the puppet updates | 00:06 |
clarkb | ya | 00:07 |
ianw | so i can babysit 659221 if we like | 00:09 |
ianw | it seems the only maybe question is if it goes mad during bridge.o.o pulses and re-installs puppet | 00:09 |
clarkb | ya that would be the minor concern | 00:10 |
clarkb | if you are able to watch it go in I can keep an eye on irc and help if it goes sideways for about another 3-4 hours | 00:10 |
fungi | bridge.o.o shouldn't have puppet on it to begin with | 00:11 |
ianw | fungi: oh i mean on the puppet hosts, if somehow we missed something | 00:11 |
fungi | ahh | 00:11 |
ianw | during their runs from run_all | 00:11 |
clarkb | ianw the other minor concern I had with it is it may try to do things on trusty nodes that will fail | 00:11 |
clarkb | and not sure if that will prevent other updates on the trusty nodes | 00:12 |
clarkb | but actually that should only be a concern for ask | 00:12 |
ianw | well, i think there's only one way to find out ... i'll watch it | 00:12 |
clarkb | because ask is the only puppet 3 + trusty and the others will noop | 00:12 |
ianw | better to debug now than later :) | 00:12 |
ianw | also i have real internet to monitor ... FINALLY the right person to do the job came out. replaced the amplifier two pits up from mine, and removed the corroded and illegally split junctions in my pit and installed a proper 8-way splitter | 00:15 |
ianw | https://bit.ly/2Q4MkDz ... it was a miracle any signal was getting through at all | 00:19 |
*** yamamoto has quit IRC | 00:19 | |
fungi | the fault is clearly yours for expecting timely, competent work from a utility company | 00:19 |
fungi | looks like a portal to a darker, wetter dimension | 00:21 |
clarkb | just put more electrical tape on it | 00:21 |
fungi | looks like someone already tried that, but not quite hard enough | 00:23 |
ianw | it just wastes so much of everyone's time. to get a lead-in cable installed and this fixed has taken *literally* 9 technicians in various ways appearing. all these guys (they have all been guys) have driven all over town, appeared, variously made things worse | 00:24 |
ianw | this must play out thousands of times a day ... just time and effort and lifespan evaporating into the bureaucracy | 00:24 |
fungi | in my country we like to call that "job security" | 00:25 |
fungi | if they're all that incompetent, clearly none of them can be fired | 00:26 |
clarkb | my tech was actually really good | 00:27 |
clarkb | I got him to upgrade the fiber terminal so I could update to gig speeds if I want to pay for them and he terminated cat5 instead of coax with moca | 00:27 |
fungi | okay, so just to clarify... we *don't* think we'll need to bypass zuul for merging 659296? | 00:29 |
clarkb | correct | 00:29 |
ianw | yep, if we get the puppet stuff in, a recheck should work | 00:29 |
fungi | un-staging my verify +2 and taking myself back out of project bootstrappers then | 00:29 |
ianw | and also maybe the debootstrap ppa install bits for nodepool-builder for zigo -- which is where I started with all this yesterday too :) | 00:30 |
fungi | yeesh, this rabbit hole goes deep indeed | 00:30 |
clarkb | heh we had two different threads lead to the same spot | 00:31 |
*** gyee has quit IRC | 00:36 | |
*** xek has quit IRC | 00:37 | |
*** dpawlik has joined #openstack-infra | 00:38 | |
*** rascasoft has quit IRC | 00:39 | |
*** rascasoft has joined #openstack-infra | 00:41 | |
*** dpawlik has quit IRC | 00:42 | |
*** armax has quit IRC | 00:45 | |
*** diablo_rojo has quit IRC | 00:45 | |
*** ricolin has joined #openstack-infra | 00:50 | |
*** factor has joined #openstack-infra | 00:50 | |
ianw | hrm, it's -2'd itself | 00:53 |
ianw | Pulling registry (registry:2)...", "Get https://registry-1.docker.io/v2/library/registry/manifests/2: unauthorized: incorrect username or password" | 00:54 |
*** lseki has quit IRC | 00:54 | |
*** dpawlik has joined #openstack-infra | 00:54 | |
ianw | Due to high demand, we are working to enable repositories on release-archives. You can follow along here: https://tickets.puppetlabs.com/browse/CPR-685 | 00:57 |
clarkb | ianw that error is due to running on an ipv6 cloud I think, you can recheck | 00:58 |
*** cloudnull has joined #openstack-infra | 00:58 | |
*** dpawlik has quit IRC | 00:59 | |
*** d34dh0r53 has joined #openstack-infra | 01:01 | |
*** armax has joined #openstack-infra | 01:02 | |
fungi | and once we can get the artifacts fix merged, we need a scheduler restart before we give the release team a heads-up on resuming approving releases? | 01:05 |
clarkb | yup | 01:09 |
*** oyrogerg has joined #openstack-infra | 01:11 | |
fungi | barring new surprises, we ought to have that cleared up by their weekly meeting at 19:00 | 01:11 |
*** bgmccollum has joined #openstack-infra | 01:16 | |
ianw | i thought we had a full rspec test of ask.o.o ... but now on staging i'm seeing issues with jetty package name | 01:28 |
*** bgmccollum has quit IRC | 01:29 | |
ianw | ahh, we do ... for ask ... this is coming in via solr ... sigh :/ | 01:31 |
ianw | of course the upstream puppet-solr we use is dead ... | 01:31 |
*** bgmccollum has joined #openstack-infra | 01:33 | |
*** nicolasbock has quit IRC | 01:36 | |
*** armax has quit IRC | 01:37 | |
fungi | for some reason i thought we had our own puppet-solr mrmartin had created | 01:38 |
*** rh-jelabarre has quit IRC | 01:39 | |
ianw | it may literally be s/jetty/jetty8 but getting that one char into what it's doing seems annoying | 01:41 |
fungi | more or less annoying than creating an equivs package to alias jetty to jetty8? | 01:49 |
ianw | fungi: you read my mind :) trying that, but now it seems /usr/share/jetty is not /usr/share/jetty8 and that's important to puppet somehow :/ | 01:52 |
fungi | :( | 01:53 |
*** ijw has quit IRC | 01:55 | |
ianw | hrm, a symlink might do it ... now the service name is different | 01:57 |
ianw | this is a hot mess | 01:57 |
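For reference, the equivs approach fungi floated would look roughly like this; this is a sketch only, and the control-file contents, version string, and symlink target are assumptions rather than what actually landed:

```bash
# Build a dummy "jetty" package that simply depends on jetty8
# (equivs-build names the output <package>_<version>_all.deb):
sudo apt-get install -y equivs
cat > jetty-alias.control <<'EOF'
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: jetty
Version: 8.0-alias1
Depends: jetty8
Description: transitional dummy package aliasing jetty to jetty8
EOF
equivs-build jetty-alias.control
sudo dpkg -i jetty_8.0-alias1_all.deb

# ... plus the path shim ianw mentions for /usr/share/jetty:
sudo ln -s /usr/share/jetty8 /usr/share/jetty
```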
*** apetrich has quit IRC | 01:57 | |
openstackgerrit | Merged opendev/system-config master: Handle moved puppet repos https://review.opendev.org/659221 | 02:02 |
*** whoami-rajat has joined #openstack-infra | 02:06 | |
dangtrinhnt | fungi, Yes, you can just start doing that asap :) Many thanks :) | 02:08 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: Remove ask-staging from disabled list https://review.opendev.org/659421 | 02:11 |
*** dklyle has joined #openstack-infra | 02:14 | |
*** rlandy|bbl is now known as rlandy | 02:21 | |
*** dhellmann has quit IRC | 02:23 | |
*** dhellmann has joined #openstack-infra | 02:23 | |
*** kjackal has joined #openstack-infra | 02:23 | |
openstackgerrit | Merged opendev/storyboard master: update StoriesController so users subsribe to stories they create https://review.opendev.org/643546 | 02:24 |
*** kjackal has quit IRC | 02:31 | |
*** armax has joined #openstack-infra | 02:45 | |
*** yamamoto has joined #openstack-infra | 02:48 | |
clarkb | ianw the puppet fix still looking happy? | 02:50 |
clarkb | should we recheck the skopeo change too? (not sure how late it is for you) | 02:51 |
*** oyrogerg has quit IRC | 02:53 | |
*** oyrogerg has joined #openstack-infra | 02:54 | |
*** oyrogerg has quit IRC | 02:55 | |
*** dpawlik has joined #openstack-infra | 02:55 | |
*** dpawlik has quit IRC | 03:00 | |
*** armax has quit IRC | 03:05 | |
*** dpawlik has joined #openstack-infra | 03:11 | |
*** hongbin has joined #openstack-infra | 03:14 | |
*** dpawlik has quit IRC | 03:15 | |
*** bhavikdbavishi has joined #openstack-infra | 03:34 | |
ianw | clarkb: yeah, looking good | 03:36 |
*** ykarel|away has joined #openstack-infra | 03:40 | |
ianw | no weird errors, i was watching the log and it should have applied that tick | 03:40 |
*** ykarel|away is now known as ykarel | 03:49 | |
*** yamamoto has quit IRC | 03:50 | |
*** yamamoto has joined #openstack-infra | 03:51 | |
*** udesale has joined #openstack-infra | 03:53 | |
*** hongbin has quit IRC | 03:55 | |
openstackgerrit | Andreas Jaeger proposed openstack/openstack-zuul-jobs master: Remove unused jobs https://review.opendev.org/659340 | 04:07 |
fungi | someone's an early riser | 04:08 |
AJaeger | config-core, please review https://review.opendev.org/#/c/659365 and https://review.opendev.org/659295 | 04:08 |
AJaeger | someone's a late one ;) | 04:08 |
AJaeger | morning, fungi | 04:08 |
fungi | ;) | 04:08 |
fungi | morning to you too | 04:09 |
AJaeger | fungi, still fighting puppet? | 04:15 |
AJaeger | mnaser, fungi, thanks for late reviews! | 04:16 |
mnaser | not late for me :D | 04:16 |
* mnaser is in beijing | 04:16 | |
fungi | we *think* we're past that now | 04:16 |
AJaeger | mnaser: I see - have a great day | 04:17 |
AJaeger | fungi: good ;) | 04:17 |
openstackgerrit | Merged openstack/openstack-zuul-jobs master: Remove legacy-python-barbicanclient-* https://review.opendev.org/659295 | 04:18 |
fungi | currently trying to get 659296 merged so we can get new zuul changes to merge, including the 659329 artifacts fix which is currently blocking the release team | 04:18 |
openstackgerrit | Merged opendev/system-config master: Pin skopeo to unbreak skopeo+bubblewrap https://review.opendev.org/659296 | 04:19 |
tobiash | fungi: there is another artifacts fix which hit us yesterday: https://review.opendev.org/659262 | 04:19 |
fungi | and there's one more down | 04:20 |
tobiash | you probably want both merged | 04:20 |
fungi | so in theory once puppet rolls 659296 out to the executors, zuul image jobs should work again | 04:20 |
tobiash | I just picked both of them into our deployment as an emergency measure as we got hit by the same artifacts problem | 04:20 |
fungi | tobiash: good to know, thanks! | 04:20 |
ianw | tobiash: yeah, they say they're building repos for them now | 04:21 |
ianw | https://tickets.puppetlabs.com/browse/CPR-685 | 04:22 |
fungi | ianw: not the puppet packages, the artifacts bug which hung our release and release-post pipelines earlier | 04:22 |
tobiash | as the skopeo fix is merged we can recheck zuul changes (artifacts fixes first) in ~30min? | 04:23 |
fungi | zuul trying to treat jobs which pass artifacts as triggered from changes even when they aren't | 04:23 |
openstackgerrit | Merged openstack/project-config master: Move sqlalchemy-migrate jobs in-tree https://review.opendev.org/659365 | 04:23 |
fungi | tobiash: roughly | 04:24 |
tobiash | k | 04:24 |
*** janki has joined #openstack-infra | 04:43 | |
AJaeger | config-core, https://review.opendev.org/#/c/659340 is the last of my spring cleanup changes, reviews welcome (I just rechecked) | 04:53 |
AJaeger | and couple of new repos: https://review.opendev.org/657764 https://review.opendev.org/656657 https://review.opendev.org/657862 , please review | 04:53 |
*** cloudkiller has joined #openstack-infra | 04:55 | |
*** cloudkiller has quit IRC | 05:08 | |
openstackgerrit | Ian Wienand proposed opendev/puppet-askbot master: Move to apache 2.4 access always https://review.opendev.org/659436 | 05:10 |
*** cloudkiller has joined #openstack-infra | 05:11 | |
*** _d34dh0r53_ has joined #openstack-infra | 05:11 | |
*** ykarel is now known as ykarel|afk | 05:12 | |
*** dpawlik has joined #openstack-infra | 05:12 | |
*** ykarel|afk has quit IRC | 05:16 | |
*** dpawlik has quit IRC | 05:17 | |
*** dpawlik has joined #openstack-infra | 05:28 | |
*** ykarel|afk has joined #openstack-infra | 05:31 | |
*** ykarel|afk is now known as ykarel | 05:31 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: [WIP] zuul_stream: add debug to investigate tox-remote failures https://review.opendev.org/657914 | 05:40 |
*** yboaron_ has joined #openstack-infra | 05:40 | |
*** hamzy has quit IRC | 05:44 | |
*** jbadiapa has joined #openstack-infra | 05:49 | |
*** e0ne has joined #openstack-infra | 05:53 | |
*** kjackal has joined #openstack-infra | 05:59 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: [WIP] zuul_stream: add debug to investigate tox-remote failures https://review.opendev.org/657914 | 06:01 |
*** quiquell has quit IRC | 06:04 | |
*** quiquell has joined #openstack-infra | 06:04 | |
*** quiquell has quit IRC | 06:05 | |
*** udesale has quit IRC | 06:05 | |
*** udesale has joined #openstack-infra | 06:06 | |
*** quiquell has joined #openstack-infra | 06:06 | |
*** sarob has joined #openstack-infra | 06:09 | |
*** kjackal has quit IRC | 06:16 | |
*** sarob has quit IRC | 06:19 | |
*** mujahidali has joined #openstack-infra | 06:19 | |
*** hamzy has joined #openstack-infra | 06:21 | |
*** oyrogerg has joined #openstack-infra | 06:29 | |
*** ccamacho has joined #openstack-infra | 06:31 | |
*** e0ne has quit IRC | 06:36 | |
*** admcleod has quit IRC | 06:39 | |
*** pgaxatte has joined #openstack-infra | 06:41 | |
ianw | AJaeger: i'll review it, but it's an autumn cleanup for some ;) | 06:46 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: quick-start: pin docker image to 2.16.8 https://review.opendev.org/659454 | 06:46 |
openstackgerrit | melissaml proposed openstack/diskimage-builder master: Replace git.openstack.org URLs with opendev.org URLs https://review.opendev.org/658564 | 06:46 |
AJaeger | ianw: sorry ;) | 06:46 |
openstackgerrit | Ian Wienand proposed opendev/puppet-askbot master: Install redis 2.10.3 https://review.opendev.org/659455 | 06:50 |
*** udesale has quit IRC | 06:51 | |
*** udesale has joined #openstack-infra | 06:52 | |
openstackgerrit | Merged openstack/project-config master: Set up cyborg-tempest-plugin repository https://review.opendev.org/657764 | 06:53 |
*** xek has joined #openstack-infra | 06:54 | |
*** apetrich has joined #openstack-infra | 06:56 | |
*** rcernin has quit IRC | 07:01 | |
*** udesale has quit IRC | 07:02 | |
*** openstackgerrit has quit IRC | 07:03 | |
*** udesale has joined #openstack-infra | 07:03 | |
*** tesseract has joined #openstack-infra | 07:05 | |
*** jaosorior has quit IRC | 07:06 | |
*** ginopc has joined #openstack-infra | 07:07 | |
*** admcleod has joined #openstack-infra | 07:09 | |
*** kjackal has joined #openstack-infra | 07:10 | |
*** pcaruana has joined #openstack-infra | 07:13 | |
*** rpittau|afk is now known as rpittau | 07:17 | |
*** kopecmartin|off is now known as kopecmartin | 07:18 | |
*** xek has quit IRC | 07:21 | |
*** openstackgerrit has joined #openstack-infra | 07:21 | |
openstackgerrit | Ian Wienand proposed opendev/puppet-askbot master: Install redis 2.10.3 https://review.opendev.org/659455 | 07:21 |
*** ralonsoh has joined #openstack-infra | 07:23 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: [WIP] zuul_stream: add debug to investigate tox-remote failures https://review.opendev.org/657914 | 07:27 |
*** oyrogerg has quit IRC | 07:27 | |
openstackgerrit | Merged openstack/project-config master: Set up cyborg-tempest-plugin repository https://review.opendev.org/657767 | 07:31 |
*** jpena|off is now known as jpena | 07:32 | |
*** yamamoto has quit IRC | 07:33 | |
openstackgerrit | Andreas Jaeger proposed zuul/zuul master: bubblewrap: bind mount /etc/subuid https://review.opendev.org/659218 | 07:34 |
openstackgerrit | Andreas Jaeger proposed zuul/zuul master: Make image build jobs voting again https://review.opendev.org/659242 | 07:35 |
*** iurygregory has joined #openstack-infra | 07:35 | |
*** udesale has quit IRC | 07:36 | |
*** udesale has joined #openstack-infra | 07:37 | |
openstackgerrit | Ian Wienand proposed opendev/system-config master: ask.o.o : workaround old puppet-solr package https://review.opendev.org/659473 | 07:38 |
*** yamamoto has joined #openstack-infra | 07:39 | |
openstackgerrit | Ian Wienand proposed opendev/system-config master: Revert "Pin skopeo to unbreak skopeo+bubblewrap" https://review.opendev.org/659474 | 07:39 |
*** slaweq has joined #openstack-infra | 07:40 | |
ianw | clarkb: ^ yeah, so when around ... i looked in the /var/cache archives for a deb of 14 but couldn't find one. i dunno if it exists any more at all :/ | 07:40 |
*** amoralej|off is now known as amoralej | 07:41 | |
*** bhavikdbavishi has quit IRC | 07:43 | |
*** ykarel is now known as ykarel|lunch | 07:46 | |
*** tosky has joined #openstack-infra | 07:48 | |
*** tesseract has quit IRC | 07:52 | |
*** tesseract has joined #openstack-infra | 07:52 | |
*** e0ne has joined #openstack-infra | 08:08 | |
*** lucasagomes has joined #openstack-infra | 08:12 | |
*** pkopec has joined #openstack-infra | 08:14 | |
openstackgerrit | Tobias Rydberg proposed openstack/project-config master: Renaming of Public Cloud WG to Public Cloud SIG https://review.opendev.org/659489 | 08:16 |
*** tkajinam has quit IRC | 08:32 | |
*** ykarel|lunch is now known as ykarel | 08:33 | |
*** ginopc has quit IRC | 08:34 | |
*** derekh has joined #openstack-infra | 08:38 | |
*** dtantsur|afk is now known as dtantsur | 08:50 | |
*** ginopc has joined #openstack-infra | 09:06 | |
*** panda is now known as panda|rover | 09:17 | |
*** ginopc has quit IRC | 09:36 | |
*** ginopc has joined #openstack-infra | 09:40 | |
*** ykarel is now known as ykarel|meeting | 10:07 | |
*** yamamoto has quit IRC | 10:09 | |
*** tosky has quit IRC | 10:10 | |
*** tosky has joined #openstack-infra | 10:11 | |
openstackgerrit | Mark Meyer proposed zuul/zuul master: Create a basic Bitbucket event source https://review.opendev.org/658835 | 10:11 |
*** jaosorior has joined #openstack-infra | 10:12 | |
*** yamamoto has joined #openstack-infra | 10:20 | |
*** yamamoto has quit IRC | 10:25 | |
*** jlibosva has joined #openstack-infra | 10:26 | |
*** pcaruana has quit IRC | 10:27 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: [WIP] runner: enable reuse of a job-dir https://review.opendev.org/657955 | 10:27 |
*** xek has joined #openstack-infra | 10:27 | |
*** sshnaidm|afk is now known as sshnaidm | 10:38 | |
*** ricolin has quit IRC | 10:39 | |
*** kjackal has quit IRC | 10:42 | |
*** ykarel|meeting is now known as ykarel | 10:53 | |
*** nicolasbock has joined #openstack-infra | 10:57 | |
*** bhavikdbavishi has joined #openstack-infra | 11:00 | |
*** jpena is now known as jpena|lunch | 11:03 | |
*** amoralej is now known as amoralej|lunch | 11:04 | |
*** panda|rover is now known as panda|rover|eat | 11:04 | |
*** kjackal has joined #openstack-infra | 11:11 | |
*** ykarel is now known as ykarel|afk | 11:11 | |
*** mujahidali has quit IRC | 11:13 | |
*** udesale has quit IRC | 11:15 | |
*** ginopc has quit IRC | 11:28 | |
*** pcaruana has joined #openstack-infra | 11:37 | |
fungi | okay, so to recap discussion from #zuul, the skopeo pin failed to downgrade our executors because the ppa built yet another package in the meantime and only keeps at most 2 on hand, so the version we were hoping to roll back to no longer exists there | 11:40 |
fungi | so we could try to manually build that version and distribute it to the executors | 11:41 |
fungi | or point the ppa at a forked copy of the skopeo repo with the git state we want | 11:41 |
fungi | however, if we're going to that trouble, tristanC has pushed a pr for this problem which looks likely to get approved by the skopeo maintainers in short order, so we could i suppose build with that | 11:42 |
fungi | in unrelated news, the gerrit 3.0.0 release yesterday *also* broke the zuul quickstart jobs because of new acls needed for project creation | 11:42 |
*** yamamoto has joined #openstack-infra | 11:44 | |
*** yamamoto has quit IRC | 11:45 | |
*** yamamoto has joined #openstack-infra | 11:45 | |
*** dtantsur is now known as dtantsur|brb | 11:46 | |
*** ykarel|afk is now known as ykarel | 11:59 | |
*** yamamoto has quit IRC | 12:03 | |
*** gfidente has joined #openstack-infra | 12:04 | |
*** panda|rover|eat is now known as panda | 12:06 | |
*** yamamoto has joined #openstack-infra | 12:10 | |
*** panda is now known as panda|rover | 12:11 | |
*** yamamoto has quit IRC | 12:19 | |
*** rh-jelabarre has joined #openstack-infra | 12:21 | |
*** gfidente has quit IRC | 12:23 | |
*** gfidente has joined #openstack-infra | 12:24 | |
*** iurygregory has quit IRC | 12:27 | |
*** janki has quit IRC | 12:28 | |
*** jpena|lunch is now known as jpena | 12:29 | |
*** amoralej|lunch is now known as amoralej | 12:32 | |
*** rlandy has joined #openstack-infra | 12:34 | |
openstackgerrit | Hervé Beraud proposed openstack/pbr master: Update Sphinx requirement https://review.opendev.org/659546 | 12:36 |
*** rh-jelabarre has quit IRC | 12:39 | |
AJaeger | fungi, and https://review.opendev.org/659218 might finally merge to allow moving forward with current state. | 12:39 |
fungi | ahh, cool, i entirely missed that solution had been proposed | 12:41 |
fungi | was currently busy setting up a xenial build chroot on my workstation to try and rebuild from the skopeo source packages in the ppa | 12:41 |
openstackgerrit | Merged zuul/zuul master: bubblewrap: bind mount /etc/subuid https://review.opendev.org/659218 | 12:43 |
fungi | infra-root: looks like if we want to take advantage of ^ we're going to need a restart of all the executors once it gets installed on them | 12:47 |
*** aaronsheffield has joined #openstack-infra | 12:50 | |
fungi | i'm watching for it to roll out and will then get the executor daemons restarted once i've confirmed that commit is installed on all of them | 12:54 |
*** snapiri has quit IRC | 12:56 | |
*** ccamacho has quit IRC | 12:56 | |
*** ccamacho has joined #openstack-infra | 12:57 | |
*** ginopc has joined #openstack-infra | 12:57 | |
tristanC | fungi: bind mounting /etc/subuid will fix the current error but skopeo will still fail later when trying to create the user namespace | 13:01 |
*** lseki has joined #openstack-infra | 13:01 | |
fungi | oh | 13:01 |
fungi | so we do still need a downgraded or patched skopeo installed on our executors anyway? | 13:01 |
*** udesale has joined #openstack-infra | 13:02 | |
tristanC | fungi: iiuc yes | 13:02 |
fungi | in that case i'll go back to getting a build environment put together | 13:02 |
tristanC | fungi: you can check the version by running grep "subuid mapping" $(which skopeo); that shouldn't match on an unaffected version | 13:03 |
fungi | thanks | 13:03 |
*** jcoufal has joined #openstack-infra | 13:07 | |
*** rfarr has joined #openstack-infra | 13:11 | |
clarkb | maybe we should write our own tool like everyone else but this one can actually work >_> | 13:13 |
*** mriedem has joined #openstack-infra | 13:15 | |
*** yamamoto has joined #openstack-infra | 13:15 | |
*** dtantsur|brb is now known as dtantsur | 13:18 | |
mordred | clarkb: :) | 13:18 |
mordred | wait - so the old versions of skopeo *aren't* in the ppa? they're still listed on the page | 13:19 |
fungi | mordred: http://ppa.launchpad.net/projectatomic/ppa/ubuntu/pool/main/s/skopeo/ | 13:21 |
mordred | ugh. yeah | 13:21 |
*** Goneri has joined #openstack-infra | 13:21 | |
mordred | oh - awesome - tristanC pushed a PR to skopeo ... thanks tristanC ! | 13:23 |
* mordred hands tristanC a cookie | 13:23 | |
openstackgerrit | Mark Meyer proposed zuul/zuul master: Create a basic Bitbucket event source https://review.opendev.org/658835 | 13:24 |
clarkb | where ia the PR? | 13:24 |
fungi | in fact, only the latest package is technically in that repo | 13:24 |
fungi | wget -qO- http://ppa.launchpad.net/projectatomic/ppa/ubuntu/dists/xenial/main/binary-amd64/Packages.gz|zgrep -A6 '^Package: skopeo$' | 13:24 |
fungi | just one package entry, Version: 0.1.36-1~dev~ubuntu16.04.2~ppa19 | 13:25 |
*** ianychoi has joined #openstack-infra | 13:26 | |
fungi | clarkb: https://github.com/containers/skopeo/pull/653 | 13:27 |
*** xek_ has joined #openstack-infra | 13:28 | |
clarkb | thanks | 13:28 |
clarkb | interesting how parsing an image name results in a chown | 13:28 |
clarkb | good to see they are willing to fix the issue for us | 13:29 |
*** xek has quit IRC | 13:30 | |
openstackgerrit | Hervé Beraud proposed openstack/pbr master: Update Sphinx requirement https://review.opendev.org/659546 | 13:33 |
*** sreejithp has joined #openstack-infra | 13:34 | |
fungi | okay, i've now got a working xenial chroot (via mmdebstrap from debian/sid) with the skopeo build-deps installed into it, working on confirming i can build the binary packages from the latest source package in that ppa before moving on to patching it with tristanC's pr | 13:37 |
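The chroot setup fungi describes would go something like this; the mirror URL, target path, and the exact .dsc filename are assumptions:

```bash
# Bootstrap a xenial build chroot from a Debian sid host:
sudo apt-get install -y mmdebstrap
sudo mmdebstrap --variant=buildd xenial /srv/chroot/xenial \
    http://archive.ubuntu.com/ubuntu/

# Inside the chroot, fetch the ppa source package and build it unmodified
# first, to confirm the toolchain works:
sudo chroot /srv/chroot/xenial /bin/bash
apt-get install -y devscripts wget
dget -ux http://ppa.launchpad.net/projectatomic/ppa/ubuntu/pool/main/s/skopeo/skopeo_0.1.36-1~dev~ubuntu16.04.2~ppa19.dsc
cd skopeo-0.1.36*/
apt-get build-dep -y ./
dpkg-buildpackage -b -us -uc
```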
*** ginopc has quit IRC | 13:40 | |
*** rh-jelabarre has joined #openstack-infra | 13:54 | |
*** pcaruana has quit IRC | 13:56 | |
fungi | that much seems to have worked. now trying a patched build | 13:56 |
*** ginopc has joined #openstack-infra | 13:57 | |
fungi | infra-root: assuming this build completes, should i put ze01 in the emergency file and try installing the resulting package there? | 13:58 |
fungi | it's effectively the current package from the ppa but with https://patch-diff.githubusercontent.com/raw/containers/skopeo/pull/653.patch applied | 13:58 |
fungi | (emergency file would be to prevent us from installing yet another new build from the ppa which may not yet contain that commit) | 13:59 |
fungi | the patched package seems to have built successfully | 14:00 |
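A sketch of the patched rebuild, run in the unpacked source directory from the previous step; the -p1 patch depth and changelog message are assumptions, while the version string matches the ~ppa19.1 fungi quotes below:

```bash
# Apply the upstream PR on top of the ppa source:
curl -L https://patch-diff.githubusercontent.com/raw/containers/skopeo/pull/653.patch \
    | patch -p1
# Bump the package version so it sorts above the ppa's ~ppa19:
dch -v '0.1.36-1~dev~ubuntu16.04.2~ppa19.1' --distribution xenial \
    'Apply containers/skopeo pull request 653 (subuid mapping fix)'
dpkg-buildpackage -b -us -uc
```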
mordred | fungi: wfm | 14:00 |
clarkb | fungi: fine with me | 14:01 |
fungi | i now have a skopeo_0.1.36-1~dev~ubuntu16.04.2~ppa19.1_amd64.deb in ~fungi on ze01 | 14:01 |
fungi | (note the 19.1 instead of just 19) | 14:01 |
fungi | though the broken ensure for that package resource i think means there's no need to actually add the executors to the disable list since puppet is repeatedly failing to find the package version we're asserting there anyway | 14:02 |
mriedem | http://status.openstack.org/reviews/#nova was last refreshed last week | 14:03 |
mriedem | is that page broken? | 14:03 |
fungi | so going to upgrade ze01's skopeo package from the 0.1.36-1~dev~ubuntu16.04.2~ppa17 currently present to the 0.1.36-1~dev~ubuntu16.04.2~ppa19.1 i just built | 14:03 |
*** rfarr has quit IRC | 14:04 | |
*** iurygregory has joined #openstack-infra | 14:04 | |
*** rfarr has joined #openstack-infra | 14:04 | |
fungi | Unpacking skopeo (0.1.36-1~dev~ubuntu16.04.2~ppa19.1) over (0.1.36-1~dev~ubuntu16.04.2~ppa17) ... | 14:07 |
fungi | #status log temporarily upgraded skopeo on ze01 to manually-built 0.1.36-1~dev~ubuntu16.04.2~ppa19.1 package with https://github.com/containers/skopeo/pull/653 applied | 14:08 |
openstackstatus | fungi: finished logging | 14:08 |
*** ginopc has quit IRC | 14:08 | |
fungi | anything else we should be doing other than just making sure ze01 doesn't fall over before we upgrade skopeo to the patched package on the remaining executors? | 14:13 |
clarkb | I think that is probably it | 14:13 |
clarkb | in particular that bwrap continues to run jobs successfuly | 14:13 |
clarkb | since it and atomic container stuff play in similar spaces | 14:14 |
*** HenryG has quit IRC | 14:14 | |
*** ykarel is now known as ykarel|afk | 14:14 | |
*** ykarel|afk is now known as ykarel | 14:15 | |
fungi | i'll take a quick look on status.o.o and see why reviewstats has ceased updating | 14:17 |
*** ykarel is now known as ykarel|away | 14:17 | |
*** tdasilva has joined #openstack-infra | 14:18 | |
*** HenryG has joined #openstack-infra | 14:20 | |
*** cloudkiller is now known as blah23 | 14:27 | |
*** blah23 is now known as cloudnull|afk | 14:27 | |
*** cloudnull has quit IRC | 14:31 | |
*** cloudnull|afk is now known as cloudnull | 14:31 | |
fungi | bit of a head scratcher... | 14:32 |
fungi | http://paste.openstack.org/show/751478/ is all i find (repeated over and over) in /var/log/reviewday.log | 14:32 |
*** yamamoto has quit IRC | 14:33 | |
fungi | the cheetah package doesn't seem to be installed at all though | 14:33 |
fungi | which would of course explain that exception, but... | 14:33 |
fungi | i can't imagine what would have suddenly uninstalled it a week ago | 14:33 |
*** yamamoto has joined #openstack-infra | 14:33 | |
*** yamamoto has quit IRC | 14:33 | |
fungi | the reviewday and puppet-reviewday repos haven't seen new commits in a while | 14:34 |
*** yamamoto has joined #openstack-infra | 14:34 | |
fungi | not since the opendev migration | 14:34 |
fungi | status.o.o coincidentally is still on trusty and has an uptime of 490 days | 14:35 |
fungi | so no rebuild a week ago to explain it | 14:36 |
*** d34dh0r53 has quit IRC | 14:36 | |
*** d34dh0r53 has joined #openstack-infra | 14:37 | |
*** armax has joined #openstack-infra | 14:37 | |
fungi | #status log deleted groups.openstack.org and groups-dev.openstack.org servers, along with their corresponding trove instances | 14:37 |
openstackstatus | fungi: finished logging | 14:37 |
*** pcaruana has joined #openstack-infra | 14:37 | |
*** Lucas_Gray has joined #openstack-infra | 14:37 | |
Shrews | fungi: maybe that error has always been there? | 14:38 |
fungi | perhaps... | 14:38 |
fungi | unfortunately we only have one week of retention on that log file, so hard to know | 14:39 |
*** yamamoto has quit IRC | 14:39 | |
fungi | and i can `flock -n /var/lib/reviewday/update.lock bash` without any trouble, so doesn't seem to be a stale lock causing this | 14:41 |
*** d34dh0r53 is now known as Guest28904 | 14:41 | |
openstackgerrit | Paul Belanger proposed zuul/zuul master: Support Ansible 2.8 https://review.opendev.org/631933 | 14:42 |
openstackgerrit | Merged zuul/zuul master: Fix missing check if logger is None https://review.opendev.org/659262 | 14:44 |
fungi | https://opendev.org/opendev/puppet-reviewday/src/branch/master/manifests/site.pp#L83-L89 should fire each time a change merges to the reviewday repo | 14:45 |
fungi | and pip freeze says everything in https://opendev.org/openstack/reviewday/src/branch/master/requirements.txt is installed *except* cheetah | 14:46 |
fungi | i'm going to attempt to run that exec manually and see what it does | 14:46 |
fungi | and now cheetah is installed. no errors | 14:48 |
fungi | and manually running what the cronjob would do seems to be generating content now | 14:48 |
AJaeger | clarkb, could you review https://review.opendev.org/#/c/659340/ https://review.opendev.org/657862 https://review.opendev.org/656657 and https://review.opendev.org/657657 , please? | 14:49 |
AJaeger | or any other config-core ^ | 14:49 |
fungi | i wonder if this is more mysterious fallout from the great dockering accident, though i can't immediately imagine how | 14:49 |
zigo | Where has the cgit link in gerrit gone? :/ | 14:50 |
openstackgerrit | Paul Belanger proposed zuul/zuul master: executor: use node python path https://review.opendev.org/637339 | 14:51 |
openstackgerrit | Paul Belanger proposed zuul/zuul master: Update quickstart nodepool node to python3 https://review.opendev.org/658486 | 14:51 |
fungi | zigo: we still need to revisit fixing that. we had to temporarily set it to a local gitweb link on the gerrit server (until we can start replicating all in-flight review refs to gitea), but there's something not quite right with the local gitweb i think? | 14:52 |
zigo | fungi: It's just that the link is just gone, nowhere to click so I can download the patch as a file ... | 14:52 |
zigo | Or am I missing a new place where it is supposed to be? | 14:53 |
fungi | the good news is that gitea merged corvus's fix for the ref replication problem so switching to that's probably not far out now (next time they tag a release, unless we want to use unreleased gitea in the interim) | 14:53 |
mordred | fungi: yeah - I think we *could* use the stuff from corvus' branch since they've merged it upstream (so it shouldn't be dangerous to rely on an unmerged feature) | 14:54 |
mordred | but we could also just wait | 14:54 |
fungi | zigo: yes, the gitweb link is currently missing. i can't remember if we merged a patch to fix the configuration problem which caused it to not be displayed, and if so we may simply be pending a gerrit restart for that to take effect | 14:54 |
AJaeger | zigo: go to the upper left, there's a "Download" drop down | 14:54 |
*** UdayTKumar has joined #openstack-infra | 14:54 | |
AJaeger | zigo, that offers "Patch-File" | 14:55 |
mordred | fungi: we'll still need to actually replicate the refs - which will likely take a *while* - so we should probably set up something that's not gerrit to do a push of all the refs/* for all of the repos to all of the giteas before we turn on gerrit replication of everything again | 14:55 |
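A hypothetical shape for that one-shot pre-seeding, run from the gerrit host; the gitea remote URL is made up for illustration, and the git dir assumes the usual review_site layout:

```bash
# Mirror every ref (refs/changes/*, refs/heads/*, tags, ...) of every
# project straight from gerrit's on-disk repos to a gitea backend:
for repo in $(ssh -p 29418 localhost gerrit ls-projects); do
    git --git-dir="/home/gerrit2/review_site/git/${repo}.git" \
        push --mirror "ssh://git@gitea.example.org/${repo}.git"
done
```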
*** smarcet has joined #openstack-infra | 14:56 | |
clarkb | fungi: the gitweb link is missing because our config for that was incomplete | 14:56 |
clarkb | fungi: mordred tested a fix for this at the ptg on review-dev and I think got a change up for that which if it hasn't merged yet needs to be merged then gerrit restarted again | 14:56 |
*** jcoufal_ has joined #openstack-infra | 14:56 | |
openstackgerrit | Merged zuul/zuul master: Handle artifacts on non change types. https://review.opendev.org/659329 | 14:57 |
mordred | oh - right. lemme find link | 14:57 |
fungi | that matches my recollection, other than i couldn't recall if the fix had been written or merely discussed | 14:57 |
zigo | AJaeger: I can base64 decode it, but it's not super convenient ! :) | 14:57 |
zigo | Hum... that's enough with the zip ... :P | 14:58 |
zigo | Thanks. | 14:58 |
clarkb | oh cool the fix for the artifacts thing just merged \o/ | 14:58 |
mordred | https://review.opendev.org/#/c/656809/ | 14:58 |
mordred | clarkb: I think it's all been landed/applied - we just need to restart gerrit | 14:58 |
fungi | awesome. so many restarts | 14:58 |
mordred | RESTART ALL THE THINGS | 14:59 |
fungi | clarkb: tobiash mentioned there are two artifacts fixes we should make sure are applied before restarting the scheduler | 14:59 |
tobiash | fungi: both have just landed | 14:59 |
smcginnis | clarkb: With that artifacts fix, are we all clear to do any javascript releases if they come up? | 14:59 |
fungi | perfect! | 14:59 |
*** jcoufal has quit IRC | 14:59 | |
fungi | smcginnis: needs a scheduler restart once the code is rolled out to the server | 15:00 |
clarkb | smcginnis: we need to restart the zuul scheduler first | 15:00 |
smcginnis | K, will watch for a restart notice before touching anything. Though I don't believe there are any pending. | 15:00 |
*** bhavikdbavishi has quit IRC | 15:00 | |
*** smarcet has quit IRC | 15:08 | |
clarkb | smcginnis: I'll try to remember to ping the release channel when we get to restarting the zuul scheduler | 15:12 |
smcginnis | Thanks clarkb | 15:16 |
*** eernst has joined #openstack-infra | 15:16 | |
clarkb | fungi: mordred I need to find breakfast and boot my day, but afterwards I can be around to help facilitate zuul and gerrit restarts | 15:17 |
clarkb | fungi: do we want to install skopeo on all the executors first? that seems like a low impact improvement | 15:17 |
*** eernst has quit IRC | 15:18 | |
openstackgerrit | Merged zuul/zuul master: Fix occasional gate resets with github https://review.opendev.org/650387 | 15:19 |
mordred | clarkb: yeah. I think installing the edited skopeo everywhere seems like a win | 15:19 |
*** ccamacho has quit IRC | 15:26 | |
*** sarob has joined #openstack-infra | 15:27 | |
openstackgerrit | Paul Belanger proposed zuul/zuul master: Support Ansible 2.8 https://review.opendev.org/631933 | 15:28 |
*** eernst has joined #openstack-infra | 15:28 | |
fungi | clarkb: i can go ahead and roll that patched skopeo out to the remaining executors since ze01 is still up and running jobs | 15:30 |
clarkb | fungi: I think that would be a good next step in the fixing of the list of problems | 15:30 |
clarkb | then if we can get successful zuul docker image build jobs we can flip those back to voting (as well as publish images to dockerhub on merge again) | 15:30 |
*** smarcet has joined #openstack-infra | 15:32 | |
mordred | ++ | 15:32 |
*** roman_g has quit IRC | 15:36 | |
*** _d34dh0r53_ is now known as d34dh0r53 | 15:38 | |
*** e0ne has quit IRC | 15:38 | |
*** ricolin has joined #openstack-infra | 15:38 | |
openstackgerrit | Merged zuul/zuul master: Centralize logging adapters https://review.opendev.org/658644 | 15:39 |
clarkb | as a sanity check before we restart things, zuul and nodepool (we aren't restarting nodepool) memory use continues to look good | 15:41 |
clarkb | mordred: have you confirmed the needed config update is in place on review.o.o? | 15:43 |
clarkb | I'm checking zuul.o.o now to see if it has updated its zuul install yet | 15:44 |
*** ykarel|away has quit IRC | 15:44 | |
*** yboaron_ has quit IRC | 15:44 | |
clarkb | zuul==3.8.2.dev14 # git sha 27492fc is the version of zuul installed on zuul01.o.o, which includes d66b504804cede76f7c57d15a20ca5c902f1b8f6, the fix for the release stuff | 15:45 |
mordred | clarkb: yes. I believe the config on disk on review.o.o is correct | 15:45 |
clarkb | so I think the zuul scheduler is ready when we are /me looks at status page now | 15:45 |
mordred | it lists type = gitweb and cgi = /usr/share/gitweb/gitweb.cgi | 15:45 |
mordred | and /usr/share/gitweb/gitweb.cgi is on the filesystem | 15:45 |
clarkb | there are a few things in the post pipeline that I think we may as well let filter through, but I can do the scheduler restart in a few minutes when that pipeline is shorter | 15:46 |
clarkb | also some requirements changes are about to merge, should let those through I guess | 15:46 |
clarkb | about 20 minutes away for that | 15:46 |
fungi | xorg crashed out on my workstation so trying to get that back to sanity before i copy the skopeo packages around | 15:47 |
fungi | should be done in a couple more minutes | 15:47 |
*** pcaruana has quit IRC | 15:47 | |
Shrews | clarkb: agreed, nodepool memory usage is surprisingly good | 15:48 |
Shrews | (he said, breathing a heavy sigh of relief) | 15:48 |
openstackgerrit | Fabien Boucher proposed zuul/zuul master: Pagure driver - https://pagure.io/pagure/ https://review.opendev.org/604404 | 15:49 |
*** yamamoto has joined #openstack-infra | 15:50 | |
openstackgerrit | Fabien Boucher proposed zuul/zuul master: Pagure driver - https://pagure.io/pagure/ https://review.opendev.org/604404 | 15:50 |
openstackgerrit | Merged zuul/zuul master: web: upgrade react and react-scripts to ^2.0.0 https://review.opendev.org/631902 | 15:53 |
*** iurygregory has quit IRC | 15:55 | |
fungi | okay, patched skopeo package copied to all remaining executors and working on installing it on them now | 15:56 |
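The rollout itself is just a copy-and-install loop along these lines (host names and paths are assumptions; note ze11/ze12 were initially missed, per the follow-up later in the log):

```bash
PKG=skopeo_0.1.36-1~dev~ubuntu16.04.2~ppa19.1_amd64.deb
for n in 02 03 04 05 06 07 08 09 10 11 12; do
    scp "$PKG" "root@ze${n}.openstack.org:/tmp/"
    ssh "root@ze${n}.openstack.org" "dpkg -i /tmp/${PKG}"
done
```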
*** pgaxatte has quit IRC | 15:56 | |
fungi | #status log temporarily upgraded skopeo on all zuul executors to manually-built 0.1.36-1~dev~ubuntu16.04.2~ppa19.1 package with https://github.com/containers/skopeo/pull/653 applied | 15:57 |
openstackstatus | fungi: finished logging | 15:57 |
*** yamamoto has quit IRC | 15:58 | |
clarkb | there are some zuul changes queued in the gate that should test if this is mostly working for us now | 15:58 |
*** rpittau is now known as rpittau|afk | 15:58 | |
*** imacdonn has quit IRC | 15:58 | |
clarkb | http://zuul.openstack.org/stream/e83a7906741b44acb1eb643d9f59816e?logfile=console.log may have started late enough | 15:58 |
*** lucasagomes has quit IRC | 15:58 | |
*** eernst has quit IRC | 15:59 | |
*** kopecmartin is now known as kopecmartin|off | 16:01 | |
*** imacdonn has joined #openstack-infra | 16:02 | |
*** ykarel|away has joined #openstack-infra | 16:04 | |
*** dims has quit IRC | 16:05 | |
*** tesseract has quit IRC | 16:05 | |
fungi | anybody else having trouble loading our grafana zuul dashboard? all i get is little spinnies where the graphs should appear... http://grafana.openstack.org/d/T6vSHcSik/zuul-status?orgId=1 | 16:08 |
clarkb | works here | 16:08 |
clarkb | try a hard refresh maybe? | 16:08 |
fungi | k, likely just my browser being weird | 16:08 |
clarkb | graphite is behind the le cert now and maybe your js is trying to talk to the old thing still? | 16:08 |
fungi | oh, could be | 16:09 |
*** smarcet has quit IRC | 16:09 | |
fungi | oh! hah | 16:10 |
fungi | eff privacy badger is blocking the callouts to graphite.o.o from that page ;) | 16:10 |
fungi | yep, unblocking it seems to have solved my problem | 16:12 |
fungi | not sure why that just started | 16:12 |
clarkb | related to that we seem to be underutilizing our clouds' resources | 16:12 |
clarkb | I wonder if a cloud is having a sad | 16:12 |
mloza | What is best way to to do anti affinity between racks not only compute nodes? | 16:12 |
clarkb | mloza: we probably aren't the best group to ask. While we consume a lot of openstack clouds we don't currently run any (we run the developer infrastructure for openstack) | 16:13 |
fungi | mloza: i couldn't begin to tell you... this is the channel where we design and run various collaboration services for the openstack community. you may be better off asking in #openstack | 16:13 |
clarkb | or sending email to the openstack-discuss mailing list | 16:13 |
mloza | oh ok | 16:14 |
mloza | sorry | 16:14 |
*** sarob has quit IRC | 16:16 | |
AJaeger | zuul-build-image succeeded on https://review.opendev.org/580547 | 16:17 |
openstackgerrit | Merged zuul/zuul master: Cleanup executor specific requirements https://review.opendev.org/649428 | 16:17 |
AJaeger | Woohoo! | 16:17 |
clarkb | yay the spice is flowing again | 16:17 |
* AJaeger rechecked https://review.opendev.org/659242 - to make that job voting again. | 16:18 | |
clarkb | I'm waiting on one job to finish in the openstack gate before I restart the zuul scheduler | 16:18 |
*** dtantsur is now known as dtantsur|afk | 16:19 | |
AJaeger | clarkb: want to review https://review.opendev.org/659340 in the meantime? Or should I self-approve removal of unused jobs from ozj? | 16:20 |
clarkb | AJaeger: done | 16:21 |
AJaeger | thanks | 16:21 |
clarkb | (we probably can single core approve those cleanups though zuul should check if we do something wrong with them) | 16:21 |
clarkb | ok I am saving queues now and restarting the zuul scheduler | 16:22 |
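The save-and-restore clarkb refers to has historically been done with zuul's zuul-changes.py helper, which dumps re-enqueue commands for everything in flight; the install path and URL below are assumptions:

```bash
# Capture the current pipeline contents as a re-runnable script:
python3 /opt/zuul/tools/zuul-changes.py http://zuul.openstack.org > queues.sh
# ... stop the scheduler, start it, wait for the config to load, then:
bash queues.sh
```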
AJaeger | clarkb, ok - will do next time. | 16:22 |
*** ykarel|away has quit IRC | 16:22 | |
clarkb | the stop command has been issued and I'm waiting for it to actually stop | 16:23 |
clarkb | things are on their way back now | 16:25 |
clarkb | will reload queues once zuul has loaded its config | 16:25 |
fungi | i just realized i forgot to upgrade skopeo on ze11 and ze12 but just finished fixing it | 16:25 |
fungi | so if we see any similar-looking failures logged between 1557 and now, it likely ran from one of those | 16:26 |
*** sarob has joined #openstack-infra | 16:26 | |
clarkb | I wonder if we can get zuul to remember its config when we restart it to speed things up | 16:27 |
fungi | obvious risk for that is when new code changes the way that configuration would be parsed | 16:28 |
fungi | you always want to remember the configuration except for those times when you don't | 16:28 |
*** mattw4 has joined #openstack-infra | 16:28 | |
clarkb | ya | 16:28 |
fungi | could end up being a source of subtle confusion | 16:28 |
clarkb | I guess we had a recent case of that too (the security fix) | 16:29 |
fungi | as for under-utilizing our clouds, we're not really running much of a node request backlog, so some of that is likely just it being a "slow" week | 16:30 |
fungi | post-summit rush has played out, and now everyone's overcome with post-summit exhaustion | 16:30 |
clarkb | gate is reloaded, check is on its way now | 16:30 |
*** sarob has quit IRC | 16:32 | |
clarkb | check is loaded | 16:36 |
*** sshnaidm is now known as sshnaidm|afk | 16:37 | |
openstackgerrit | Merged openstack/openstack-zuul-jobs master: Remove unused jobs https://review.opendev.org/659340 | 16:37 |
clarkb | #status log Restarted zuul-scheduler to pick up a fix for zuul artifact handling that affected non change object types like tags as part of release pipelines. Now running on ee4b6b1a27d1de95a605e188ae9e36d7f1597ebb | 16:40 |
openstackstatus | clarkb: finished logging | 16:40 |
fungi | safe to let #openstack-release know we're open for business again? | 16:40 |
openstackgerrit | Andreas Jaeger proposed openstack/project-config master: Update node to version 10 for release-zuul-python https://review.opendev.org/659620 | 16:40 |
fungi | or should we also do the gerrit restart first and wait for replication to quiesce? | 16:40 |
clarkb | fungi: I let them know about the zuul restart already | 16:41 |
clarkb | I'm torn on the gerrit restart. I personally never use the gitweb links so don't find them to be a major aid | 16:41 |
clarkb | but if we are gonna do it we should probably just take the hit and get it done with | 16:41 |
fungi | various people keep bringing it up, so clearly some were relying on it even if we haven't | 16:42 |
clarkb | mordred: are you still around? | 16:42 |
*** ricolin has quit IRC | 16:44 | |
openstackgerrit | Logan V proposed openstack/project-config master: Increase nodepool image journal size https://review.opendev.org/659215 | 16:44 |
openstackgerrit | Logan V proposed openstack/project-config master: Cleanup nodepool DIB environment using anchors https://review.opendev.org/659621 | 16:45 |
clarkb | logan-: neat! | 16:45 |
fungi | http://paste.openstack.org/show/751484/ is what the production config has currently | 16:45 |
*** rfarr has quit IRC | 16:46 | |
clarkb | fungi: it's the cgi = line that we didn't have before that fixes it and mordred said earlier that he was happy with it | 16:46 |
fungi | the cgi definitely exists at that path too | 16:46 |
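Putting mordred's earlier description together, the working gerrit.config stanza needs both keys; setting them with git config against the site config would look like this (the site path is assumed to be the usual review_site layout):

```bash
# Gerrit serves gitweb itself as a CGI when both of these are present:
git config -f /home/gerrit2/review_site/etc/gerrit.config \
    gitweb.type gitweb
git config -f /home/gerrit2/review_site/etc/gerrit.config \
    gitweb.cgi /usr/share/gitweb/gitweb.cgi
```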
*** amansi26 has joined #openstack-infra | 16:47 | |
logan- | clarkb :) | 16:47 |
fungi | logan-: wow, that's an excellent observation | 16:49 |
clarkb | This week is a week of fixing all the weird random unexpected bugs | 16:50 |
openstackgerrit | Clark Boylan proposed openstack/project-config master: Update node to version 10 for release-zuul-python https://review.opendev.org/659620 | 16:51 |
clarkb | fungi: well shall we restart gerrit? | 16:51 |
*** smarcet has joined #openstack-infra | 16:52 | |
clarkb | fungi: I'll let the release team know if so | 16:52 |
fungi | yeah, may as well take the hit now | 16:52 |
clarkb | ok I'm letting them know now | 16:52 |
fungi | i'm already logged in and can do the stop/start if you want to handle the status notice? | 16:52 |
*** eernst has joined #openstack-infra | 16:53 | |
clarkb | fungi: hrm there is a releases job in the gate | 16:53 |
clarkb | https://review.opendev.org/#/c/652847/ do we need to wait on that to get through zuul first? | 16:53 |
fungi | okay, will wait | 16:53 |
fungi | may as well | 16:53 |
*** bauzas has quit IRC | 16:54 | |
*** michael-beaver has joined #openstack-infra | 16:55 | |
*** bauzas has joined #openstack-infra | 16:57 | |
clarkb | fungi: the change has merged and is now in the post queue | 16:57 |
fungi | yep, that's going to apply the relevant eol tags from the change | 17:00 |
*** derekh has quit IRC | 17:00 | |
clarkb | logan-: http://logs.openstack.org/21/659621/1/check/project-config-nodepool/5749bd1/job-output.txt.gz#_2019-05-16_16_58_16_086627 our nodepool config is too smart for its own good | 17:01 |
logan- | yep, oh well | 17:01 |
clarkb | logan-: you'll need to set the anchor in one of the images so that it is valid | 17:01 |
clarkb | oh except there are various other env vars that are image specific | 17:01 |
clarkb | so ya may need to over account for these things | 17:01 |
logan- | yeah I don't think there are any images with non-specific vars | 17:02 |
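For background on the failure: YAML only lets an alias refer to an anchor defined earlier in the same document, so the shared env-vars anchor has to live on the first image that uses it. A minimal repro, with made-up key names:

```bash
python3 - <<'EOF'
import yaml

# Anchor (&env) defined at first use, alias (*env) afterwards: parses fine.
yaml.safe_load("""
diskimages:
  - name: first
    env-vars: &env {DIB_FOO: 'bar'}
  - name: second
    env-vars: *env
""")

# Alias with no preceding anchor: the parser rejects the document.
try:
    yaml.safe_load("""
diskimages:
  - name: only
    env-vars: *env
""")
except yaml.YAMLError as exc:
    print("invalid:", exc)
EOF
```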
*** dims has joined #openstack-infra | 17:03 | |
*** sarob has joined #openstack-infra | 17:05 | |
*** jcoufal_ has quit IRC | 17:05 | |
*** jcoufal has joined #openstack-infra | 17:06 | |
*** harlowja has joined #openstack-infra | 17:07 | |
AJaeger | any config-core wants to review 2 new repo additions? https://review.opendev.org/657862 and https://review.opendev.org/656657 are ready | 17:09 |
openstackgerrit | Logan V proposed openstack/project-config master: Increase nodepool image journal size https://review.opendev.org/659215 | 17:09 |
mordred | clarkb: I am back | 17:09 |
clarkb | mordred: ok we held off restarting due to a release change in flight. It is still in release-post but once done there we are gonna restart | 17:09 |
mordred | kk. awesome | 17:09 |
clarkb | AJaeger: I can probably look after the gerrit restart | 17:10 |
AJaeger | that release change tags 16 tripleo repos with pikeem... | 17:10 |
fungi | for some reason i thought release-post had priority high, but that changeish has been waiting 14 minutes for nodes already | 17:10 |
clarkb | fungi: could be that we are at capacity due to the zuul restart? | 17:10 |
fungi | ahh, yep that | 17:10 |
fungi | just wanted to be sure it wasn't the artifact bug coming back again | 17:10 |
clarkb | I don't think so | 17:11 |
clarkb | but can check the zuul debug log | 17:11 |
*** jpena is now known as jpena|off | 17:11 | |
clarkb | not seeing tracebacks like before | 17:11 |
*** amoralej is now known as amoralej|off | 17:12 | |
fungi | cool, almost certainly just the thundering herd of reenqueues | 17:12 |
*** dims has quit IRC | 17:12 | |
*** panda|rover is now known as panda|rover|off | 17:13 | |
*** dims has joined #openstack-infra | 17:13 | |
openstackgerrit | Merged zuul/zuul master: web: honor allowed-labels setting in the REST API https://review.opendev.org/653895 | 17:14 |
*** ykarel|away has joined #openstack-infra | 17:17 | |
*** Lucas_Gray has quit IRC | 17:18 | |
amansi26 | fungi: I am facing this warning http://paste.openstack.org/show/751485/ while running all the patches. Can you help me with this? | 17:19 |
openstackgerrit | Merged zuul/zuul master: docs: fix a typo in executor zone documentation https://review.opendev.org/653663 | 17:20 |
openstackgerrit | Merged openstack/project-config master: Update node to version 10 for release-zuul-python https://review.opendev.org/659620 | 17:20 |
fungi | amansi26: those are benign warnings you can ignore. gerrit emits replication events on its event stream and zuul isn't configured to know what they are or whether it should do anything with them | 17:20 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate github logs with the event id https://review.opendev.org/658645 | 17:20 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate gerrit event logs https://review.opendev.org/658646 | 17:21 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Attach event to queue item https://review.opendev.org/658647 | 17:21 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate some logs in the scheduler with event id https://review.opendev.org/658648 | 17:21 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate logs in the zuul driver with event ids https://review.opendev.org/658649 | 17:21 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Add event id to timer events https://review.opendev.org/658650 | 17:21 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate pipeline processing with event id https://review.opendev.org/658651 | 17:21 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate merger logs with event id https://review.opendev.org/658652 | 17:21 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate job freezing logs with event id https://review.opendev.org/658888 | 17:21 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate node request processing with event id https://review.opendev.org/658889 | 17:21 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: WIP: Annotate builds with event id https://review.opendev.org/658895 | 17:21 |
clarkb | fungi: one of the two jobs has run now | 17:21 |
AJaeger | clarkb: and now 12 jobs in tag queue ;( | 17:22 |
AJaeger | 12 changes I mean | 17:22 |
fungi | yep | 17:22 |
clarkb | AJaeger: ya | 17:22 |
fungi | those were generated by the commit in release-post | 17:22 |
AJaeger | yeah, that's the result of that job... | 17:22 |
fungi | right | 17:22 |
* AJaeger was too fast in typing (or too slow to connect the dots ;) | 17:22 | |
fungi | thankfully they're just release notes publication jobs so should go very quickly | 17:23 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate node request processing with event id https://review.opendev.org/658889 | 17:24 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: WIP: Annotate builds with event id https://review.opendev.org/658895 | 17:24 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate node request processing with event id https://review.opendev.org/658889 | 17:25 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: WIP: Annotate builds with event id https://review.opendev.org/658895 | 17:25 |
openstackgerrit | Andreas Jaeger proposed openstack/project-config master: Fix nodejs version for publish-zuul-python-branch-tarball https://review.opendev.org/659628 | 17:29 |
*** smarcet has quit IRC | 17:29 | |
AJaeger | clarkb, pabelanger, one more nodejs change needed ^ | 17:29 |
*** xek__ has joined #openstack-infra | 17:30 | |
*** rfarr has joined #openstack-infra | 17:32 | |
*** xek_ has quit IRC | 17:32 | |
clarkb | fungi: mordred AJaeger double check me but I think we can restart gerrit now | 17:33 |
fungi | looks like we're all clear, yes | 17:33 |
fungi | if someone wants to status notify i'll get going on the restart | 17:33 |
clarkb | I can do the status notify | 17:33 |
mordred | clarkb: I think we can - I'm unable to ssh atm, so I'm mostly moral support (my linux subsystem is updating which means I can't open a shell) | 17:34 |
AJaeger | clarkb: go ahead | 17:34 |
*** rfarr_ has joined #openstack-infra | 17:34 | |
fungi | it's on its way down now | 17:34 |
clarkb | how about #status notify Gerrit is being restarted to add gitweb links back to Gerrit. Sorry for the noise. | 17:34 |
*** dklyle has quit IRC | 17:34 | |
fungi | wfm | 17:34 |
clarkb | #status notify Gerrit is being restarted to add gitweb links back to Gerrit. Sorry for the noise. | 17:34 |
openstackstatus | clarkb: unknown command | 17:34 |
clarkb | #status notice Gerrit is being restarted to add gitweb links back to Gerrit. Sorry for the noise. | 17:34 |
openstackstatus | clarkb: sending notice | 17:34 |
fungi | just waiting on the java process to end | 17:35 |
*** Goneri has quit IRC | 17:35 | |
*** sarob has quit IRC | 17:35 | |
fungi | oh, i think systemd is confused by the previous gerrit start timing out | 17:36 |
fungi | so stop doesn't try to stop it | 17:36 |
fungi | i'll work around that | 17:36 |
-openstackstatus- NOTICE: Gerrit is being restarted to add gitweb links back to Gerrit. Sorry for the noise. | 17:36 | |
clarkb | ugh | 17:36 |
fungi | running the initscript manually instead | 17:37 |
clarkb | maybe we should edit the init script to have a longer timeout | 17:37 |
fungi | i don't think it's the initscript timing out, is it? | 17:37 |
*** rfarr has quit IRC | 17:37 | |
fungi | okay, cleanly stopped, starting again | 17:37 |
clarkb | fungi: the init script has a 5 minute timeout waiting for gerrit to stop | 17:37 |
fungi | oh, well this wasn't a stop timeout | 17:37 |
fungi | the last time we restarted gerrit, something (systemd?) gave up waiting for the start to return | 17:38 |
*** electrofelix has quit IRC | 17:38 | |
fungi | so systemd has been thinking the gerrit service is stopped this whole time | 17:38 |
openstackstatus | clarkb: finished sending notice | 17:38 |
fungi | even though it's been running | 17:38 |
clarkb | ah | 17:38 |
fungi | so asking systemd to stop gerrit was a no-op from its perspective | 17:39 |
*** sean-k-mooney has quit IRC | 17:39 | |
fungi | there we go... happened again: Job for gerrit.service failed because the control process exited with error code. See "systemctl status gerrit.service" and "journalctl -xe" for details. | 17:39 |
clarkb | it might have a timeout for startup too | 17:39 |
*** pcaruana has joined #openstack-infra | 17:39 | |
clarkb | or maybe it was always a startup timeout | 17:39 |
fungi | yeah, i expect that's it. could be the initscript is giving up early | 17:40 |
mordred | yeah | 17:40 |
*** cfriesen has joined #openstack-infra | 17:40 | |
clarkb | GERRIT_STARTUP_TIMEOUT=`get_config --get container.startupTimeout` | 17:40 |
clarkb | so it's something we can set in the gerrit config for next time; it defaults to 90 seconds | 17:41 |
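For reference, the initscript reads that timeout from gerrit.config, so bumping it is a small config change; a sketch, assuming the value is in seconds (the 90-second default suggests it is):

    [container]
      startupTimeout = 300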
*** sean-k-mooney has joined #openstack-infra | 17:41 | |
mordred | ok. I have access to my shell again so I can shell places as needed | 17:41 |
cfriesen | you guys are aware of review.opendev.org being down? | 17:41 |
AJaeger | cfriesen: yes, we're restarting it | 17:41 |
fungi | cfriesen: yep, it's restarting | 17:41 |
openstackgerrit | Merged zuul/zuul master: gerrit: Add some quoting in 'gerrit query' commands https://review.opendev.org/649879 | 17:41 |
cfriesen | cool, thanks | 17:41 |
AJaeger | cfriesen: openstackstatus should have sent a message to all channels... | 17:41 |
clarkb | cfriesen: we sent an irc notice to all channels | 17:41 |
fungi | sent notice to all channels where the openstack status bot resides | 17:41 |
fungi | and it looks to be back up and responding finally | 17:42 |
AJaeger | indeed - thanks for updating my sentence ;) | 17:42 |
clarkb | I don't see gitweb links though | 17:42 |
clarkb | maybe I need to clear caches? /me tries a different browser | 17:42 |
fungi | i did a force refresh in mine and it didn't help | 17:42 |
AJaeger | fungi, mordred , could either of you +2A https://review.opendev.org/659628 to fix zuul post jobs, please? | 17:43 |
*** jlibosva has quit IRC | 17:43 | |
clarkb | fungi: using a different browser brought gitweb back | 17:43 |
*** Weifan has joined #openstack-infra | 17:43 | |
sean-k-mooney | for what it's worth i did not see it in nova, neutron or placement irc but i also just got disconnected and reconnected from freenode so maybe my client missed it | 17:43 |
clarkb | fungi: so probably need a more forceful cache eviction than force refresh | 17:43 |
fungi | clarkb: loading a different change i hadn't looked at before solved it for me | 17:43 |
* AJaeger sees "gitweb" | 17:43 | |
clarkb | but do they work | 17:43 |
fungi | zigo: gitweb links are back now | 17:43 |
AJaeger | but it does not work for me ;( | 17:43 |
clarkb | ya they don't work for me either | 17:44 |
AJaeger | I get a URL - and that one returns nothing ;( | 17:44 |
fungi | zigo: nevermind, seems something's still not quite right | 17:44 |
clarkb | possible our apache is rewriting things wrongly? | 17:44 |
mordred | POO | 17:45 |
fungi | we're proxypassing those urls to http://localhost:8081/ | 17:46 |
clarkb | which is gerrit | 17:46 |
sean-k-mooney | that is weird, i see that too | 17:46 |
fungi | we do expect the gerrit service to serve that, i assume | 17:46 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: DNM: further zuul-remote debugging attempts https://review.opendev.org/659631 | 17:46 |
mordred | yes, it's supposed to serve it as a cgi | 17:46 |
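Roughly the shape of the apache directive under discussion; illustrative only, the real vhost carries more rules:

    # hand gitweb requests to the CGI servlet embedded in gerrit
    ProxyPass        /gitweb http://localhost:8081/gitweb
    ProxyPassReverse /gitweb http://localhost:8081/gitweb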
clarkb | ERROR com.google.gerrit.httpd.gitweb.GitwebServlet : CGI: Can't locate CGI.pm in @INC (you may need to install the CGI module) | 17:46 |
*** udesale has quit IRC | 17:46 | |
mordred | oh. HAHAHAHAH | 17:46 |
clarkb | so we are missing a dependency? | 17:47 |
clarkb | perl-cgi? | 17:47 |
mordred | yes. and we removed it on purpose anyway | 17:47 |
fungi | hah | 17:47 |
mordred | the gitweb package isn't required for the gitweb cgi to exist | 17:47 |
sean-k-mooney | if i disable my browser cache with the debug pane the gitweb links come back | 17:47 |
mordred | so we stopped installing it, because it pulls in, you know, perl | 17:47 |
clarkb | libcgi-pm-perl | 17:47 |
fungi | at least that's easy to fix | 17:47 |
clarkb | I think it's that package but will let monty, who understands this better, decide which package we need | 17:48 |
mordred | we could probably just install gitweb | 17:48 |
mordred | how about I try installing it by hand for now - and if that fixes it, we can update the puppet | 17:48 |
clarkb | that would likely cover other deps | 17:48 |
clarkb | mordred: wfm | 17:48 |
*** Goneri has joined #openstack-infra | 17:48 | |
mordred | clarkb: ok. done - are you still getting the error? | 17:49 |
clarkb | ERROR com.google.gerrit.httpd.gitweb.GitwebServlet : CGI: BEGIN failed--compilation aborted at /usr/share/gitweb/gitweb.cgi line 13 | 17:49 |
clarkb | new error | 17:49 |
mordred | it did not install libcgi-pm-perl | 17:49 |
mordred | neat | 17:49 |
clarkb | use CGI qw(:standard :escapeHTML -nosticky); is line 13 | 17:50 |
clarkb | it's the first real line of code after use warnings | 17:50 |
mordred | yeah | 17:50 |
fungi | sean-k-mooney: yeah, it was your disconnect... the notice went to #openstack-nova at 17:37z and at 17:39z the channel saw you disconnect for a 255 second timeout | 17:50 |
clarkb | and use strict | 17:50 |
mordred | clarkb: how about I install libcgi-pm-perl just for giggles? | 17:50 |
sean-k-mooney | fungi: right on my side it was for maybe 5 seconds but i guess i was not getting updates for a bit | 17:50 |
clarkb | mordred: ok | 17:50 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate github logs with the event id https://review.opendev.org/658645 | 17:51 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate gerrit event logs https://review.opendev.org/658646 | 17:51 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Attach event to queue item https://review.opendev.org/658647 | 17:51 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate some logs in the scheduler with event id https://review.opendev.org/658648 | 17:51 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate logs in the zuul driver with event ids https://review.opendev.org/658649 | 17:51 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Add event id to timer events https://review.opendev.org/658650 | 17:51 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate pipeline processing with event id https://review.opendev.org/658651 | 17:51 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate merger logs with event id https://review.opendev.org/658652 | 17:51 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate job freezing logs with event id https://review.opendev.org/658888 | 17:51 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Annotate node request processing with event id https://review.opendev.org/658889 | 17:51 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: WIP: Annotate builds with event id https://review.opendev.org/658895 | 17:51 |
mordred | clarkb: ok. that did it | 17:51 |
mordred | clarkb: I'm going to re-uninstall gitweb to see if we needed it actually | 17:51 |
openstackgerrit | Merged openstack/project-config master: Fix nodejs version for publish-zuul-python-branch-tarball https://review.opendev.org/659628 | 17:51 |
clarkb | mordred: k | 17:51 |
AJaeger | works for me as well! | 17:51 |
AJaeger | \o/ | 17:51 |
sean-k-mooney | mordred: works for me too | 17:51 |
mordred | gitweb links work without gitweb package | 17:51 |
mordred | so it's libcgi-pm-perl that was needed | 17:51 |
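So the whole manual fix reduced to one package, later codified in puppet by 659632; a sketch of what the by-hand install amounts to:

    sudo apt-get update
    sudo apt-get install -y libcgi-pm-perl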
sean-k-mooney | and as i said if you clear your browser cache or disable it with the debug panel the gitweb links show up again on reviews where they were missing | 17:52 |
fungi | zigo: okay, *now* the gitweb links are working again ;) | 17:52 |
AJaeger | sean-k-mooney: and now you can even "click" on the links | 17:53 |
*** eernst has quit IRC | 17:53 | |
sean-k-mooney | yep well you could click on them before and be disappointed | 17:53 |
sean-k-mooney | i see ye also fixed code search to work with opendev recently too | 17:53 |
openstackgerrit | Monty Taylor proposed opendev/puppet-gerrit master: Install libcgi-pm-perl for gitweb https://review.opendev.org/659632 | 17:54 |
mordred | clarkb, fungi: ^^ | 17:54 |
*** jcoufal has quit IRC | 17:55 | |
*** auristor has quit IRC | 17:56 | |
clarkb | AJaeger: release-zuul-python works now thanks! | 17:56 |
AJaeger | yes, it does! | 17:56 |
AJaeger | now checking the publish branch-tarball... | 17:56 |
*** smarcet has joined #openstack-infra | 17:57 | |
clarkb | mordred: can you review https://review.opendev.org/#/c/655476/ these are our fixes to the renaming playbook based on our day of fun a few weeks back | 17:57 |
clarkb | I guess it was almost a month ago now, wow | 17:57 |
clarkb | ok I think that means we have fixed our known backlog of problems from yesterday | 17:59 |
clarkb | puppet works again, the skopeo change merged but didn't work because PPAs so fungi made a package himself, zuul was updated to handle artifacts outside of change contexts, and zuul and gerrit restarted to get fixes in | 17:59 |
clarkb | \o/ | 17:59 |
clarkb | feels like a full day already :) | 18:00 |
AJaeger | ;) | 18:00 |
mordred | clarkb: I have just learned that accidentally typing Alt-M causes weechat to attempt to capture all of my mouse things, including copy/paste of text and url launching | 18:00 |
clarkb | mordred: you can /set mouse.disable or something like that | 18:01 |
clarkb | I run into that randomly occasionally and freak out for a minute until I remember how to turn it off | 18:01 |
mordred | yeah - but Alt-M is the hotkey to toggle that | 18:01 |
mordred | which I just learned :) | 18:01 |
openstackgerrit | Merged zuul/zuul master: github: add event filter debug https://review.opendev.org/580547 | 18:02 |
*** nicolasbock has quit IRC | 18:02 | |
*** jcoufal has joined #openstack-infra | 18:06 | |
fungi | right, setting it disabled doesn't help disable the toggle. i'm considering rebinding/unbinding that key combo as a solution | 18:06 |
fungi | i don't need irc to know anything about my pointer | 18:07 |
fungi | i use alt-j all the time to navigate between channel buffers, and j is very close to m on my keyboards and i'm a rather hamfisted typist | 18:08 |
fungi | problem is i do it unconsciously, so tend not to catch that it's been reenabled until some time later when i attempt to copy or paste something in my irc terminal | 18:08 |
fungi | and then i get to spend several minutes figuring out what the magic combo is to turn it back off | 18:09 |
fungi | because i never can remember | 18:09 |
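For anyone hitting the same thing, a sketch of the weechat commands involved, assuming the default option and key names (weechat.look.mouse, meta-m):

    /set weechat.look.mouse off   # turn mouse capture off now
    /key unbind meta-m            # drop the toggle so a stray alt-m can't re-enable it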
AJaeger | and zuul-branch-tarball is fixed as well - so, zuul post jobs are fixed... | 18:11 |
*** auristor has joined #openstack-infra | 18:11 | |
*** amansi26 has quit IRC | 18:11 | |
clarkb | yay and we are publishing to dockerhub again | 18:13 |
mordred | fungi: the real fun behavior this time was that "click-drag" to try to highlight something did not highlight the thing - instead it changed me to the next window like I'd typed Alt-<arrow> | 18:13 |
mordred | clarkb: woohoo! | 18:14 |
fungi | mordred: yep, and some pointer interactions in that mode will scroll the buffer too | 18:14 |
mordred | fungi: oh - so it will! that's actually kind of handy | 18:14 |
clarkb | I find it unnecessarily confusing to have mouse mode on | 18:15 |
*** ianychoi has quit IRC | 18:15 | |
mordred | if I could tell it "please obey scrollwheel but please do not capture anything else" that would work | 18:15 |
fungi | i have a feeling this is mostly to placate folks using weechat on a touchscreen device | 18:15 |
clarkb | fungi: I dunno, it's the touchscreen where I get extra confused | 18:15 |
fungi | i too like nothing about the mouse mode in weechat and wish it had an actual config option to disable the ability to accidentally enter that mode | 18:16 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Add foreground option https://review.opendev.org/635649 | 18:16 |
mordred | fungi: I have a touchscreen. I do not understand wanting to use it | 18:18 |
fungi | i have a touchscreen and i've tried to use it, but it seems like a bit of a novelty given i'm accustomed to doing almost everything through key combos | 18:18 |
fungi | if i did sketching or similar sorts of art on my portable devices then maybe | 18:19 |
fungi | but i have a bluetooth pen tablet for that purpose | 18:19 |
mordred | yeah. like - use cases where touch is actually a better UI than a keyboard | 18:19 |
*** ralonsoh has quit IRC | 18:20 | |
clarkb | I have two issues with my touchscreens. First is it is a battery drain so may as well turn it off and operate longer. Second is hands tend to not be clean and then I can't see what I'm looking at | 18:20 |
clarkb | logan-: I've +2'd https://review.opendev.org/#/c/659215/3 I think it is safe to merge that before dib updates and dib will just ignore the env var until it doesn't | 18:22 |
clarkb | logan-: I think you can remove your WIP as a result | 18:22 |
logan- | thanks clarkb, makes sense to me | 18:22 |
*** ccamacho has joined #openstack-infra | 18:23 | |
*** gfidente is now known as gfidente|afk | 18:24 | |
*** smarcet has quit IRC | 18:24 | |
clarkb | infra-root re https://review.opendev.org/#/c/657862/3 any concern with adding repos to x/ ? | 18:25 |
clarkb | I seem to recall some people thinking we wouldn't add much there? I don't think it's an issue if people are happy with the generic prefix | 18:25 |
fungi | when i suggested x originally i saw it as a nice place for a catch-all when folks don't care/want a namespace | 18:25 |
fungi | not merely as a historic artifact of the openstack namespace eviction event | 18:26 |
clarkb | also I approved https://review.opendev.org/#/c/656657/3 which hopefully isn't an issue with replication likely still happening | 18:26 |
mordred | I don't have a _problem_ with it | 18:26 |
clarkb | fungi: ya me too, it's a super boring and generic "I don't care" type space | 18:26 |
mordred | but I also think it'll be nice to get our story for new namespaces fleshed out | 18:26 |
clarkb | which is a boat I expect many people will be in | 18:26 |
fungi | x marks the spot for your buried treasure! | 18:27 |
clarkb | at 10k ish tasks | 18:27 |
* mordred would like buried treasure | 18:27 | |
*** ykarel|away has quit IRC | 18:28 | |
openstackgerrit | Merged openstack/project-config master: Add new Keystone SAML Mellon charm repositories https://review.opendev.org/656657 | 18:38 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: DNM: further zuul-remote debugging attempts https://review.opendev.org/659631 | 18:41 |
openstackgerrit | Merged zuul/zuul master: Don't check out a commit if it is already checked out https://review.opendev.org/163922 | 18:47 |
*** trident has quit IRC | 18:51 | |
*** trident has joined #openstack-infra | 18:52 | |
openstackgerrit | Merged zuul/zuul master: Use a more visible selection color https://review.opendev.org/648865 | 18:59 |
clarkb | infra-root as an fyi I am going to recheck https://review.opendev.org/#/c/656880/1 which will prune our docker images | 19:00 |
clarkb | it didn't manage to merge during the ptg and then it got put to the side with all of the brokenness since being back | 19:01 |
clarkb | but I'm finally in a place to watch it so trying to get it in now | 19:01 |
mordred | clarkb: ++ | 19:01 |
openstackgerrit | Paul Belanger proposed zuul/zuul master: Support Ansible 2.8 https://review.opendev.org/631933 | 19:07 |
*** imacdonn has quit IRC | 19:12 | |
*** imacdonn has joined #openstack-infra | 19:14 | |
clarkb | bah the idempotent check in the beaker job makes that change fail again | 19:17 |
clarkb | I'll recheck again (and maybe we need to loosen that rule) | 19:17 |
openstackgerrit | Merged zuul/zuul master: Make image build jobs voting again https://review.opendev.org/659242 | 19:18 |
clarkb | hrm all the system-config run ansible jobs also failed and look super unhappy | 19:21 |
clarkb | oh ansible 2.8 released | 19:21 |
clarkb | I bet that is our issue | 19:21 |
clarkb | yay | 19:21 |
clarkb | did ansible 2.8 drop the paramiko requirement? | 19:22 |
clarkb | seems that testinfra is failing to ssh? | 19:22 |
*** nicolasbock has joined #openstack-infra | 19:22 | |
clarkb | https://github.com/ansible/ansible/commit/6ce9cf7741679449fc3ac6347bd7209ae697cc5b#diff-b4ef698db8ca845e5845c4618278f29a yup | 19:24 |
clarkb | should we try to pin ansible in our test jobs? | 19:24 |
*** imacdonn has quit IRC | 19:25 | |
*** Weifan has quit IRC | 19:26 | |
openstackgerrit | Clark Boylan proposed opendev/system-config master: Cap ansible to <2.8 to fix testinfra https://review.opendev.org/659646 | 19:27 |
*** sarob has joined #openstack-infra | 19:27 | |
clarkb | lets see if that works | 19:27 |
clarkb | that doesn't actually run the testinfra tests.. | 19:28 |
openstackgerrit | Clark Boylan proposed opendev/system-config master: Cap ansible to <2.8 to fix testinfra https://review.opendev.org/659646 | 19:28 |
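The cap itself is a one-liner; a sketch of the kind of pin 659646 adds, assuming the test jobs install ansible via pip:

    pip install 'ansible<2.8'   # hold test jobs on 2.7 until paramiko handling is sorted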
*** Goneri has quit IRC | 19:32 | |
*** xek__ has quit IRC | 19:32 | |
*** xek__ has joined #openstack-infra | 19:32 | |
openstackgerrit | Merged zuul/zuul master: zuul-quick-start: run docker-compose as regular user https://review.opendev.org/658738 | 19:32 |
*** Goneri has joined #openstack-infra | 19:33 | |
openstackgerrit | Dean Troyer proposed opendev/system-config master: Add #starlingx to statusbot channels https://review.opendev.org/659652 | 19:40 |
*** imacdonn has joined #openstack-infra | 19:42 | |
*** smarcet has joined #openstack-infra | 19:44 | |
pabelanger | zuul.o.o isn't loading for me | 19:46 |
pabelanger | clarkb: mordred^ | 19:46 |
pabelanger | I seem to be getting a JS error | 19:46 |
clarkb | I'm guessing react 2.0 broke something | 19:46 |
*** imacdonn_ has joined #openstack-infra | 19:46 | |
pabelanger | http://paste.openstack.org/show/751488/ | 19:47 |
clarkb | I know nothing about the react 2.0 work so not a great person to debug it | 19:47 |
clarkb | https://reactjs.org/docs/error-decoder.html/?invariant=130&args[]=undefined&args[]= | 19:47 |
*** gfidente|afk has quit IRC | 19:48 | |
pabelanger | k, will bring it up in #zuul | 19:49 |
clarkb | puppet did just run and it did update the web stuff | 19:49 |
*** imacdonn has quit IRC | 19:49 | |
*** Goneri has quit IRC | 19:50 | |
mordred | pabelanger: it's loading for me | 19:51 |
mordred | pabelanger: have you tried shift-reloading? | 19:51 |
mordred | oh - hrm | 19:51 |
clarkb | mordred: I think you are on the old code if it is working | 19:51 |
pabelanger | yah, even cleared cache | 19:52 |
mordred | http://zuul.opendev.org/t/opendev/status works for me | 19:52 |
mordred | but http://zuul.opendev.org/t/openstack/status | 19:52 |
mordred | doesn't | 19:52 |
*** pcaruana has quit IRC | 19:52 | |
pabelanger | same | 19:52 |
mordred | I'm grumpy about how that is doing what it's doing | 19:53 |
mordred | we used to produce source maps so that we could get real errors even with minification | 19:53 |
mordred | the react error decoder is useless | 19:54 |
clarkb | mordred: there is still a map file in the static dir | 19:54 |
mordred | awesome. well - I don't know why we're getting useless error messages | 19:54 |
mordred | but I recommend we revert | 19:54 |
clarkb | wfm I'll let other people drive as I need to eat a late lunch now | 19:54 |
openstackgerrit | Clark Boylan proposed opendev/system-config master: Cap ansible to <2.8 to fix testinfra https://review.opendev.org/659646 | 19:55 |
pabelanger | k, will get revert up | 19:55 |
openstackgerrit | David Shrewsbury proposed zuul/zuul master: Revert "web: upgrade react and react-scripts to ^2.0.0" https://review.opendev.org/659655 | 19:55 |
clarkb | also if infra-root can review ^ it would be helpful for fixing system-config tests. note ps2 tests ran all the jobs | 19:55 |
clarkb | I've pushed ps3 without the comment in .zuul.yaml which will run fewer tests | 19:55 |
pabelanger | +2 | 19:56 |
pabelanger | clarkb: yah, users now need to manage paramiko themselves for ansible, IIRC | 19:56 |
clarkb | pabelanger: I'm expecting testinfra will make an update that includes it as a dep | 19:57 |
clarkb | since testinfra needs it | 19:57 |
pabelanger | +1 | 19:57 |
pabelanger | clarkb: k, I'll do a PR with testinfra now | 19:59 |
*** Weifan has joined #openstack-infra | 20:02 | |
*** ccamacho has quit IRC | 20:02 | |
pabelanger | clarkb: ah, reading docs, paramiko is an extra install, since it is only used by specific backends. so, we likely need to update our job to do the install | 20:05 |
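So the eventual 2.8-ready job would install both explicitly; a sketch (whether testinfra grows a declared paramiko extra is an assumption):

    pip install paramiko 'ansible>=2.8'   # ansible 2.8 no longer drags paramiko in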
*** whoami-rajat has quit IRC | 20:05 | |
*** imacdonn_ is now known as imacdonn | 20:07 | |
*** sarob has quit IRC | 20:09 | |
*** e0ne has joined #openstack-infra | 20:10 | |
clarkb | ok, we don't use 2.8 in production yet so the cap seems appropriate for now | 20:11 |
clarkb | but can add paramiko and 2.8 in the same go | 20:11 |
*** e0ne has quit IRC | 20:12 | |
pabelanger | ++ | 20:13 |
openstackgerrit | Merged zuul/zuul master: Add logs spec https://review.opendev.org/648714 | 20:18 |
*** Lucas_Gray has joined #openstack-infra | 20:19 | |
*** Weifan has quit IRC | 20:19 | |
*** Goneri has joined #openstack-infra | 20:27 | |
*** Weifan has joined #openstack-infra | 20:30 | |
*** jcoufal has quit IRC | 20:38 | |
*** dave-mccowan has joined #openstack-infra | 20:38 | |
*** tbarron has left #openstack-infra | 20:41 | |
openstackgerrit | Merged zuul/zuul master: web: add tags to jobs list https://review.opendev.org/633654 | 20:41 |
*** ijw has joined #openstack-infra | 20:46 | |
*** diablo_rojo has joined #openstack-infra | 20:48 | |
openstackgerrit | Merged opendev/system-config master: Cap ansible to <2.8 to fix testinfra https://review.opendev.org/659646 | 20:55 |
clarkb | yay I can recheck my other change I want to merge now :) | 20:55 |
*** armax has quit IRC | 20:55 | |
*** kjackal has quit IRC | 20:56 | |
openstackgerrit | Merged zuul/zuul master: Ensure correct state in MQTT connection https://review.opendev.org/652932 | 20:57 |
pabelanger | clarkb: I was trying to get czuul working, but it doesn't seem to support python3 | 20:58 |
pabelanger | and didn't feel like installing python2 | 20:58 |
clarkb | czuul? | 20:58 |
clarkb | ah harlowja's zuul status viewer | 20:59 |
*** trident has quit IRC | 20:59 | |
*** trident has joined #openstack-infra | 21:00 | |
pabelanger | yah | 21:01 |
*** sarob has joined #openstack-infra | 21:06 | |
clarkb | whats the new status json url? | 21:08 |
clarkb | /api/status ? | 21:08 |
clarkb | ya that seems to be it | 21:08 |
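For reference, the endpoint clarkb lands on can be checked from a shell; a sketch against the white-labeled openstack tenant (host assumed):

    curl -s https://zuul.openstack.org/api/status | python3 -m json.tool | head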
clarkb | I'm not sure how to read czuul | 21:08 |
*** rh-jelabarre has quit IRC | 21:09 | |
clarkb | the revert is in the gate | 21:12 |
*** slaweq has quit IRC | 21:18 | |
openstackgerrit | Merged zuul/zuul master: Fix dequeue ref not handling the project name https://review.opendev.org/659110 | 21:19 |
harlowja | someone uses czuul still, nice, lol | 21:19 |
clarkb | harlowja: well a react update broke the web dashboard | 21:20 |
clarkb | so using it to see how the fix for the broken web dashboard is doing :) | 21:20 |
harlowja | lol, cool | 21:20 |
clarkb | I'm not quite sure I understand how to read czuul's render but understand it well enough to know the fix is in the gate | 21:21 |
harlowja | ya, this was my introduction to urwid, lol | 21:22 |
harlowja | *it was | 21:22 |
harlowja | ^ not very good code as a result, lol | 21:23 |
clarkb | the docker prune change is failing though ;/ | 21:29 |
openstackgerrit | Merged zuul/zuul master: Add support for submitting reviews on GitHub https://review.opendev.org/656541 | 21:32 |
*** xek has joined #openstack-infra | 21:33 | |
*** xek__ has quit IRC | 21:35 | |
*** Goneri has quit IRC | 21:42 | |
openstackgerrit | Jeremy Stanley proposed zuul/zuul master: Install latest git-review from PyPI in quickstart https://review.opendev.org/659674 | 21:45 |
*** xek_ has joined #openstack-infra | 21:47 | |
*** xek has quit IRC | 21:47 | |
*** smarcet has quit IRC | 21:50 | |
*** smarcet has joined #openstack-infra | 21:50 | |
*** smarcet has joined #openstack-infra | 21:51 | |
openstackgerrit | Merged zuul/zuul master: Support fail-fast in project pipelines https://review.opendev.org/652764 | 21:51 |
*** mriedem has quit IRC | 21:53 | |
*** armax has joined #openstack-infra | 21:55 | |
*** xek has joined #openstack-infra | 21:56 | |
*** xek_ has quit IRC | 21:56 | |
*** xek has quit IRC | 21:58 | |
*** xek has joined #openstack-infra | 21:59 | |
*** tosky has quit IRC | 21:59 | |
*** smarcet has quit IRC | 22:02 | |
*** smarcet has joined #openstack-infra | 22:03 | |
openstackgerrit | Merged zuul/zuul master: Support Ansible 2.8 https://review.opendev.org/631933 | 22:04 |
clarkb | I think we should consider directly enqueueing the react revert for zuul when it kicks out of the gate again | 22:06 |
clarkb | I guess I could dequeue then enqueue it | 22:06 |
*** sreejithp has quit IRC | 22:06 | |
*** smarcet has quit IRC | 22:06 | |
fungi | alternative trick is to just keep re-promoting it as soon as it fails a build but before it reports on the buildset | 22:07 |
*** sarob has quit IRC | 22:12 | |
*** rfarr_ has quit IRC | 22:15 | |
clarkb | when the current one fails (so that we get a report on the change with logs) I'll reenqueue straight to the gate then I need to step out for a bit | 22:24 |
fungi | i'm in and out taking breaks from the lawn, need to get it finished before dark | 22:25 |
fungi | note that you can enqueue it straight to the gate regardless. the buildset in check will continue and will report back in gerrit anyway | 22:26 |
clarkb | ya I didn't recheck it just enqueued it to gate | 22:27 |
clarkb | I just wanted a record of the py36 failure before I did that (so didn't dequeue) | 22:27 |
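The reenqueue described here maps onto the zuul client roughly as follows; flags are from the client of that era and the patchset number is assumed:

    zuul enqueue --tenant openstack --trigger gerrit \
         --pipeline gate --project zuul/zuul --change 659655,1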
clarkb | the docker image prune is finally going to make it to the gate I think | 22:28 |
*** jtomasek has quit IRC | 22:33 | |
*** UdayTKumar has quit IRC | 22:48 | |
*** iokiwi has joined #openstack-infra | 22:48 | |
fungi | Service 'node' failed to build: Get https://registry-1.docker.io/v2/rastasheep/ubuntu-sshd/manifests/latest: unauthorized: incorrect username or password | 22:50 |
fungi | is that the ipv6 registry problem? | 22:50 |
*** ijw has quit IRC | 22:52 | |
openstackgerrit | Jeremy Stanley proposed zuul/zuul master: Install latest git-review from PyPI in quickstart https://review.opendev.org/659674 | 22:54 |
*** tkajinam has joined #openstack-infra | 22:55 | |
openstackgerrit | Merged opendev/system-config master: Prune docker images after docker-compose up https://review.opendev.org/656880 | 22:55 |
* fungi cheers | 22:59 | |
*** xek has quit IRC | 23:03 | |
openstackgerrit | Merged zuul/zuul master: Revert "web: upgrade react and react-scripts to ^2.0.0" https://review.opendev.org/659655 | 23:04 |
clarkb | fungi: yes that is the ipv6 problem | 23:04 |
*** rcernin has joined #openstack-infra | 23:04 | |
clarkb | fungi: docker should talk to the buildset registry proxy which proxies to dockerhub and pulls off the auth | 23:04 |
clarkb | if that fails then it tries to talk to dockerhub directly with auth which fails | 23:04 |
*** _erlon_ has quit IRC | 23:05 | |
clarkb | we think that happens because in ipv6-only clouds you get many-to-one nat to dockerhub from the buildset registry and it isn't as reliable | 23:05 |
*** slaweq has joined #openstack-infra | 23:05 | |
fungi | okay, that's what i thought i recalled from the earlier discussion (at the ptg?) | 23:07 |
fungi | and we were going to try pointing that at the per-provider mirror servers at some point, but i guess we haven't yet? | 23:07 |
clarkb | no corvus pointed out we really want tls for that | 23:08 |
clarkb | because we are publishing the end result images not just consuming them | 23:08 |
clarkb | this is why ianw's work to deploy tls'd mirrors is important | 23:08 |
clarkb | (if you haven't reviewed those changes yet please do!) | 23:08 |
fungi | i think i have, but maybe i haven't. will make sure | 23:09 |
clarkb | infra-root we are down to 4.2GB ish of free space on the gitea cluster members | 23:10 |
*** slaweq has quit IRC | 23:10 | |
clarkb | It appears /var/lib/docker-old represents about 6GB we could clear out | 23:10 |
clarkb | we had moved the /var/lib/docker aside when recovering from the k8sification of everything, but is there any reason to keep those dirs around anymore or can we rm them? | 23:10 |
clarkb | mordred: corvus ^ you may want to weigh in on that | 23:11 |
*** slaweq has joined #openstack-infra | 23:11 | |
clarkb | its not an emergency yet so I won't go deleting anything | 23:11 |
clarkb | but if we are happy with gitea + docker now I can go through and clean those docker-old dirs up | 23:11 |
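If we do clean them up, it's a check-then-delete on each gitea host; a sketch, assuming the directory is exactly /var/lib/docker-old:

    du -sh /var/lib/docker-old    # confirm the ~6GB estimate first
    sudo rm -rf /var/lib/docker-old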
*** Lucas_Gray has quit IRC | 23:14 | |
fungi | looks like 658930 is currently parented to a dnm demonstration, but i approved the one ahead of that in the series | 23:15 |
*** Lucas_Gray has joined #openstack-infra | 23:15 | |
*** slaweq has quit IRC | 23:16 | |
clarkb | https://review.opendev.org/#/c/658281/29 -> https://review.opendev.org/#/c/658930/3 -> https://review.opendev.org/#/c/652801/15 is the stack I see | 23:17 |
clarkb | thank you for approving the two at the bottom | 23:17 |
clarkb | worth noting that because we had to create a new /var/lib/docker after the k8sification, our image pruning isn't currently going to prune much | 23:18 |
fungi | ahh, no you're right, gertty's current attempts at rendering a nonlinear dependency are a little hard to interpret | 23:18 |
clarkb | but it's good that change finally merged as it is one less thing to worry about going forward | 23:19 |
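For context, the pruning 656880 adds after docker-compose up boils down to something like this (exact flags in the change are assumed):

    docker image prune -f   # remove dangling image layers non-interactively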
pabelanger | clarkb: zuul.o.o is back online | 23:19 |
clarkb | pabelanger: cool puppet picked up the new (old) js then | 23:19 |
fungi | so the revert worked, good | 23:19 |
pabelanger | ya | 23:20 |
*** dklyle has joined #openstack-infra | 23:21 | |
*** diablo_rojo has quit IRC | 23:24 | |
ianw | clarkb: sorry just coming online ; did you see the skopeo issues with the package being removed from the ppa? | 23:24 |
clarkb | ianw: yes fungi built a new package and just manually installed it I think | 23:25 |
clarkb | ianw: tristanC's fix appears like it will get merged so hopefully upstream skopeo will work again for us shortly | 23:25 |
ianw | ok, cool; yeah that was the next thing. we could put it in openstack-ci-core ppa but sounds like we can just live with it | 23:25 |
clarkb | if tristanC's fix didn't already have a ton of traction I think we would put it in our ppa, but considering they have already acked it a bunch I think we are good to coast on fungi's package for a few days while they get it merged and built in their ppa | 23:26 |
tristanC | clarkb: i'll update the PR shortly according to the last suggestion | 23:27 |
ianw | clarkb: so i've started looking more at ask.o.o ... i think we can get it on xenial on life support ... | 23:28 |
pabelanger | clarkb: re zuul / ansible 2.8.0 we should keep an eye for restarts to zuul in openstack. Once we do, jobs will be able to use ansible 2.8. I'm happy to help with testing when we want to schedule that, either tomorrow or early next week. | 23:28 |
ianw | it's going to be a great target for someone to containerise and migrate; i don't think any of us feel the need to get into puppet hacking around it | 23:28 |
*** aaronsheffield has quit IRC | 23:30 | |
mordred | ianw: ++ | 23:31 |
mordred | ianw: I actually think that services like that are potentially a great win for our new container overlords | 23:31 |
clarkb | pabelanger: not just will be able to but will be forced to by default right? | 23:33 |
clarkb | ianw: ya I'm fine with life support and then putting up the flag that says this needs work to move to containers/ansible | 23:34 |
ianw | excellent and persistent work from cmurphy and others notwithstanding, almost literally the day we got to the point we could turn off puppet3 support, puppet4 went EOL :) | 23:34 |
*** rlandy has quit IRC | 23:34 | |
clarkb | ianw: not just eol they deleted it | 23:34 |
clarkb | because thats what you do, delete all the packages on eol day | 23:34 |
clarkb | ianw: re life support do you think that will work with existing puppetry or do we need to do another upgrade in place? | 23:34 |
ianw | clarkb: sshhh, you'll start giving pip/setuptools ideas! ;) | 23:35 |
*** dklyle has quit IRC | 23:35 | |
clarkb | with ask there are a couple big struggles we've had with it. First we were asked to deploy and run it and we basically said no because it's a pain (after I spent weeks trying to get it to work) | 23:36 |
ianw | clarkb: i'm not sure; i've been playing on ask-staging https://ask-staging.openstack.org/ but it's not looking great | 23:36 |
clarkb | but then it got deployed by the askbot people on a server we hosted for them until that contract went away or something | 23:36 |
pabelanger | clarkb: no, we didn't change the default yet. It should be 2.7 still | 23:36 |
clarkb | so we sort of got forced into it :/ | 23:36 |
clarkb | but then it is also the type of service where people using it are also least likely to be involved in helping to run it | 23:37 |
pabelanger | clarkb: I figure once we roll it out to opendev, then confirm zuul-jobs works with 2.8 we can land the bump to 2.8 by default in zuul | 23:37 |
clarkb | so we need to figure out a more sustainable way forward with it, otherwise it will die under the pressure of us being unable to keep it going for people who can't help | 23:37 |
clarkb | pabelanger: gotcha | 23:37 |
ianw | clarkb: in place would eliminate starting the ask virtualenv fresh, which would be good because any non-pinned dependency pulled in cascades into hard-to-debug failures | 23:37 |
clarkb | ianw: hrm if it is in a virtualenv an inplace upgrade will probably break it though | 23:38 |
ianw | e.g. it's incompatible with later redis packages -> https://review.opendev.org/659455 | 23:38 |
clarkb | ianw virtualenvs don't survive minor python updates | 23:38 |
ianw | hrm, yes true | 23:38 |
fungi | yeah, replace your python interpreter and you need to rebuild your venv | 23:38 |
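Which means an in-place upgrade would also mean rebuilding the venv from its requirements; a sketch with assumed paths, and it only works if the requirements are pinned:

    virtualenv --clear /srv/askbot/venv
    /srv/askbot/venv/bin/pip install -r /srv/askbot/requirements.txt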
clarkb | ya problems like that redis one are why we basically gave up trying to run it ourselves the first time around | 23:38 |
clarkb | it required a very particular set of tooling from django to postgres to python | 23:39 |
ianw | i don't know if you saw -> https://review.opendev.org/#/c/659473/ ... | 23:39 |
ianw | yep, it's all pretty fragile as i'm seeing | 23:39 |
ianw | in fact, ask-staging is now throwing a different error to when i left it last night | 23:39 |
clarkb | so maybe the thing to start doing here is gather info on the fragility so we can make a concrete and not hand-wavy argument for why this service is unsustainable | 23:39 |
clarkb | ianw: I wonder if the "proper" way to do this server upgrade is to also upgrade the service? | 23:40 |
ianw | clarkb: sure, i have an increasing number of reviews pointing in that direction | 23:41 |
ianw | i'm certain it is ... ancient django versions, incompatibility with any modern redis, etc | 23:41 |
clarkb | https://github.com/ASKBOT/askbot-devel/blob/master/askbot/doc/source/upgrade.rst appears to be their doc on this | 23:41 |
ianw | but that puts us at the point of basically re-hacking all the puppet ... at which point we should be containerising it | 23:41 |
ianw | which i'd be happy to work with someone who is interested in it on, but i'm not sure if i want to be the driver ... | 23:42 |
clarkb | ianw: ya | 23:42 |
ianw | clarkb: how about this; i'll spend a little more time now just seeing where i get with it, and update the trusty etherpad page with some details | 23:43 |
*** Lucas_Gray has quit IRC | 23:43 | |
clarkb | that sounds good then we can send an email to the openstack-discuss list that basically says "look this service has not been cared for and requires a ton of effort to get it into a happy place. We need help doing that otherwise we'll likely have to shut off the service" (something along those lines) | 23:44 |
ianw | we can decide if we want to, basically, have this as a stand-alone service ... like puppet it, but accept that the idea it could be re-deployed is gone | 23:44 |
ianw | yeah, then what you said. call for volunteers to containerise it, which i'm happy to "mentor" (i'd be learning too) | 23:44 |
fungi | the codebase for it seemed to have stabilized significantly. i don't know if that means it's still getting updates to keep it working on more modern platforms, just no new features, or... a handful of new features with increasingly stale platform support | 23:45 |
fungi | if it's the latter and we need to maintain a fork just to keep it secure, then yes we need folks stepping up to help or else it's going away | 23:46 |
clarkb | fungi: it's not just that though, just running it is a significant pita | 23:46 |
clarkb | there is a reason we refused to run it the first time we were asked | 23:46 |
clarkb | it has a ton of moving parts all of which you have to get just right (which often isn't documented) otherwise the thing breaks | 23:47 |
clarkb | the major issue I ran into when trying to run it was that having mysql be the db, which was documented as an option, just didn't work | 23:47 |
clarkb | that and getting localization to work | 23:47 |
clarkb | which sounds very similar to needing a specific version of redis and so on | 23:48 |
fungi | oh, sure, i'm just saying a volunteer or two could probably keep it going *if* it's still being updated for more modern operating systems and frameworks | 23:48 |
mordred | yeah | 23:48 |
fungi | if it's not and we need to do development work on the software to get it to support newer versions of stuff, that's going to need more than a volunteer or two | 23:49 |
clarkb | also why do people keep using redis? operationally it is so painful to run | 23:49 |
*** yamamoto has joined #openstack-infra | 23:49 | |
mordred | and also - if it can be done via containers and whatnot, running a specific redis and a specific postgres might be something a contributor less deeply ingrained in infra-lore could do | 23:49 |
mordred | clarkb: becuase it's easy for a dev to spin up on their laptop | 23:49 |
clarkb | mordred: until it OOMs and your kernel kills firefox/chrome | 23:49 |
mordred | clarkb: and devops == "get rid of ops people and bow to the wisdom of devs with no ops experience" | 23:49 |
* clarkb has really bad memories of redis | 23:49 | |
fungi | and this is why they all buy laptops with 64gb+ of ram | 23:49 |
* mordred grumps some more | 23:50 | |
clarkb | fungi: they are all single-stick, non-user-replaceable now which means you get a max of 16GB | 23:50 |
clarkb | fungi: but usually 8GB is really all you get :/ | 23:50 |
fungi | yeesh | 23:50 |
mordred | yeah. they don't sell laptops with ram | 23:50 |
fungi | in modern laptops? really? | 23:50 |
mordred | yeah | 23:50 |
clarkb | fungi: yup it was easier to load up a laptop with memory 10 years ago than now | 23:50 |
fungi | 8gb ram is what this little netbook has | 23:50 |
clarkb | everything is soldered to the board now and single dimm | 23:50 |
mordred | because they've decided that laptops should try to be tablets | 23:50 |
clarkb | so on intel cpus that means 16GB max | 23:51 |
mordred | rather than letting tablets be tablets | 23:51 |
fungi | most tablets probably have at least that much ram, yeah | 23:51 |
clarkb | (and was 8GB max not that long ago) | 23:51 |
openstackgerrit | Merged opendev/system-config master: Use handlers for letsencrypt cert updates https://review.opendev.org/652801 | 23:51 |
mordred | and letting laptops be, well, laptops | 23:51 |
openstackgerrit | Merged opendev/system-config master: letsencrypt: use a fake CA for self-signed testing certs https://review.opendev.org/658930 | 23:51 |
ianw | you know what i love: when you google an error message, and the only result is your own mail to the openstack-infra list about that error message | 23:51 |
clarkb | my x240's memory isn't soldered and is user replaceable, but the intel cpu in it only allows for 8GB per dimm | 23:52 |
clarkb | so I have 8GB of memory in my laptop | 23:52 |
clarkb | ianw: thats always my favorite thing | 23:52 |
clarkb | or you get a launchpad bug you filed a year ago | 23:52 |
*** diablo_rojo has joined #openstack-infra | 23:52 | |
fungi | ianw: i love when the only hit is me three years ago asking the same question, receiving no answer, and then forgetting about it | 23:52 |
fungi | yeah, that | 23:52 |
* mordred loves all of those things | 23:52 | |
clarkb | desktop has 16GB of memory 11GB of which is used and the vast majority of that is firefox | 23:53 |
fungi | and then i think, "past me should have tried a little harder, you know?" | 23:53 |
clarkb | I have firefox, thunderbird, and three terminals open | 23:53 |
clarkb | thats 11GB | 23:53 |
*** yamamoto has quit IRC | 23:53 | |
fungi | my workstation is currently using 2.7gb out of 16gb ram | 23:54 |
fungi | and that's with firefox and ~20 terminals open | 23:54 |
clarkb | ya I'm sure some of my overhead is the desktop environment | 23:54 |
clarkb | and I have a tab problem in firefox | 23:54 |
clarkb | I regularly go over 400 ish tabs | 23:54 |
fungi | ahh, right, i'd be surprised if ratpoison is using more than a few kb | 23:54 |
pabelanger | ianw: I especially love when some new shiny thing is the best and other people recommend it. Then I decide to install and try it, and it clearly doesn't work because of something super obvious that somehow every other person seems to have missed. | 23:54 |
fungi | pabelanger: that's because all the people who are talking about how great it is have so little time left after all that talking to actually try it | 23:55 |
pabelanger | :) | 23:55 |
clarkb | pabelanger: thats like using docker with ipv6 right? | 23:55 |
pabelanger | IKR | 23:56 |
fungi | meh, ipv6 is so last century | 23:56 |
mordred | pabelanger: +100 | 23:56 |
pabelanger | how is that even possible | 23:56 |
fungi | it's hipster to find new and exciting ways to preserve ipv4 and embed it deeper and deeper into the assumptions of your software | 23:56 |
mordred | fungi: using ipv4 in 2019 is so ironic | 23:56 |
mordred | using ipv6 is too obvious | 23:57 |
fungi | my internet protocol has a goatee | 23:57 |
fungi | does yours? | 23:57 |
clarkb | I've begun to learn the value of using children as a transport layer | 23:57 |
mordred | clarkb: do you just pin things to their shirts? | 23:57 |
fungi | stick things to them and they eventually arrive in another room | 23:57 |
clarkb | mordred: I ask them for stuff and more often than not it arrives quickly | 23:57 |
mordred | neat! | 23:57 |
fungi | and sometimes it's even what you asked for | 23:58 |
clarkb | they particularly enjoy it if it is something they wouldn't normally be allowed to interact with like my wallet | 23:58 |
fungi | someone needs to invent error correction for that | 23:58 |
mordred | clarkb: have you considered using them as a form of cold storage for backups? "go hide this" | 23:58 |
clarkb | mordred: that is a neat idea | 23:58 |
clarkb | backups so good even I can't delete them | 23:58 |
mordred | ++ | 23:58 |
mordred | of course, the most important thing in a backup is the ability to restore from it | 23:59 |
fungi | they're probably an excellent source of entropy too | 23:59 |
clarkb | I asked for my wallet yesterday because hbo and roku couldn't figure out how to process my monthly payment | 23:59 |
clarkb | then in my attempt at figuring it out I decided well I'll just unsubscribe then subscribe | 23:59 |
ianw | ok, ask-staging.openstack.org is back to the same error i was seeing. i'm going to try migrating a dump of the db; if that doesn't fix it ... i dunno | 23:59 |
clarkb | ya no the unsub totally worked but now they can't figure out how to give me a subscription again | 23:59 |
clarkb | so I may have to create a new account to watch GoT finale | 23:59 |