*** rlandy is now known as rlandy|bbl | 00:01 | |
*** goldyfruit_ has quit IRC | 00:09 | |
ianw | well not that it's testing the same thing, but i copied the repo out of gitea06's /data and put it in my homedir, and tried the fetch against that (over ssh). it works | 00:10 |
*** ociuhandu has quit IRC | 00:17 | |
*** ociuhandu has joined #openstack-infra | 00:17 | |
*** armax has quit IRC | 00:22 | |
*** ociuhandu has quit IRC | 00:23 | |
*** slaweq has joined #openstack-infra | 00:34 | |
ianw | i tried serving that up locally via http.server.CGIHTTPRequestHandler using git-http-backend but it doesn't quite seem to work | 00:34 |
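[editor's note] A rough sketch of the setup ianw describes: `http.server.CGIHTTPRequestHandler` fronting `git-http-backend`. The cgi-bin wrapper script and repo path shown in comments are assumptions, not copied from the actual attempt.

```python
# Sketch: serve a git repo over smart HTTP using the stdlib CGI handler.
# Anything under cgi_directories is executed as a CGI script.
import http.server

class GitSmartHTTPHandler(http.server.CGIHTTPRequestHandler):
    cgi_directories = ["/cgi-bin"]

# cgi-bin/git would be an executable wrapper along these (assumed) lines:
#   #!/bin/sh
#   export GIT_PROJECT_ROOT=/home/ianw/repos
#   export GIT_HTTP_EXPORT_ALL=1
#   exec git http-backend
server = http.server.HTTPServer(("127.0.0.1", 0), GitSmartHTTPHandler)
print("would serve on port", server.server_address[1])
# server.serve_forever()  # then: git clone http://127.0.0.1:<port>/cgi-bin/git/<repo>
server.server_close()
```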
ianw | tonyb: i think maybe update the bug with your reproduction and see if anyone has ideas? | 00:34 |
tonyb[m] | Ummm I don't *think* there is anything in that tarball that can't be shared ... | 00:36 |
*** ociuhandu has joined #openstack-infra | 00:36 | |
*** slaweq has quit IRC | 00:38 | |
*** ociuhandu has quit IRC | 00:45 | |
*** gyee has quit IRC | 00:46 | |
*** mattw4 has quit IRC | 00:47 | |
mriedem | gerritbot seems to be dead | 00:48 |
*** michael-beaver has quit IRC | 00:50 | |
ianw | the statusbot also went, again | 00:55 |
*** tosky has quit IRC | 00:56 | |
*** mriedem is now known as mriedem_afk | 00:58 | |
ianw | isn't there something about the irc server we connect to by ip | 01:00 |
clarkb | I think that is only meetbot? | 01:03 |
clarkb | since it has ipv6 issues | 01:03 |
*** openstackstatus has joined #openstack-infra | 01:03 | |
*** ChanServ sets mode: +v openstackstatus | 01:03 | |
ianw | yeah, eavesdrop.openstack.org has the pinned hosts entry | 01:03 |
ianw | openstackgerrit does not seem to want to join ... | 01:04 |
ianw | select(11, [10], [], [], {0, 200000}) = 0 (Timeout) | 01:05 |
ianw | gerritbot 18106 gerrit2 10u IPv4 3756294911 0t0 TCP review01.openstack.org:47750->cherryh.freenode.net:6697 (ESTABLISHED) | 01:05 |
ianw | the debug file isn't that helpful | 01:09 |
*** dchen has joined #openstack-infra | 01:10 | |
*** ociuhandu has joined #openstack-infra | 01:10 | |
ianw | maybe it just needs a change | 01:13 |
*** openstackgerrit has joined #openstack-infra | 01:13 | |
openstackgerrit | Ian Wienand proposed opendev/system-config master: Fedora mirror update: use localauth release https://review.opendev.org/695572 | 01:13 |
ianw | there we go | 01:13 |
ianw | fungi: Release failed: VOLSER: Problems encountered in doing the dump ! | 01:14 |
ianw | this is not good :/ | 01:14 |
*** jistr has quit IRC | 01:14 | |
*** jistr has joined #openstack-infra | 01:15 | |
*** roman_g has quit IRC | 01:22 | |
*** ociuhandu has quit IRC | 01:28 | |
ianw | Fri Nov 22 00:54:32 2019 1 Volser: ReadVnodes: IH_CREATE: File exists - restore aborted | 01:33 |
ianw | Fri Nov 22 00:54:32 2019 Scheduling salvage for volume 536871007 on part /vicepa over FSSYNC | 01:33 |
ianw | i guess it was this | 01:33 |
*** jamesmcarthur has joined #openstack-infra | 01:34 | |
ianw | i'll try again, maybe the salvage fixed it | 01:39 |
*** mriedem_afk has quit IRC | 01:40 | |
*** igordc has quit IRC | 01:57 | |
*** rfolco has joined #openstack-infra | 02:22 | |
*** rfolco has quit IRC | 02:27 | |
*** ociuhandu has joined #openstack-infra | 02:28 | |
*** ociuhandu has quit IRC | 02:39 | |
*** ricolin has joined #openstack-infra | 02:50 | |
*** rlandy|bbl is now known as rlandy | 02:53 | |
*** rlandy has quit IRC | 02:56 | |
*** apetrich has quit IRC | 03:09 | |
*** jamesmcarthur has quit IRC | 03:13 | |
*** armax has joined #openstack-infra | 03:14 | |
*** armax has quit IRC | 03:14 | |
*** ociuhandu has joined #openstack-infra | 03:14 | |
*** ociuhandu has quit IRC | 03:19 | |
*** udesale has joined #openstack-infra | 03:41 | |
*** larainema has joined #openstack-infra | 04:23 | |
*** dave-mccowan has quit IRC | 04:58 | |
*** surpatil has joined #openstack-infra | 05:27 | |
*** ociuhandu has joined #openstack-infra | 05:30 | |
*** ociuhandu has quit IRC | 05:36 | |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: Update bindep.txt for some missing dependencies https://review.opendev.org/693975 | 05:44 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: Dockerfile: install vhd-util https://review.opendev.org/693976 | 05:45 |
*** ramishra has joined #openstack-infra | 05:49 | |
openstackgerrit | Merged zuul/zuul-jobs master: install-podman: also install slirp4netns https://review.opendev.org/695601 | 05:55 |
*** rfolco has joined #openstack-infra | 05:58 | |
*** rfolco has quit IRC | 06:03 | |
*** zbr has joined #openstack-infra | 06:06 | |
*** zbr|ooo has quit IRC | 06:08 | |
*** Buggys has joined #openstack-infra | 06:12 | |
*** jtomasek has joined #openstack-infra | 06:14 | |
*** slaweq has joined #openstack-infra | 06:23 | |
*** ociuhandu has joined #openstack-infra | 06:25 | |
*** slaweq has quit IRC | 06:25 | |
ianw | #status log mirror.fedora released manually successfully, manual lock dropped | 06:26 |
openstackstatus | ianw: finished logging | 06:26 |
*** janki has joined #openstack-infra | 06:31 | |
openstackgerrit | Merged openstack/diskimage-builder master: Add grub-efi-x86_64 pkg-map https://review.opendev.org/694579 | 06:37 |
*** raukadah is now known as chkumar|rover | 06:37 | |
openstackgerrit | Merged openstack/diskimage-builder master: Introduce manual setting of DIB_INIT_SYSTEM https://review.opendev.org/414179 | 06:43 |
openstackgerrit | Merged opendev/system-config master: Fedora mirror update: use localauth release https://review.opendev.org/695572 | 06:50 |
*** ociuhandu has quit IRC | 06:52 | |
openstackgerrit | Merged openstack/diskimage-builder master: Only wait for checksum processes https://review.opendev.org/695338 | 06:57 |
openstackgerrit | Merged openstack/diskimage-builder master: Adds support for GPG keyring https://review.opendev.org/693426 | 06:57 |
openstackgerrit | Merged openstack/diskimage-builder master: Ensure nouveau is blacklisted in initramfs too https://review.opendev.org/679992 | 06:57 |
openstackgerrit | Merged openstack/diskimage-builder master: Add output for mis-configured element scripts https://review.opendev.org/680785 | 06:57 |
*** ociuhandu has joined #openstack-infra | 07:03 | |
*** gfidente has joined #openstack-infra | 07:18 | |
*** xarses has joined #openstack-infra | 07:18 | |
*** pkopec has joined #openstack-infra | 07:19 | |
*** apetrich has joined #openstack-infra | 07:22 | |
*** ociuhandu has quit IRC | 07:32 | |
*** slaweq has joined #openstack-infra | 07:32 | |
openstackgerrit | Merged openstack/diskimage-builder master: Update bindep.txt for some missing dependencies https://review.opendev.org/693975 | 07:43 |
*** janki has quit IRC | 07:52 | |
*** pgaxatte has joined #openstack-infra | 07:57 | |
*** slaweq has quit IRC | 08:07 | |
*** ccamacho has joined #openstack-infra | 08:09 | |
*** ccamacho has quit IRC | 08:13 | |
*** tosky has joined #openstack-infra | 08:15 | |
*** pcaruana has joined #openstack-infra | 08:21 | |
*** ccamacho has joined #openstack-infra | 08:27 | |
*** tesseract has joined #openstack-infra | 08:35 | |
*** rpittau|afk is now known as rpittau | 08:58 | |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: authentication config: add optional token_expiry https://review.opendev.org/642408 | 09:03 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: Authorization rules: support YAML nested dictionaries https://review.opendev.org/684790 | 09:04 |
*** lucasagomes has joined #openstack-infra | 09:10 | |
*** sshnaidm is now known as sshnaidm|off | 09:21 | |
*** iurygregory has joined #openstack-infra | 09:21 | |
*** ralonsoh has joined #openstack-infra | 09:26 | |
*** Lucas_Gray has joined #openstack-infra | 09:34 | |
*** ricolin has quit IRC | 09:43 | |
*** derekh has joined #openstack-infra | 09:44 | |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Fix deletion of stale build dirs on startup https://review.opendev.org/620697 | 10:01 |
*** ociuhandu has joined #openstack-infra | 10:02 | |
*** ociuhandu has quit IRC | 10:07 | |
*** dtantsur|afk is now known as dtantsur | 10:22 | |
*** Lucas_Gray has quit IRC | 10:26 | |
*** Lucas_Gray has joined #openstack-infra | 10:35 | |
*** Lucas_Gray has quit IRC | 10:40 | |
*** Lucas_Gray has joined #openstack-infra | 10:40 | |
*** ociuhandu has joined #openstack-infra | 10:50 | |
*** ociuhandu has quit IRC | 10:52 | |
*** ociuhandu has joined #openstack-infra | 10:52 | |
*** ociuhandu has quit IRC | 10:53 | |
*** ociuhandu has joined #openstack-infra | 11:01 | |
*** yolanda__ has joined #openstack-infra | 11:08 | |
*** ociuhandu has quit IRC | 11:08 | |
*** yolanda has quit IRC | 11:09 | |
*** udesale has quit IRC | 11:10 | |
openstackgerrit | Riccardo Pittau proposed openstack/project-config master: Manage pyghmi jobs at project level https://review.opendev.org/695661 | 11:18 |
*** gfidente has quit IRC | 11:31 | |
*** yolanda__ has quit IRC | 11:33 | |
*** gfidente has joined #openstack-infra | 11:34 | |
*** pgaxatte has quit IRC | 11:42 | |
*** yolanda has joined #openstack-infra | 11:46 | |
*** yolanda has quit IRC | 11:52 | |
*** rfolco has joined #openstack-infra | 11:55 | |
*** Lucas_Gray has quit IRC | 11:55 | |
*** rcernin has quit IRC | 12:01 | |
*** Lucas_Gray has joined #openstack-infra | 12:01 | |
*** yolanda has joined #openstack-infra | 12:05 | |
iurygregory | hey infra o/ quick question: check-requirements still requires the python version markers, are there any plans to make them optional? Since we are dropping py2.7 requirements in ironic we are trying to remove the markers | 12:32 |
fungi | iurygregory: that's more a question for the requirements team in #openstack-requirements | 12:40 |
iurygregory | fungi, oh tks =0 | 12:40 |
iurygregory | =) * | 12:40 |
frickler | iurygregory: also see the discussion in https://review.opendev.org/#/c/693631/4/openstack_requirements/tests/test_check.py,unified , this is planned as future development | 12:42 |
*** slaweq has joined #openstack-infra | 12:42 | |
iurygregory | frickler, ty! | 12:42 |
*** yoctozepto has quit IRC | 12:43 | |
*** yoctozepto has joined #openstack-infra | 12:43 | |
*** larainema has quit IRC | 12:50 | |
*** jamesmcarthur has joined #openstack-infra | 13:04 | |
*** rlandy has joined #openstack-infra | 13:04 | |
*** priteau has joined #openstack-infra | 13:06 | |
*** pgaxatte has joined #openstack-infra | 13:20 | |
*** slaweq has quit IRC | 13:20 | |
*** goldyfruit_ has joined #openstack-infra | 13:20 | |
*** surpatil has quit IRC | 13:25 | |
*** priteau has quit IRC | 13:33 | |
*** eharney has quit IRC | 13:37 | |
*** mgoddard has quit IRC | 13:44 | |
*** mgoddard has joined #openstack-infra | 13:45 | |
*** pgaxatte has quit IRC | 13:50 | |
*** jaosorior has joined #openstack-infra | 14:00 | |
*** pgaxatte has joined #openstack-infra | 14:00 | |
*** ociuhandu has joined #openstack-infra | 14:08 | |
*** jamesmcarthur has quit IRC | 14:12 | |
*** ociuhandu has quit IRC | 14:16 | |
*** mriedem has joined #openstack-infra | 14:19 | |
*** pgaxatte has quit IRC | 14:32 | |
*** jamesmcarthur has joined #openstack-infra | 14:37 | |
*** eharney has joined #openstack-infra | 14:54 | |
*** ociuhandu has joined #openstack-infra | 15:04 | |
*** pgaxatte has joined #openstack-infra | 15:14 | |
*** Goneri has joined #openstack-infra | 15:17 | |
*** Lucas_Gray has quit IRC | 15:22 | |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Fix deletion of stale build dirs on startup https://review.opendev.org/620697 | 15:24 |
*** ociuhandu has quit IRC | 15:25 | |
*** slaweq has joined #openstack-infra | 15:28 | |
*** slaweq has quit IRC | 15:32 | |
*** ccamacho has quit IRC | 15:32 | |
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: WIP: use-buildset-registry: add podman support https://review.opendev.org/695051 | 15:38 |
*** armax has joined #openstack-infra | 15:42 | |
tosky | I suspect it has been already asked: would it be possible to change the favicon of review.opendev.org? Right now it's the standard gerrit one | 15:42 |
mordred | tosky: I believe it would be possible - but we'd need to design something to change it *to* - as the opendev logo is already the favicon for the gitea servers | 15:46 |
mordred | I agree though - when I have gerrit tabs open for both opendev and upstream gerrit - I can't always find the tab I'm looking for | 15:46 |
tosky | exactly :) | 15:47 |
tosky | different set of gerrit servers, but I have the same problem | 15:48 |
mordred | maybe jimmy has magical powers | 15:51 |
*** sreejithp has joined #openstack-infra | 15:51 | |
*** slaweq has joined #openstack-infra | 15:54 | |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Fix deletion of stale build dirs on startup https://review.opendev.org/620697 | 16:06 |
*** gyee has joined #openstack-infra | 16:09 | |
*** pgaxatte has quit IRC | 16:12 | |
*** chkumar|rover is now known as raukadah | 16:31 | |
*** ricolin has joined #openstack-infra | 16:32 | |
*** mattw4 has joined #openstack-infra | 16:35 | |
*** slaweq has quit IRC | 16:37 | |
*** priteau has joined #openstack-infra | 16:44 | |
clarkb | tosky: mordred I use tree style tabs and tab containers to help organize things independent of favicons | 16:47 |
clarkb | tree style tabs to collapse groups of tabs under a root tab and tab containers to separate cookies and actual underlying data bits from each other | 16:47 |
*** yolanda has quit IRC | 16:50 | |
*** pkopec has quit IRC | 16:51 | |
*** slaweq has joined #openstack-infra | 16:53 | |
*** slaweq has quit IRC | 16:58 | |
*** lucasagomes has quit IRC | 16:59 | |
*** rpittau is now known as rpittau|afk | 17:01 | |
*** jaosorior has quit IRC | 17:04 | |
*** ociuhandu has joined #openstack-infra | 17:17 | |
*** dtantsur is now known as dtantsur|afk | 17:21 | |
*** ricolin has quit IRC | 17:21 | |
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: WIP: use-buildset-registry: add podman support https://review.opendev.org/695051 | 17:25 |
openstackgerrit | Clark Boylan proposed zuul/zuul master: Increase mariadb log verbosity https://review.opendev.org/695737 | 17:27 |
*** Guest24639 has joined #openstack-infra | 17:32 | |
*** Guest24639 is now known as mgagne_ | 17:34 | |
*** tesseract has quit IRC | 17:35 | |
*** michael-beaver has joined #openstack-infra | 17:35 | |
*** iurygregory has quit IRC | 17:43 | |
*** gfidente has quit IRC | 17:43 | |
*** jaosorior has joined #openstack-infra | 17:44 | |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: [WIP] admin REST API: zuul-web integration https://review.opendev.org/643536 | 17:46 |
*** ociuhandu has quit IRC | 17:54 | |
*** ociuhandu has joined #openstack-infra | 17:54 | |
*** ociuhandu has quit IRC | 18:00 | |
*** jamesmcarthur has quit IRC | 18:00 | |
*** rascasoft has quit IRC | 18:00 | |
*** derekh has quit IRC | 18:00 | |
*** bnemec is now known as beekneemech | 18:04 | |
openstackgerrit | Clark Boylan proposed zuul/zuul master: DNM: Increase mariadb log verbosity https://review.opendev.org/695737 | 18:08 |
openstackgerrit | Clark Boylan proposed zuul/zuul master: DNM: Increase mariadb log verbosity https://review.opendev.org/695737 | 18:14 |
*** cmurphy is now known as cmorpheus | 18:18 | |
*** priteau has quit IRC | 18:19 | |
*** igordc has joined #openstack-infra | 18:23 | |
*** irclogbot_0 has quit IRC | 18:27 | |
*** ralonsoh has quit IRC | 18:30 | |
*** irclogbot_2 has joined #openstack-infra | 18:30 | |
*** lseki has joined #openstack-infra | 18:31 | |
*** diablo_rojo has joined #openstack-infra | 18:31 | |
lseki | hi, is openstack-infra@lists.openstack.org the correct mailing list to ask about issues deploying zuul v3? | 18:34 |
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: WIP: use-buildset-registry: add podman support https://review.opendev.org/695051 | 18:35 |
*** beisner has quit IRC | 18:35 | |
*** beisner has joined #openstack-infra | 18:36 | |
fungi | lseki: you want zuul-discuss@lists.zuul-ci.org | 18:41 |
fungi | also there's a #zuul channel on freenode | 18:41 |
fungi | you could just ask in there | 18:42 |
lseki | @fungi thank you! I'll ask there | 18:42 |
*** harlowja has joined #openstack-infra | 18:44 | |
openstackgerrit | Merged zuul/zuul master: Fix deletion of stale build dirs on startup https://review.opendev.org/620697 | 18:58 |
*** eharney has quit IRC | 19:04 | |
*** jaosorior has quit IRC | 19:08 | |
*** diablo_rojo has quit IRC | 19:13 | |
rm_work | hey, seeing this show up in the octavia grenade tests, during keystone upgrade: | 19:19 |
rm_work | 2019-11-22 19:00:47.992 | Found existing installation: PyYAML 3.12 | 19:19 |
rm_work | 2019-11-22 19:00:48.412 | ERROR: Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. | 19:19 |
rm_work | I feel like that's generally something that happens when the images have changed? | 19:19 |
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: WIP: use-buildset-registry: add podman support https://review.opendev.org/695051 | 19:20 |
*** sreejithp has quit IRC | 19:22 | |
clarkb | rm_work: its a pip thing | 19:23 |
clarkb | rm_work: it refuses to uninstall distutils managed packages (which may have come from system packaging) | 19:24 |
rm_work | yeah but it started all the sudden? | 19:24 |
clarkb | sure could be the image packages updated their dep list or went from setuptools to distutils | 19:24 |
clarkb | or it could be the job added a package install somewhere for some reason | 19:24 |
rm_work | hmm | 19:25 |
rm_work | wonder where to even go to address this | 19:25 |
rm_work | i wonder if all the keystone gates are broken? or somehow just grenade? | 19:25 |
clarkb | I would start by finding out where pyyaml is getting installed | 19:25 |
clarkb | then based on that yuo can determine how best to update it | 19:25 |
rm_work | hmm | 19:25 |
rm_work | well, the failure is during grenade devstack installing keystone test-requirements | 19:26 |
rm_work | but need to figure out what installed it originally? | 19:26 |
clarkb | yes you need to find out what pulled in pyyaml initially | 19:26 |
clarkb | grenade runs d-g which may install it for the service listings? | 19:27 |
clarkb | rm_work: have a link to the failure? | 19:34 |
rm_work | https://48dd2212f38eca3e7c7b-78ed15d7b1d21cdbe9742e6d06f5d670.ssl.cf1.rackcdn.com/693765/15/check/octavia-grenade/5c21387/logs/grenade.sh.txt.gz | 19:34 |
*** auristor has quit IRC | 19:34 | |
clarkb | interesting. It actually installs pyyaml 5.1.2 very early in that log | 19:37 |
clarkb | then later it installs python-yaml from distro packaging. Then installs pyyaml 5.1.2 a bit more and eventually fails | 19:37 |
clarkb | ok I see. Pip ends up being upgraded somewhere in there | 19:39 |
clarkb | and the old pip 9.0.3 doesn't care about the distutils thing. New pip 19.2.3 does and then it breaks | 19:39 |
johnsom | And some of those are py27 some py3 | 19:39 |
clarkb | ya at the very beginning of the job it downgrades pip, then later it must upgrade pip (I haven't found where yet, but I can see that newer pip is being used later) | 19:41 |
clarkb | one possible fix is to remove python-yaml from install_nova_hypervisor | 19:42 |
clarkb | and instead install pyyaml from pip (which we seem to do everywhere else) | 19:42 |
clarkb | oh that is a dependency of novnc? | 19:42 |
clarkb | ya I think that is how python-yaml gets sucked in. It is from the novnc system package | 19:43 |
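[editor's note] The refusal clarkb describes comes from pip >= 10: a distutils install writes only a bare `.egg-info` file, so pip has no record of which files it would need to remove. A minimal sketch of that heuristic (an approximation for illustration, not pip's actual code; package name and metadata are from the log):

```python
# Simulate a distutils-style install and detect it the way pip roughly does:
# a bare .egg-info *file* means there is no RECORD of installed files.
import os
import tempfile

def looks_distutils_installed(site_dir, name, version):
    egg_info = os.path.join(site_dir, "%s-%s.egg-info" % (name, version))
    return os.path.isfile(egg_info)

site = tempfile.mkdtemp()
# distutils leaves a single metadata file, not a directory with RECORD
with open(os.path.join(site, "PyYAML-3.12.egg-info"), "w") as f:
    f.write("Metadata-Version: 1.0\nName: PyYAML\nVersion: 3.12\n")

print(looks_distutils_installed(site, "PyYAML", "3.12"))  # → True
```

When pip detects this it aborts rather than guess at the file list. A commonly used workaround is `pip install --ignore-installed PyYAML`, which installs over the old files without attempting the uninstall.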
*** ociuhandu has joined #openstack-infra | 19:43 | |
*** auristor has joined #openstack-infra | 19:44 | |
*** michael-beaver has quit IRC | 19:45 | |
*** auristor has quit IRC | 19:50 | |
*** ociuhandu has quit IRC | 19:52 | |
openstackgerrit | Clark Boylan proposed zuul/zuul master: DNM: Increase mariadb log verbosity https://review.opendev.org/695737 | 19:53 |
*** auristor has joined #openstack-infra | 19:54 | |
rm_work | uhhh ok so ... this is a nova issue? | 19:58 |
fungi | rm_work: a nova issue in that nova is requiring novnc? | 19:59 |
clarkb | rm_work: it is a pip behavior change issue | 19:59 |
rm_work | yeah but the "fix" would be in nova? | 20:00 |
clarkb | nova tickles it by installing novnc which installs an old version of python-yaml with distutils | 20:00 |
rm_work | or where | 20:00 |
rm_work | it's not something we can fix in the octavia repo, i'm 99% sure of that at least :D | 20:00 |
clarkb | The real fix would be to update the python-yaml package to use setuptools instead of distutils. Chances of that happening for a stable release of a distro are slim | 20:00 |
rm_work | so i'm trying to figure out where the fix would go | 20:00 |
clarkb | One option is to install everything in virtualenvs | 20:00 |
clarkb | I poked at that for a short bit before giving up because there didn't seem to be enough momentum behind it to get it done | 20:01 |
rm_work | is that just a var we can set in the grenade gate job definition? like USE_VENV=True? | 20:01 |
rm_work | yeah i wasn't sure if that ever fully worked in devstack | 20:01 |
clarkb | no, the current code for it does not work at all | 20:01 |
rm_work | k | 20:01 |
clarkb | and even after poking at it for a couple weeks I couldn't quite get it right for all the corner cases (oddly enough ironic was one of those corner cases) | 20:01 |
rm_work | so ... do you see what i'm getting at though? like, ok, we need to change how this installs... but *where*? devstack repo? nova repo? | 20:02 |
clarkb | rm_work: I don't think there is a good answer | 20:02 |
rm_work | T_T | 20:02 |
clarkb | if it were me I would delete novnc | 20:02 |
clarkb | as an openstack user for ~7 years I've never used it | 20:02 |
fungi | i doubt nova has a devstack plug-in doing that, it's probably in devstack | 20:02 |
clarkb | but I bet there are many that think I'm crazy for that suggestion | 20:02 |
rm_work | octavia's gate is frozen until we can fix this, or until we just decide to stop running grenade as voting <_< | 20:03 |
rm_work | i'd rather not do the latter | 20:03 |
clarkb | Another option may be to install novnc as the very last thing devstack does | 20:03 |
fungi | it's a topic to bring up with the qa team next, i suspect | 20:03 |
rm_work | i mean, we definitely don't use novnc in our gate, so maybe we COULD tell it to not use that somehow? is that possible I wonder? | 20:04 |
fungi | if devstack has an option you can pass for that, then maybe | 20:05 |
rm_work | ah well, i can't track this presently -- need to run to meet someone for lunch T_T will ponder and maybe hit up QA like you recommended | 20:05 |
clarkb | rm_work: disable the n-vnc service | 20:05 |
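[editor's note] clarkb's suggestion would look something like this in the job's devstack configuration. Untested sketch; the service name is assumed to be `n-novnc` (the usual devstack name for the noVNC proxy), which differs slightly from the shorthand used in the log.

```
# local.conf fragment: skip the noVNC proxy service so devstack never
# installs the novnc distro package that drags in distutils python-yaml
[[local|localrc]]
disable_service n-novnc
```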
rm_work | k will look into that later | 20:05 |
rm_work | thanks for the advice :) | 20:06 |
fungi | it looks like octavia's grenade job is custom anyway, not the standard grenade job, so you presumably have some room to make adjustments to how it runs | 20:06 |
clarkb | another option is to pin pip again | 20:06 |
fungi | if it were the standard grenade job then it would probably take convincing more people or making an octavia-specific grenade job (which conveniently you already have) | 20:06 |
clarkb | I'm guessing we stopped pinning and that is what caused this problem to pop up actually | 20:07 |
clarkb | fungi: well I don't think we test vnc at all in any grenade or tempest job | 20:07 |
clarkb | other than "did the service start" we aren't getting much value out of running it | 20:07 |
fungi | yeah, if all it being there does is test that it can install and start then it seems questionable to keep testing that everywhere | 20:07 |
fungi | but especially in an upgrade job | 20:08 |
fungi | if you need to test that, test it in a basic devstack job that runs on changes to devstack | 20:08 |
*** yolanda has joined #openstack-infra | 20:10 | |
efried | o/ infra. | 20:13 |
efried | Any update on the "zuul.o.o/jobs crashes my browser and eventually my computer" issue? | 20:13 |
clarkb | efried: I think tristanC found some bugs in the js? I don't know if they have been fixed yet | 20:13 |
clarkb | https://review.opendev.org/#/c/695450/ maybe? | 20:14 |
tristanC | clarkb: that's the fix yes | 20:14 |
efried | nice! | 20:14 |
clarkb | looks like a yarn error due to a 503 from github | 20:15 |
efried | clarkb, tristanC: so since this is zuul.o.o, what's the signal for me being able to try again? IIRC it's not as simple as "once the patch merges". | 20:15 |
clarkb | tristanC's recheck should hopefully get it merged (and then because it is js I think we deploy that automatically) | 20:15 |
clarkb | efried: usually within an hour of the change merging the js will have been updated in production | 20:16 |
*** eharney has joined #openstack-infra | 20:16 | |
clarkb | I don't know if we expose js versions easily though | 20:16 |
clarkb | we do expose the python version on the web ui but that is different | 20:16 |
efried | cool. I'll just remember to shut down all my programs before trying again, and watch my memory so I can kill the mofo if it goes off the rails. | 20:18 |
efried | thanks tristanC clarkb! | 20:18 |
*** ociuhandu has joined #openstack-infra | 20:23 | |
*** ociuhandu has quit IRC | 20:28 | |
openstackgerrit | Clark Boylan proposed zuul/zuul master: Disable Mariadb TZINFO table generation https://review.opendev.org/695737 | 20:33 |
*** slaweq has joined #openstack-infra | 20:38 | |
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: WIP: use-buildset-registry: add podman support https://review.opendev.org/695051 | 20:41 |
*** smarcet has joined #openstack-infra | 20:44 | |
*** rcernin has joined #openstack-infra | 20:49 | |
*** slaweq has quit IRC | 20:56 | |
efried | o/ again infra | 21:02 |
efried | Thought I would report this in case it's not already known. | 21:02 |
efried | "frequently" the logs for a job seem unreachable from a job result page | 21:03 |
efried | i.e. the 'Logs' tab doesn't show up, the failure summary doesn't appear (if applicable), and clicking through to the log link just spins | 21:03 |
clarkb | efried: did the jobs run more than 30 days ago? | 21:03 |
efried | this *seems* to happen most frequently when the logs are in rackcdn | 21:03 |
efried | no, it's not expiry, it's actually unreachable destination. | 21:04 |
efried | if I wait long enough I get a net error (I think) | 21:04 |
clarkb | have an example? | 21:04 |
efried | e.g.https://d642c724308b53c92c32-9e6a8447d6325ce608287fa851bc53b0.ssl.cf1.rackcdn.com/695759/1/check/openstack-tox-docs/4b88e27/docs/ | 21:04 |
efried | whoops | 21:04 |
efried | https://d642c724308b53c92c32-9e6a8447d6325ce608287fa851bc53b0.ssl.cf1.rackcdn.com/695759/1/check/openstack-tox-docs/4b88e27/docs/ | 21:04 |
efried | from https://zuul.opendev.org/t/openstack/build/4b88e273522744ff9f083637a660bada | 21:04 |
efried | and the problem seems to resolve itself at some point, sometimes, maybe, eventually, dunno. | 21:04 |
clarkb | ya that seems to work for me | 21:05 |
efried | orly? | 21:05 |
clarkb | yup | 21:05 |
clarkb | I wonder if there is a bad cdn endpoint closer to you than me | 21:05 |
efried | the page loads, and you see a Logs tab, and you can click through to the logs, and and and? | 21:05 |
efried | I get ERR_CONNECTION_TIMED_OUT when I try to click through to that longer link (.../docs/) | 21:06 |
clarkb | yes I can get the log dir listing and open log files | 21:06 |
efried | that seems... odd | 21:06 |
efried | that it would work for you and not for me. | 21:06 |
efried | obv my network is healthy enough | 21:07 |
efried | I'm not firewalled or VPNed or anything. | 21:07 |
clarkb | it could be an issue in the cdn | 21:07 |
clarkb | where a cdn endpoint isn't working properly and I happen to not talk to it | 21:07 |
efried | sorry for dummyness but what's a cdn? | 21:08 |
*** eharney has quit IRC | 21:08 | |
fungi | content distribution network | 21:08 |
clarkb | basically a big caching system to get data into more local areas of the internet | 21:08 |
openstackgerrit | Merged zuul/zuul master: web: handle jobs that have multiple parent https://review.opendev.org/695450 | 21:09 |
efried | oh, well, THAT will never work | 21:09 |
*** ricpf has joined #openstack-infra | 21:10 | |
efried | I guess it's opaque to the user (i.e. same URL can come from different CDNs kind of thing)? | 21:10 |
fungi | yes | 21:11 |
efried | ...opaque unless it works for you and doesn't work for me, of course :P | 21:11 |
efried | Okay, so like, nothing really to be done about this? | 21:11 |
fungi | usually it's coupled with either anycast addressing or selective dns responses | 21:11 |
efried | o, maybe I recycle my DNS? | 21:12 |
*** ociuhandu has joined #openstack-infra | 21:12 | |
fungi | so depending on where your client is on the internet you end up getting data returned from a different server | 21:12 |
fungi | well, odds are you'll get sent to the same one because the selection is usually based on locality | 21:12 |
efried | pah | 21:12 |
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: WIP: use-buildset-registry: add podman support https://review.opendev.org/695051 | 21:13 |
*** mgagne_ is now known as mgagne | 21:13 | |
efried | well, maybe it'll be better when I get back from school runs. Thanks for the info y'all. | 21:14 |
clarkb | efried: one way to try and test that theory is to bounce through some other internet connection (maybe corporate vpn or ssh tunnel to a irc bouncer etc) | 21:17 |
openstackgerrit | Merged zuul/zuul master: Disable Mariadb TZINFO table generation https://review.opendev.org/695737 | 21:18 |
*** rlandy has quit IRC | 21:18 | |
*** ricpf has quit IRC | 21:25 | |
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: Add build-container-image role https://review.opendev.org/695251 | 21:34 |
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: use-buildset-registry: Add podman support https://review.opendev.org/695051 | 21:34 |
fungi | 695450 merged at 21:09, last puppet apply on zuul.o.o completed at 20:56 so we're waiting for the next one i guess | 21:39 |
fungi | they seem to be coming roughly 30 minutes apart, so this pulse is already overdue | 21:41 |
clarkb | fungi: ya they tend to go between every 30 minute and every 45 minutes | 21:41 |
clarkb | I think we are close to the cutoff so if we run a little long we wait for the next one | 21:41 |
clarkb | I'm guessing it will apply at about 21:57UTC | 21:42 |
fungi | Nov 22 21:42:53 zuul01 puppet-user[7055]: (/Stage[main]/Zuul::Web/Exec[unpack-zuul-web]) 'rm -rf /opt/zuul-web/source/* && tar -C /opt/zuul-web/source -xzf ./zuul-js-content-master.tar.gz' returned 2 instead of one of [0] | 21:50 |
clarkb | fungi: I think we may get our js out of the pip install now | 21:52 |
clarkb | maybe | 21:52 |
clarkb | that target dir is empty now and a hard refresh on the zuul status page still works for me | 21:52 |
fungi | ahh | 21:52 |
fungi | so maybe that exec can just be cleaned up | 21:53 |
clarkb | corvus: ^ you may know how to confirm, but ya I would expect my browser to be sad right now based on that dir being empty if we still relied on it | 21:53 |
*** mattw4 has quit IRC | 21:54 | |
corvus | clarkb: yes, i think we tried to normalize our installation since we kept breaking it | 21:57 |
*** rfolco has quit IRC | 21:58 | |
corvus | hrm | 21:59 |
*** mattw4 has joined #openstack-infra | 21:59 | |
corvus | on second look, it's /opt/zuul-web/content that's important | 21:59 |
corvus | and i think that comes from a js tarball build | 21:59 |
corvus | it has timestamps of 19:27 | 22:00 |
corvus | the tarball has a timestamp of 21:42 | 22:00 |
corvus | so i think we can infer that it was built at 1927 and installed at 2142 | 22:00 |
*** Goneri has quit IRC | 22:02 | |
clarkb | huh that is when puppet ran | 22:05 |
*** eharney has joined #openstack-infra | 22:06 | |
clarkb | looking at the puppet what we seem to do is unpack to the source/ dir then rsync from there to the content/ dir | 22:07 |
clarkb | that rsync depends on the unpack so shouldn't have run because we failed | 22:07 |
clarkb | Nov 22 19:27:09 zuul01 puppet-user[24412]: (/Stage[main]/Zuul::Web/Exec[sync-zuul-web]) Triggered 'refresh' from 1 events | 22:09 |
clarkb | I think that means we successfully installed the previous content but not the current content | 22:09 |
fungi | ick | 22:09 |
clarkb | http://tarballs.openstack.org/zuul/zuul-content-latest.tar.gz is the file not zuul-js-content-master.tar.gz | 22:10 |
clarkb | zuul-js-content... is what we have on disk though and that timestamp matches the more recent run | 22:12 |
clarkb | ok the tarball is just a tar, no gzip. This is why it failed | 22:13 |
clarkb | the timestamps roughly match when that would've been built for the js update | 22:14 |
clarkb | aha ok my checkout of puppet-zuul was old | 22:16 |
clarkb | https://tarballs.opendev.org/zuul/zuul/zuul-js-content-master.tar.gz is what we want | 22:17 |
clarkb | and confirmed that is a bad tarball | 22:17 |
clarkb | corvus: ^ I think the issue is we are publishing a tar instead of a tar.gz | 22:17 |
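The failure mode diagnosed above can be reproduced in a few lines: puppet's `tar -xzf` exits non-zero on a bare tar, while an extractor that sniffs the gzip magic bytes, or uses `tarfile`'s `r:*` auto-detection, handles both. A minimal sketch (file and member names are illustrative):

```python
import io
import tarfile

GZIP_MAGIC = b"\x1f\x8b"

def is_gzipped(data):
    """True if the byte stream starts with the gzip magic number."""
    return data[:2] == GZIP_MAGIC

def extract_names(data):
    """List member names, letting tarfile auto-detect compression ("r:*")."""
    with tarfile.open(fileobj=io.BytesIO(data), mode="r:*") as tf:
        return tf.getnames()

# Build a plain (uncompressed) tar in memory to mimic the mislabeled .tar.gz.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tf:
    payload = b"console.log('zuul');"
    info = tarfile.TarInfo(name="main.js")
    info.size = len(payload)
    tf.addfile(info, io.BytesIO(payload))
data = buf.getvalue()

print(is_gzipped(data))       # -> False: a bare tar lacks the gzip magic
print(extract_names(data))    # -> ['main.js']
```

Unlike `tar -xzf`, the `r:*` mode would have extracted the mislabeled file without complaint.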
*** kopecmartin is now known as kopecmartin|off | 22:18 | |
corvus | <sigh> | 22:19 |
*** smarcet has quit IRC | 22:20 | |
clarkb | I'm tracking this back through the job defs | 22:20 |
clarkb | https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/fetch-javascript-content-tarball/tasks/main.yaml#L1-L8 is what I expect to have been run, but that has a z for gzip | 22:22 |
*** rcernin has quit IRC | 22:24 | |
clarkb | http://zuul.opendev.org/t/zuul/build/15267f7cba8b40799f8dc929afd4ec14/console#4/0/0/ubuntu-bionic is the build and that shows it doing czf too | 22:24 |
*** nhicher has quit IRC | 22:24 | |
clarkb | except that produces http://tarballs.openstack.org/zuul/zuul-content-latest.tar.gz I think | 22:26 |
clarkb | note no js in those file names | 22:26 |
corvus | we're using the log server as an intermediary -- what if our encoding settings there are wrong for this? | 22:26 |
clarkb | but timestamps don't support that. I'm confused | 22:26 |
clarkb | oh that could be, we are fetching the "plain text" version back because we don't send an accept-encoding: gzip or whatever | 22:27 |
clarkb | that may actually be it and explain a good deal of the confusion :) | 22:27 |
corvus | see the artifact at http://zuul.opendev.org/t/zuul/build/15267f7cba8b40799f8dc929afd4ec14 which later gets promoted to the tarballs site | 22:27 |
clarkb | `curl -H 'accept-encoding: gzip' https://tarballs.opendev.org/zuul/zuul/zuul-js-content-master.tar.gz` still produces a tar for me not a tar.gz | 22:30 |
corvus | clarkb: try the log url | 22:31 |
corvus | https://8059d9eb12f46e379447-cfcb5348d0a5bd4d7cf82711ec310965.ssl.cf5.rackcdn.com/695737/5/gate/build-javascript-content-tarball/15267f7/artifacts/zuul-content-latest.tar.gz | 22:31 |
corvus | clarkb: i think that's where it's going to get ungzipped | 22:31 |
clarkb | oh right tarballs will already be encoded | 22:31 |
clarkb | yup that seems to be it | 22:32 |
*** nhicher has joined #openstack-infra | 22:32 | |
clarkb | this would've been caused by our switch to gzip from deflate | 22:32 |
clarkb | because prior to that gzip encoded files would have had to be returned with the identity encoding when clients didn't specify deflate | 22:33 |
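The behavior being described can be modeled as content negotiation on a stored object whose bytes are gzipped and tagged `Content-Encoding: gzip`. This is a hypothetical model inferred from the curl experiments in the log, not the CDN's actual logic:

```python
import gzip

def serve(stored_gzip_bytes, accept_encoding=None):
    """Hypothetical CDN model: the object is stored gzipped with
    Content-Encoding: gzip. If the client lists gzip in Accept-Encoding,
    pass the stored bytes through unchanged; otherwise inflate them and
    respond with the identity encoding."""
    offered = [token.split(";")[0].strip()
               for token in (accept_encoding or "").split(",")]
    if "gzip" in offered:
        return stored_gzip_bytes, "gzip"
    return gzip.decompress(stored_gzip_bytes), "identity"

stored = gzip.compress(b"bare tar bytes")
body, encoding = serve(stored)  # ansible's uri module sent no header
print(encoding)                 # -> identity: the .tar.gz comes back as a plain tar
```

Under this model the downloader gets a file named `.tar.gz` whose contents are an already-inflated tar, which is exactly what the failing `tar -xzf` saw.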
corvus | this is the download task we need to fix i think: http://zuul.opendev.org/t/zuul/build/c13a2c0462064de1b0818592d80404c2/console#1/0/5/localhost | 22:33 |
corvus | looks like we're using the uri ansible module | 22:34 |
clarkb | I wonder if the http rfcs allow for us to specify we want identity | 22:34 |
corvus | we can add arbitrary headers to the request | 22:35 |
clarkb | if I pass accept-encoding: identity I don't get the identity version back from rax cdn | 22:36 |
clarkb | now to try it with a weight | 22:36 |
corvus | agreed, i get tar | 22:36 |
*** slaweq has joined #openstack-infra | 22:37 | |
clarkb | explicit weight of 1 is ignored too | 22:37 |
clarkb | this is a fun problem. I'm not sure what the best way to approach this generically would be | 22:39 |
clarkb | I think the behavior we want is for artifacts to always be returned exactly as uploaded | 22:40 |
corvus | yeah | 22:40 |
clarkb | they are a special class of object that we don't want browsers to interpret for us | 22:40 |
auristor | mirror.epel, mirror.fedora, mirror.opensuse, and mirror.yum-puppetlabs still need to be fixed | 22:40 |
corvus | clarkb: it's not clear to me why the server is decompressing it | 22:42 |
*** slaweq has quit IRC | 22:42 | |
clarkb | corvus: I think because not sending an accept encoding implies you prefer text and the server is trying to accommodate that | 22:42 |
clarkb | auristor: thank you for the reminder | 22:42 |
corvus | clarkb: what about "gzip,*"? | 22:43 |
corvus | or identity,gzip | 22:43 |
corvus | or *,gzip | 22:44 |
corvus | all of those return what we want in this case; it's hard to extrapolate what they might mean though since the cdn seems to be ignoring identity | 22:44 |
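For reference, RFC 7231 section 5.3.4 defines how `Accept-Encoding` tokens and q-values are supposed to rank; a small illustrative parser shows what the headers tried above nominally mean (the rax CDN evidently layers its own undocumented rules on top, since it ignored `identity`):

```python
def parse_accept_encoding(header):
    """Parse an Accept-Encoding header into (coding, qvalue) pairs sorted
    by preference, following RFC 7231 section 5.3.4 semantics."""
    prefs = []
    for item in header.split(","):
        item = item.strip()
        if not item:
            continue
        coding, _, params = item.partition(";")
        qvalue = 1.0
        params = params.strip()
        if params.startswith("q="):
            qvalue = float(params[2:])
        prefs.append((coding.strip(), qvalue))
    # stable sort preserves header order among equal qvalues
    prefs.sort(key=lambda pair: pair[1], reverse=True)
    return prefs

print(parse_accept_encoding("identity;q=1, gzip"))  # -> [('identity', 1.0), ('gzip', 1.0)]
print(parse_accept_encoding("gzip,*"))              # -> [('gzip', 1.0), ('*', 1.0)]
```

Per the RFC, `identity;q=1, gzip` should rank identity at least as high as gzip, so a conforming server could legitimately return either; that ambiguity matches why the weighted experiments were inconclusive.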
*** KeithMnemonic has quit IRC | 22:44 | |
*** slaweq has joined #openstack-infra | 22:45 | |
corvus | they all produce a gzip when i pull https://8059d9eb12f46e379447-cfcb5348d0a5bd4d7cf82711ec310965.ssl.cf5.rackcdn.com/695737/5/gate/build-javascript-content-tarball/15267f7/job-output.txt | 22:46 |
corvus | though, to be fair, that was stored with a gzip content-encoding | 22:46 |
clarkb | ya, we can probably treat gzip as a special file type that you might archive then always request it back? will that get us gzipped files when we don't want them though? | 22:46 |
corvus | "if artifact path ends with .tgz or .gz accept gzip"? :) | 22:47 |
corvus | i wonder if doc promotion is broken too? | 22:48 |
clarkb | could be unless it untars without specifying type? I think newer tar magically handles that for us | 22:49 |
*** slaweq has quit IRC | 22:50 | |
corvus | another solution is that we could get rid of all of the logfiles that jobs pre-gzip and then drop the content-encoding headers on already-compressed data | 22:51 |
corvus | that's kind of a long-term fix though | 22:51 |
*** ociuhandu has quit IRC | 22:51 | |
*** rh-jelabarre has quit IRC | 22:52 | |
corvus | clarkb: the docs use the 'unarchive' module which i think handles decompression before handing it off to tar | 22:53 |
corvus | either that, or it got an uncompressed file and knew not to decompress it | 22:53 |
corvus | those seem equally plausible, since the input is a filename which ends with ".bz2" but it says it's running a tar command on a tempfile with no extension and there are no compression args | 22:54 |
corvus | i feel certain one of those is true which is why docs are being published; i think we can set that aside for the moment | 22:55 |
clarkb | k | 22:56 |
corvus | clarkb: i think the best solution would be to change upload-logs-swift to not set content-encoding on already compressed data, which will break browsing ".gz" text logs in the browser, but fix downloading tarballs (for this and other cases). and then we go and remove pre-compression anywhere it's still happening. what do you think? | 22:57 |
clarkb | we'd essentially stop letting the serves know they can decompress this data? | 22:58 |
clarkb | I think that will work. Not sure how widespread the explicitly gzipped log files still are | 22:58 |
corvus | clarkb: yeah, and then reconcile things so that's only happening on things that they shouldn't be decompressing | 22:59 |
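The upload-side proposal can be sketched as a header-selection function; the function and flag names here are hypothetical, not the actual upload-logs-swift code:

```python
import mimetypes

def swift_upload_headers(path, tag_precompressed=False):
    """Sketch of the proposed upload-logs-swift change. Today pre-gzipped
    logs are uploaded with Content-Encoding: gzip so browsers inflate them
    inline; dropping that header (the default here) means intermediaries
    return every artifact byte-for-byte, at the cost of in-browser viewing
    of ".gz" text logs."""
    guessed_type, _ = mimetypes.guess_type(path)
    headers = {"Content-Type": guessed_type or "application/octet-stream"}
    if tag_precompressed and path.endswith((".gz", ".tgz")):
        headers["Content-Encoding"] = "gzip"  # current behavior, kept behind a flag
    return headers

# proposed default: no Content-Encoding on an already-compressed artifact
print("Content-Encoding" in swift_upload_headers("zuul-js-content-master.tar.gz"))  # -> False
```

With the header gone, the only remaining work is removing pre-compression from roles like stage-output and fetch-subunit-output, as discussed below.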
*** mriedem is now known as mriedem_away | 23:00 | |
corvus | the stage-output role compresses logs | 23:01 |
corvus | so does fetch-subunit-output | 23:01 |
*** mattw4 has quit IRC | 23:01 | |
corvus | stage-output has an option; we can add that to fetch-subunit-output | 23:01 |
corvus | maybe use the same option name, and add it to site vars | 23:02 |
corvus | or, rather, the base job, so it can be overridden | 23:02 |
corvus | i think those 2 things would probably take care of most of the v3-native jobs. who knows what's still going on in d-g. | 23:02 |
corvus | that's an after-thanksgiving thing. i think the only thing we could safely do before thanksgiving is fork the download-artifact role and add an accept header. honestly, i think maybe we can live with the jobs page being broken another week? | 23:03 |
corvus | (also, we could just manually unpack that tarball right now to fix that) | 23:04 |
clarkb | I expect d-g is still compressing everything it knows about | 23:04 |
clarkb | but we are getting close to finally not using d-g so maybe less of a concern | 23:05 |
corvus | yeah, and if folks care, a patch could be merged to just stop doing that | 23:05 |
clarkb | ya I expect it is also of lower priority given the holiday | 23:05 |
corvus | clarkb: how about i manually unpack the zuul webui, then after thanksgiving we talk about changing those compression settings on log uploads? | 23:06 |
corvus | clarkb: oh wait i missed something: something is being untarred on zuul01, what is it? | 23:08 |
clarkb | corvus: that zuul-js-content tarball is being untarred on zuul01 | 23:09 |
clarkb | and failing because it isn't wrapped in a gzip egg as expected | 23:09 |
corvus | -rw-r--r-- 1 root root 8611527 Jun 12 23:10 zuul-content-latest.tar.gz | 23:09 |
clarkb | /opt/zuul-web/zuul-js-content-master.tar.gz | 23:09 |
corvus | -rw-r--r-- 1 root root 41492480 Nov 22 21:42 zuul-js-content-master.tar.gz | 23:09 |
clarkb | it is the second one | 23:09 |
corvus | why do we have current timestamps then? | 23:10 |
corvus | -rw-r--r-- 1 root root 3275461 Nov 22 19:27 main.fbb9e77c.js | 23:10 |
clarkb | corvus: that timestamp is from a previous build today. My hunch is that ovh does not do the gzip -> text for us | 23:11 |
corvus | ooooooh | 23:11 |
clarkb | and that this comes down to which system archived the tarball | 23:11 |
*** lseki has quit IRC | 23:11 | |
clarkb | the timestamps for the Nov 22 21:42 tarball's files are 20:42-20:43 ish | 23:12 |
corvus | yep: -rw-rw-r-- 1 zuul zuul 3275589 Nov 22 20:43 main.9c66fbe6.js | 23:12 |
corvus | a reload of the js app followed by a visit to http://zuul.opendev.org/t/openstack/jobs works now | 23:13 |
corvus | (be sure to reload first :) | 23:13 |
clarkb | excellent. efried ^ fyi | 23:13 |
clarkb | this turned out to be a really brain twisting behavior | 23:14 |
efried | corvus, clarkb: cool, thank you. Um, what's a "reload of the js app"? | 23:14 |
efried | oh, meaning if I already had the zuul.o.o front page open, reload it first before punching the jobs tab? | 23:14 |
clarkb | efried: I do a ctrl + shift + R relaod of the zuul page | 23:14 |
corvus | yep | 23:14 |
clarkb | a simple f5 is probably sufficient | 23:15 |
efried | ack, thanks. | 23:15 |
*** slaweq has joined #openstack-infra | 23:15 | |
corvus | (it's a monolithing js app, so if you click on the jobs tab, it'll load the data without reloading the js which had the bug) | 23:15 |
corvus | monolithic even | 23:15 |
efried | woot! | 23:15 |
efried | monolithing is some kind of troll habitat | 23:16 |
clarkb | where the wild fungi grow | 23:16 |
corvus | i want a postcard from there | 23:16 |
corvus | btw, i just saw the job tags in action on http://zuul.opendev.org/t/zuul/jobs | 23:16 |
efried | Thanks for this fix tristanC. Before I knew about zuul.o.o/jobs I never missed it, but once it quit working I found myself needing it all the time. | 23:16 |
efried | and was very sad when it would crash my world | 23:17 |
corvus | if you put 'bifrost' in the search box, you can see the fix too | 23:17 |
corvus | bifrost-base is parented differently on different branches, so it shows up multiple times now | 23:18 |
*** slaweq has quit IRC | 23:19 | |
corvus | so are 2 children of that job as well | 23:19 |
corvus | er, 3 children | 23:19 |
clarkb | the tags are the "auto-generated" and "all-platforms" things? | 23:20 |
corvus | clarkb: ye | 23:20 |
corvus | whatever you put in "tags: " shows up there | 23:20 |
corvus | we use those to give the auto-generating helper script hints | 23:21 |
*** ociuhandu has joined #openstack-infra | 23:34 | |
*** slaweq has joined #openstack-infra | 23:43 | |
*** tosky has quit IRC | 23:45 | |
*** jtomasek has quit IRC | 23:49 | |
*** slaweq has quit IRC | 23:52 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!