*** EmilienM_ has joined #zuul | 00:04 | |
*** odyssey4me has quit IRC | 00:11 | |
*** odyssey4me has joined #zuul | 00:11 | |
*** EmilienM_ is now known as EmilienM | 00:48 | |
*** EmilienM has joined #zuul | 00:48 | |
*** JasonCL has quit IRC | 00:48 | |
*** JasonCL has joined #zuul | 00:49 | |
*** JasonCL has quit IRC | 00:59 | |
*** JasonCL has joined #zuul | 01:36 | |
*** JasonCL has quit IRC | 01:45 | |
*** JasonCL has joined #zuul | 02:49 | |
SpamapS | Hrm. really wish we had left the tenant out of the API/console-stream urls. Would make the nginx/apache configs even simpler. | 02:49 |
* SpamapS disengages hindsight | 02:50 | |
*** JasonCL has quit IRC | 02:54 | |
*** JasonCL has joined #zuul | 03:02 | |
* SpamapS finally updating his zuul master to do the javascript things. :-P | 03:05 | |
*** JasonCL has quit IRC | 03:06 | |
SpamapS | hmmm | 03:32 |
SpamapS | $ curl http://127.0.0.1:9000/ | 03:32 |
SpamapS | curl: (52) Empty reply from server | 03:32 |
SpamapS | that can't be right | 03:33 |
SpamapS | so | 03:37 |
SpamapS | pip installing from a git checkout seems *not* to put a static dir under web | 03:37 |
SpamapS | http://paste.openstack.org/show/718143/ | 03:38 |
SpamapS | pip installed... but no statics | 03:38 |
tristanC | SpamapS: i think you need yarn pre-installed, see zuul/_setup_hook.py | 03:39 |
SpamapS | it is | 03:40 |
SpamapS | and all the static content is present | 03:40 |
SpamapS | In fact ln -s /opt/source/zuul/zuul/web/static /opt/venvs/zuul/lib/python3.5/site-packages/zuul/web/static fixed it | 03:41 |
tristanC | SpamapS: why would you prefer the console-stream urls out of the api/ path? | 03:44 |
SpamapS | tristanC: I would prefer that I can use location / | 03:49 |
SpamapS | without having to capture the tenant and re-pass it. | 03:49 |
SpamapS | It's ok, turns out I can do that, because I don't actually want static offload | 03:50
SpamapS | what's getting me now is why yarn.lock changed | 03:50 |
SpamapS | ahh need --frozen-lockfile | 03:50 |
SpamapS | and now that doesn't work either | 03:56 |
SpamapS | seems our yarn.lock is broken | 03:58 |
SpamapS | something went wrong | 04:30 |
SpamapS | I torched the venv and the next install created the dir | 04:30 |
SpamapS | oy, looks like I have to update all my webhooks too | 04:32 |
* SpamapS should try apps | 04:32 | |
tristanC | Nodepool rawhide packages are now available: https://apps.fedoraproject.org/packages/nodepool , f28 and f27 are pending updates | 04:47 |
tristanC | can't do Zuul because fedora is already running ansible-2.4/2.5 | 04:49 |
*** eandersson has quit IRC | 05:16 | |
*** eandersson has joined #zuul | 05:17 | |
*** JasonCL has joined #zuul | 05:31 | |
*** JasonCL has quit IRC | 05:36 | |
SpamapS | IIRC the move to 2.4 was slated for a "shortly after 3.0" status | 05:38 |
tristanC | yes, but then the same issue will happen with 2.5 and the next versions. It seems like this can be mitigated with Fedora modularity where a module can pin a package branch, e.g. ansible-2.3 | 05:43 |
tristanC | so i think we should wait for f28 or f29 to get a zuul module available for fedora users | 05:44 |
SpamapS | I actually would love to drive a feature in to zuul to support multiple versions. | 05:46 |
tristanC | that would also be convenient :-) | 05:47 |
*** JasonCL has joined #zuul | 06:15 | |
SpamapS | frankly I'd also like to drive streaming hooks into ansible | 06:19 |
SpamapS | so new versions aren't so problematic. :) | 06:19 |
SpamapS | Oh and ponies | 06:19 |
*** JasonCL has quit IRC | 06:19 | |
SpamapS | want ponies | 06:19 |
* SpamapS has upgraded our zuul.. yay | 06:19 | |
*** xinliang_ has joined #zuul | 07:27 | |
*** xinliang_ has quit IRC | 07:49 | |
*** xinliang_ has joined #zuul | 07:49 | |
*** xinliang_ has quit IRC | 07:51 | |
*** xinliang has joined #zuul | 07:54 | |
*** xinliang has quit IRC | 07:54 | |
*** xinliang has joined #zuul | 07:54 | |
*** JasonCL has joined #zuul | 08:57 | |
*** JasonCL has quit IRC | 09:07 | |
*** Wei_Liu has joined #zuul | 09:10 | |
*** JasonCL has joined #zuul | 10:00 | |
*** JasonCL has quit IRC | 10:24 | |
*** JasonCL has joined #zuul | 10:52 | |
*** JasonCL has quit IRC | 10:55 | |
*** JasonCL has joined #zuul | 10:55 | |
*** pwhalen has quit IRC | 11:46 | |
*** pwhalen has joined #zuul | 11:47 | |
*** pwhalen has joined #zuul | 11:47 | |
*** weshay_mod is now known as weshay | 12:12 | |
*** elyezer has joined #zuul | 12:13 | |
*** odyssey4me has quit IRC | 12:17 | |
*** odyssey4me has joined #zuul | 12:17 | |
*** sshnaidm is now known as sshnaidm|bbl | 12:32 | |
*** maeca has joined #zuul | 12:39 | |
*** maeca has quit IRC | 12:42 | |
*** gouthamr has joined #zuul | 12:47 | |
*** ymartineau has joined #zuul | 12:58 | |
*** ymartineau has quit IRC | 12:59 | |
*** ymartineau has joined #zuul | 13:00 | |
*** elyezer has quit IRC | 13:05 | |
*** elyezer has joined #zuul | 13:16 | |
mordred | SpamapS: leaving tenant out of console-stream urls would not make apache configs any better - the apache rewrite rule needed for console-stream is to transform the scheme and is needed for apache to proxy websockets properly aiui | 13:19 |
mordred | SpamapS: (although yes, given that console-stream is already parameterized via build we totally could have not put it under tenant) | 13:20
*** dkranz has joined #zuul | 13:21 | |
mordred | SpamapS: and yay for being upgraded! | 13:21 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul master: Upgrade from angularjs (v1) to angular (v5) https://review.openstack.org/551989 | 13:25 |
*** JasonCL has quit IRC | 13:45 | |
*** JasonCL has joined #zuul | 13:49 | |
*** evrardjp has quit IRC | 13:52 | |
*** JasonCL has quit IRC | 13:52 | |
*** JasonCL has joined #zuul | 13:53 | |
*** gouthamr has quit IRC | 13:53 | |
*** JasonCL has quit IRC | 13:54 | |
*** gouthamr has joined #zuul | 13:54 | |
*** elyezer has quit IRC | 13:58 | |
*** gouthamr has quit IRC | 14:00 | |
*** gouthamr has joined #zuul | 14:00 | |
*** JasonCL has joined #zuul | 14:01 | |
*** JasonCL has quit IRC | 14:08 | |
*** gouthamr has quit IRC | 14:09 | |
*** gouthamr has joined #zuul | 14:09 | |
*** elyezer has joined #zuul | 14:10 | |
*** gouthamr has quit IRC | 14:14 | |
*** sshnaidm|bbl is now known as sshnaidm | 14:16 | |
*** gouthamr has joined #zuul | 14:18 | |
*** JasonCL has joined #zuul | 14:18 | |
*** gouthamr has quit IRC | 14:19 | |
*** gouthamr has joined #zuul | 14:21 | |
*** JasonCL has quit IRC | 14:26 | |
*** JasonCL has joined #zuul | 14:27 | |
*** gouthamr has quit IRC | 14:34 | |
*** gouthamr has joined #zuul | 14:35 | |
*** elyezer has quit IRC | 14:40 | |
*** elyezer has joined #zuul | 14:42 | |
*** ymartineau has quit IRC | 14:42 | |
clarkb | that packaging conflict may make zuul (and I guess nodepool) better candidates for snaps or flatpak? | 14:44 |
corvus | ultimately we want to support more than one version of ansible simultaneously, so whatever solution we come up with should accommodate that. | 14:45
corvus | clarkb: istr you had the idea that zuul could manage the installation of ansible internally? | 14:46 |
clarkb | corvus: ya, using pyenv or whatever the new virtualenv in python3 is called | 14:46
SpamapS | there's a new one? | 14:46 |
clarkb | gives you a programmatic interface for making virtualenvs, with a stable API | 14:46
clarkb | so you can have the ansible 2.3 and 2.4 and 2.5 installs all in one zuul and have jobs pick the one they want | 14:47 |
SpamapS | ah that would be good then yeah | 14:47 |
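clarkb's suggestion above (zuul managing per-version ansible installs via the stdlib `venv` machinery) could be sketched roughly as below. The directory layout, function names, and the `~=` pinning are illustrative assumptions, not Zuul's actual code:

```python
import venv
from pathlib import Path

def ensure_ansible_venv(version: str, base: Path) -> Path:
    """Create (if needed) a venv directory reserved for one ansible series."""
    env_dir = base / f"ansible-{version}"
    if not env_dir.exists():
        # with_pip=False here to keep the sketch offline-friendly;
        # a real manager would use with_pip=True so it can install into it
        venv.EnvBuilder(with_pip=False).create(env_dir)
    return env_dir

def install_command(env_dir: Path, version: str) -> list:
    """The pip invocation that would pin the minor series inside that venv.

    '~=X.Y.0' lets pip pick the newest X.Y.Z patch release of that series.
    """
    return [str(env_dir / "bin" / "pip"), "install", f"ansible~={version}.0"]
```

With one such venv per series, a job asking for "ansible 2.4" just resolves to the `ansible-playbook` entrypoint inside `base/ansible-2.4/bin`.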
corvus | i don't know if that makes the fedora packaging problem better or worse. but it sounds workable from my pov. | 14:47 |
clarkb | I think that still goes nicely with e.g. flatpak because, if done right, all those venvs will live in the flatpak itself after install, and then when you uninstall zuul that all gets auto-cleaned up | 14:48
clarkb | but my exposure to that type of packaging is currently minimal | 14:48
clarkb | I tried it locally for some stuff before deciding a docker image or three was better for me | 14:48
*** acozine1 has joined #zuul | 14:49 | |
SpamapS | mordred: well the non-static-offload config in nginx is... pretty simple. :) | 14:56 |
corvus | SpamapS: show us with a doc change :) | 14:56 |
SpamapS | http://paste.openstack.org/show/718176/ | 14:57 |
SpamapS | good idea | 14:57 |
* SpamapS still finding all the stuff that moved | 14:58 | |
SpamapS | have to add a /t/ | 14:58 |
SpamapS | Or maybe I should whitelabel .. not sure we'll ever have two tenants at GD.. ;) | 14:58 |
fungi | presumably so long as zuul follows the fhs and drops those venvs somewhere sane like /var/lib/zuul/ansibleX.Y then cleanup at package removal should be fairly trivial to implement? | 14:58
corvus | SpamapS: https://zuul-ci.org/docs/zuul/admin/installation.html#web-deployment-options | 14:59 |
fungi | i don't see that snap-packs buy you much there, but i could be lacking in imagination | 14:59 |
SpamapS | Yeah right now I'm "Basic reverse proxy" | 14:59 |
clarkb | fungi: snaps and paks fully self contain the install so you rm the top level dir recursively and are done | 14:59 |
fungi | except distro package managers already have smarts to deal with cleaning up after packaged applications/services which create files in well-defined places | 15:00 |
fungi | e.g., for a deb you just stick a line in a file which says that the software it installs will also be putting files under /var/lib/whatever | 15:01 |
fungi | and that way it knows to remove that and its contents if you want to purge the package later | 15:02 |
clarkb | ya and you have to tell apt to purge but otherwise it tends to work | 15:02 |
fungi | well, that's a feature | 15:02 |
fungi | plenty of circumstances where you may want to (perhaps temporarily) uninstall a package but not have its data wiped | 15:02 |
SpamapS | If it's not "data", you can have it removed on regular removal. | 15:02 |
fungi | right | 15:02 |
SpamapS | And that would be preferable. | 15:03
SpamapS | But this gets into a weird space | 15:03 |
SpamapS | If you are wanting to be distro package managed.. | 15:03 |
SpamapS | I don't think you want python specialness in /var/lib | 15:03 |
SpamapS | And the distro package would ideally build all supported versions of Ansible in /usr/lib. | 15:03 |
SpamapS | and into the package | 15:04 |
SpamapS | And then if you want to get crazy and do non-distro ansible, cleaning those up is your problem. | 15:04 |
fungi | that assumes the people packaging ansible for the same platform are okay with the idea that they need to be able to have multiple working versions installed in parallel | 15:04 |
tristanC | supporting multiple ansible versions may be easily solved if the zuul-executor delegates the playbook execution to a remote process, e.g. a container that has the right ansible version pre-installed; that way there's no need to manage multiple venvs | 15:04
fungi | managing multiple container installs of ansible is easier (or even different at all) than managing multiple versions of ansible in virtualenvs? | 15:05 |
fungi | a virtualenv is just a very pythonesque "container" | 15:06 |
tristanC | assuming those version would be provided by your distributor instead of having zuul install stuff dynamically | 15:06 |
fungi | (i'm sure people who have preconceived docker-driven ideas about what a container is or isn't will argue with me on that point, but whatever) | 15:06
fungi | tristanC: in that case, you're saying your distro is more likely to give you multiple container-packaged versions of ansible but not the ability to install multiple versions of ansible on the system | 15:07 |
SpamapS | it's a python chroot ;) | 15:07 |
corvus | tristanC: we're not going to *require* containers | 15:08 |
fungi | sure, containers are chroots too, in my opinion | 15:08 |
fungi | with extra sauce | 15:08 |
SpamapS | chroots and cgroups and namespaces, oh my | 15:10 |
fungi | if there's no need to actually secure different ansible versions from encroaching on memory, network access, and so on for one another then the only actual benefit putting them in different containers is giving you is that they're in separate chroots | 15:13 |
tristanC | fungi: iiuc, that's how fedora is planning to support multiple versions of a single component, yes | 15:14
mordred | I'd say there are two overlapping concerns here a) how does zuul internally think about multiple versions of ansible and b) how are multiple versions of ansible installed/managed | 15:15 |
fungi | well, i suppose for people installing on fedora that might be a fine possibility to support, but i don't see us deciding that zuul will be a fedora-only application | 15:15 |
mordred | for b - it seems plausible that we might want to have both a 'zuul manages versions of ansible itself via virtualenv' and a 'zuul uses container system to fetch and execute specific versions of ansible' | 15:16 |
tristanC | fungi: it's unclear how a distribution can support installing multiple versions of, say, ansible | 15:17
fungi | tristanC: debian and friends manage to install multiple versions of all sorts of libraries and applications if the package maintainers want to be responsible for multiple versions | 15:17 |
fungi | however, i don't think we should count on either of those possibilities | 15:19 |
tristanC | but isn't this going to cause a conflict on /usr/lib/python3.6/site-packages/ansible ? | 15:19
fungi | (that a distro will have available container packages or normal packages for the versions of ansible you need) | 15:19 |
fungi | tristanC: well, for starters it doesn't _have_ to put them in the default python path | 15:20 |
corvus | as app developers, we can implement clarkb's suggestion. if distro packagers have another solution, we can look at supporting that (perhaps through configuration options) | 15:20 |
mordred | corvus: yah | 15:20 |
pabelanger | +1 ansible in a virtualenv works great, do that locally myself | 15:21 |
fungi | and yeah, if zuul has a feature to create ephemeral virtualenvs for multiple ansible versions, i don't see any conflict between that and adding configuration to tell it to not do that and here are the explicit paths to the various ansible entrypoints it wants | 15:22 |
corvus | fungi: ++ | 15:22 |
tristanC | fungi: that would work | 15:23
*** gouthamr has quit IRC | 15:23 | |
fungi | and that would, unless i'm missing something, make it entirely possible for the deployer to bring their own ansibles (whether that's in the form of distro packages, container pass-through, whatever) | 15:24 |
tristanC | i think it's important to consider distribution integration as it simplifies management; for example, the package manager takes care of pulling security fixes. as far as i can tell, pip/virtualenv doesn't let you install security fixes | 15:24
fungi | right, pip/virtualenv would basically be installing whatever the latest Z is in each of X.Y.Z | 15:25 |
*** pbrobinson_PTO is now known as pbrobinson | 15:25 | |
fungi | and trusting that if upstream is security-supporting those versions then they're pushing new releases to pypi as soon as they land a security fix | 15:25 |
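fungi's point, that pinning an X.Y series means pip effectively tracks the newest Z patch release, can be illustrated with a toy resolver (the version list and function name are made up for the example):

```python
def latest_patch(pinned: str, available: list) -> str:
    """Given a pinned 'X.Y' series, return the newest 'X.Y.Z' release available.

    Mirrors what a pip '~=X.Y.0' specifier resolves to; not pip's real resolver.
    """
    def key(v):
        # numeric comparison, so '2.4.10' sorts after '2.4.4'
        return tuple(int(part) for part in v.split("."))
    matching = [v for v in available if v.startswith(pinned + ".")]
    if not matching:
        raise ValueError(f"no release in the {pinned} series")
    return max(matching, key=key)
```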
fungi | if the deployer wants to use a custom-patched ansible for whatever reason, then they should still be able to tell zuul where to find that | 15:27 |
*** gouthamr has joined #zuul | 15:27 | |
tristanC | corvus: no need to requires container, but it seems like zuul-executor could have a driver to leverage k8s and completely isolate playbook run | 15:27 |
corvus | tristanC: i hope to be prepared to discuss containers later in the week after i write up notes from all the various discussions we've had. it's not fruitful to have an irc discussion about it before we have a common reference point. | 15:29 |
fungi | huh... this looks remarkably zuulish: https://concourse-ci.org/ | 15:30 |
tristanC | fungi: being able to tell zuul to not create virtualenv and use a provided ansible module would be ideal indeed | 15:30 |
mordred | fungi: yah - it seems very much what v3 would have been like if we'd not used ansible for our job content | 15:31 |
fungi | oh, and it's written in go i guess | 15:33 |
*** JasonCL has quit IRC | 15:37 | |
*** bhavik1 has joined #zuul | 15:43 | |
*** gouthamr has quit IRC | 15:48 | |
*** gouthamr has joined #zuul | 15:49 | |
*** bhavik1 has quit IRC | 15:52 | |
*** JasonCL has joined #zuul | 15:55 | |
*** patriciadomin has quit IRC | 15:56 | |
*** patriciadomin has joined #zuul | 15:58 | |
*** JasonCL has quit IRC | 15:59 | |
*** JasonCL has joined #zuul | 16:00 | |
*** JasonCL has joined #zuul | 16:01 | |
openstackgerrit | Merged openstack-infra/zuul master: zuul autohold: allow operator to specify nodes TTL https://review.openstack.org/543403 | 16:01 |
*** gouthamr has quit IRC | 16:07 | |
*** gouthamr has joined #zuul | 16:09 | |
*** JasonCL has quit IRC | 16:09 | |
corvus | now that we're post-release, i am going through the unreviewed changes in chronological order. i'm up to mid-february now. | 16:10 |
corvus | i can only spend a few hours a day reviewing changes, so it will be a while before i'm caught up. | 16:11 |
*** JasonCL has joined #zuul | 16:12 | |
Shrews | corvus: so, what's your definition of "user visible" changes (i think that was the phrase from your previous email)? b/c i would expect that 543403 would be a user visible change that deserves some documentation before we merge it | 16:14 |
Shrews | just want to make sure we're on the same page with that stuff | 16:15 |
Shrews | oh, maybe that's self documenting from the help options | 16:17 |
Shrews | yep, nm. i forgot how the client doc worked :) | 16:17 |
corvus | Shrews: good question -- i did evaluate that -- it has the same documentation as the rest of the autohold command (the cli args are documented, and that gets automatically pulled into the html docs). i didn't think it warranted a release note because i don't think it would really change an operator's behavior. that's debatable, but i think this case is borderline. | 16:17 |
corvus | Shrews: otoh, i think adding that feature to nodepool *would* warrant a release note, because that might prompt folks to go enable the setting. except it landed pre-release, so that ship has sailed. :) | 16:18 |
Shrews | *nod* | 16:19 |
corvus | (in fact, mostly today i've been -1ing changes with "add a release note". even a change which had one. oops) | 16:19 |
pabelanger | Shrews: mind looking at https://review.openstack.org/557942/ and https://review.openstack.org/558015/ updates for nodepool-dsvm testing | 16:20 |
Shrews | pabelanger: certainly | 16:20 |
*** JasonCL has quit IRC | 16:23 | |
*** jbryce has joined #zuul | 16:26 | |
clarkb | thinking about that does https://review.openstack.org/#/c/557859/ need a release note? I suppose a bugfix release note won't hurt | 16:26 |
clarkb | in general do we want release notes for bugfixes or just for features? | 16:26 |
corvus | clarkb: i'm mostly concerned with features; i think a bugfix note for that one might be good though; people may have run into problems with that (and perhaps disabled reporters to mitigate it). | 16:28 |
clarkb | corvus: ok I'll push a new patchset shortly with a bugfix note | 16:29 |
openstackgerrit | Merged openstack-infra/nodepool master: Add debian-stretch to nodepool-functional-py35-debian-src https://review.openstack.org/557942 | 16:32 |
*** gouthamr has quit IRC | 16:33 | |
openstackgerrit | Merged openstack-infra/nodepool master: Enable AFS mirrors for ubuntu-bionic testing https://review.openstack.org/558015 | 16:37 |
openstackgerrit | Clark Boylan proposed openstack-infra/zuul master: Report to all reporters even if one fails https://review.openstack.org/557859 | 16:53 |
clarkb | now with releasenotes | 16:53 |
Shrews | pabelanger: Why do we need https://review.openstack.org/555103 ? It seems like we are testing DIB and not nodepool there. | 17:04 |
pabelanger | Shrews: yah, we are; it is to help catch issues before they hit production nodepool.o.o, since diskimage-builder landed a change that broke centos-7 a few weeks ago. In this case, we're using nodepool as the test runner for testing glean and diskimage-builder | 17:08
openstackgerrit | Merged openstack-infra/nodepool master: Reduce logging in _cleanupCurrentProviderUploads function https://review.openstack.org/557791 | 17:08 |
Shrews | pabelanger: this can't be built into dib/glean testing itself? | 17:09 |
pabelanger | Shrews: ATM no, we use nodepool / devstack to boot those images. We could maybe start to think about having each project run their own nodepool.yaml / check_devstack_plugins scripts | 17:11 |
clarkb | Shrews: pabelanger fwiw I think this integration testing has been invaluable to all the projects involved | 17:11 |
pabelanger | but it is helpful for openstack-infra to try and gate on common elements we use | 17:11 |
clarkb | we no longer guess whether things work or have to do a bunch of testing locally | 17:11
clarkb | so I'm all for just enabling whatever helps in it | 17:11 |
pabelanger | yah, agree | 17:12 |
Shrews | yeah, that's fine. i was just genuinely curious as to why those projects didn't catch it themselves | 17:12 |
odyssey4me | Shrews fair point | 17:13 |
clarkb | Shrews: because to make sure glean works you need an openstack essentially, then you have to boot and check things worked (which nodepool already does) | 17:13
clarkb | glean doesn't work outside of openstack without reimplementing a bunch of nova | 17:13 |
odyssey4me | perhaps it'd make sense to have some cross-repo/project testing going on? | 17:13 |
clarkb | odyssey4me: that is what is happening | 17:14 |
Shrews | pabelanger: does that need updated with your new debian image? | 17:15 |
odyssey4me | clarkb perhaps I'm missing something, but that looks like a test implemented in the nodepool repo to validate a dib feature? | 17:15 |
odyssey4me | would it not make better sense to have the test be in the dib repo? | 17:16 |
clarkb | odyssey4me: the test is implemented in nodepool to validate nodepool works. It just happens that because nodepool depends on dib it also validates dib works | 17:16
clarkb | odyssey4me: I think this is fine | 17:16 |
clarkb | odyssey4me: and the best way to check that dib image actually works is to boot it, make sure glean worked, and then ssh into it. So you wouldn't be making the test any smaller if it was in dib itself | 17:16 |
odyssey4me | clarkb a test is definitely better than nothing :) but closest to the source is the best test | 17:16 |
*** JasonCL has joined #zuul | 17:17 | |
clarkb | it would do all the same things of build image, upload to openstack, boot it and ssh in. Basically all the things nodepool does for us | 17:17 |
clarkb | if we want to reimplement nodepool in dib's test suite I'd be sad | 17:17 |
odyssey4me | agreed, however doing a cross-repo test might be worth it | 17:18 |
clarkb | it is a cross repo test... | 17:18 |
odyssey4me | oh, sorry - I must be missing the cross-repo thing from dib to nodepool | 17:18 |
clarkb | odyssey4me: the one test is used by dib, glean and nodepool | 17:19 |
clarkb | odyssey4me: as they are relatively tightly coupled and depend on each other to function | 17:19 |
odyssey4me | currently it looks like a nodepool patch tests dib... but not the other way around | 17:19 |
clarkb | dib runs the same tests and glean too | 17:19 |
odyssey4me | perhaps I'm missing something, let me look more closely at the repo tests - sorry, didn't mean to meddle | 17:20 |
*** JasonCL has quit IRC | 17:20 | |
*** JasonCL has joined #zuul | 17:20 | |
pabelanger | Shrews: Yah, likely, I can push up a patch shortly | 17:20 |
*** JasonCL has quit IRC | 17:21 | |
*** JasonCL has joined #zuul | 17:24 | |
*** JasonCL has quit IRC | 17:25 | |
*** JasonCL has joined #zuul | 17:34 | |
*** JasonCL has quit IRC | 17:34 | |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool master: Test growroot in boot tests https://review.openstack.org/555103 | 17:39 |
pabelanger | clarkb: Shrews: ^rebased to pick up debian-stretch | 17:40
*** jlk has quit IRC | 17:46 | |
*** jlk has joined #zuul | 17:46 | |
jlk | o/ | 17:47 |
*** JasonCL has joined #zuul | 17:47 | |
corvus | jlk: howdy! it looks like tobiash's fix was merged: https://github.com/sigmavirus24/github3.py/pull/817 are you going to do another release to pick that up? | 18:10 |
tobiash | \o/ | 18:10 |
jlk | Sure. Is this a "today" thing or "this week"? I can take a look at what else is in the pipeline for bugfixes/changes over there too. | 18:20 |
corvus | jlk: my read of the situation is more like 'week' | 18:21 |
*** JasonCL has quit IRC | 18:22 | |
*** JasonCL has joined #zuul | 18:41 | |
*** JasonCL has quit IRC | 18:43 | |
*** sshnaidm has quit IRC | 18:43 | |
*** JasonCL has joined #zuul | 18:43 | |
*** JasonCL has quit IRC | 18:45 | |
*** qwc_ has left #zuul | 18:54 | |
*** JasonCL has joined #zuul | 19:05 | |
*** JasonCL has quit IRC | 19:06 | |
*** JasonCL has joined #zuul | 19:16 | |
*** JasonCL has quit IRC | 19:20 | |
*** JasonCL has joined #zuul | 19:21 | |
*** JasonCL has quit IRC | 19:22 | |
*** sshnaidm has joined #zuul | 19:22 | |
*** JasonCL has joined #zuul | 19:22 | |
*** JasonCL has quit IRC | 19:24 | |
openstackgerrit | David Shrewsbury proposed openstack-infra/zuul master: Reorganize "Zuul From Scratch" document https://review.openstack.org/556988 | 19:31 |
*** JasonCL has joined #zuul | 19:35 | |
*** JasonCL has quit IRC | 19:37 | |
*** JasonCL has joined #zuul | 19:38 | |
*** harlowja has joined #zuul | 19:38 | |
*** JasonCL has quit IRC | 19:40 | |
*** JasonCL has joined #zuul | 19:41 | |
*** gouthamr has joined #zuul | 19:42 | |
*** JasonCL has quit IRC | 19:43 | |
*** dtruong has quit IRC | 19:59 | |
*** dtruong has joined #zuul | 20:11 | |
*** maeca has joined #zuul | 20:31 | |
*** JasonCL has joined #zuul | 20:58 | |
*** JasonCL has quit IRC | 21:02 | |
SpamapS | hm | 21:03 |
SpamapS | http://paste.openstack.org/show/718213/ | 21:05 |
SpamapS | Is anybody beside me testing/using webhook github support? | 21:05 |
SpamapS | seems like something is borken | 21:05 |
clarkb | webhook being distinct from the application? | 21:05
clarkb | I think we use the application, and so do mrhillsman and tobiash | 21:06
mrhillsman | yep | 21:06 |
pabelanger | SpamapS: tobiash fixed that | 21:08 |
corvus | SpamapS, pabelanger: that's fixed in https://github.com/sigmavirus24/github3.py/pull/817 which is merged but not released | 21:09 |
SpamapS | OH | 21:17 |
SpamapS | ty, will pull that in | 21:18 |
*** jbryce has quit IRC | 21:18 | |
*** jbryce has joined #zuul | 21:19 | |
*** gouthamr has quit IRC | 21:30 | |
*** JasonCL has joined #zuul | 21:45 | |
*** acozine1 has quit IRC | 21:49 | |
jhesketh | Morning | 21:57 |
pabelanger | clarkb: mind a review of growroot testing: https://review.openstack.org/555103/ cc ianw | 21:58 |
corvus | pabelanger: why's that in nodepool? | 21:59 |
Shrews | lol | 21:59 |
Shrews | i asked the same thing earlier :) | 21:59 |
mordred | corvus: Shrews asked the same thing earlier | 21:59 |
mordred | Shrews: jinx | 21:59 |
corvus | mordred: oh, i thought you were translating. :) | 22:00 |
corvus | Shrews: it's a good question | 22:00 |
corvus | mordred: tell Shrews i think it's a good question | 22:00 |
jlk | Is there a meeting today? | 22:00 |
corvus | jlk: like right now | 22:00 |
jlk | okay I thought so | 22:00 |
corvus | in #openstack-meeting-alt | 22:00 |
mordred | Shrews: tell corvus the scrollback on the topic is worth reading | 22:00 |
clarkb | basically nodepool and dib and glean are integration tested together. Because nodepool depends on both dib and glean working | 22:02 |
clarkb | also nodepool does all the things you need to test that dib + glean images actually work | 22:02 |
clarkb | which is why they are coupled together | 22:02
*** JasonCL has quit IRC | 22:08 | |
SpamapS | :( | 22:09 |
SpamapS | http://paste.openstack.org/show/718214/ | 22:09 |
SpamapS | and now that | 22:09 |
*** elyezer has quit IRC | 22:11 | |
jlk | a 404 on setting the status? | 22:13 |
jlk | that's... unexpected. | 22:13 |
SpamapS | yeah I dunno what's up there | 22:14 |
SpamapS | I left my zuul stale too long... just deployed the last 40 or so days of changes and who knows what broke :-P | 22:14 |
jlk | oof. | 22:16 |
jlk | It might be neat if we'd put out what the actual URL was that it tried to hit | 22:17 |
Shrews | SpamapS: stale zuuls are never as tasty as fresh zuuls | 22:18 |
*** elyezer has joined #zuul | 22:19 | |
SpamapS | indeed.. | 22:19 |
SpamapS | really I need a zuul test suite | 22:19 |
SpamapS | something that can like, create a new random org, some random repos, fill them with zuul config, spin up zuul, do some PR's, etc. etc. | 22:19 |
*** JasonCL has joined #zuul | 22:30 | |
*** maeca has quit IRC | 22:33 | |
*** gouthamr has joined #zuul | 22:37 | |
corvus | clarkb, pabelanger, Shrews: i think that change in particular is a red-flag. nothing about nodepool cares about growroot. it's not really reviewable by the nodepool team. | 22:43 |
Shrews | corvus: that's sort of how i felt, but i didn't have a good argument to counter the "coupled repos" thing | 22:44 |
clarkb | corvus: what I tried to explain earlier today is that it's effectively an integration test between nodepool, dib, and glean. One of the benefits of it being run that way is nodepool depends on dib and glean, and dib and glean need to do the steps that nodepool takes to be tested properly | 22:44
clarkb | corvus: so in the test it's fairly tightly coupled together. If we want to move the test definition that's fine, we can continue to run it against nodepool | 22:45
clarkb | but I think having a single integration test for those three projects has been valuable | 22:45 |
corvus | clarkb: yeah, i'm in favor of an integration test | 22:45 |
clarkb | (technically I think shade is also integration tested there but it doesn't need the dib and glean stuff) | 22:45 |
jlk | corvus: mordred: Question for you two. If I wanted to have Zuul's tests run when github3.py PRs were opened, and have the results reported as status, what's the way to accomplish that? I think you've done the same for ansible/ansible, and I couldn't work it out in my head while on my bike what the right arrangement is. | 22:45
corvus | clarkb: i wonder though if we can construct it such that nodepool is just performing a basic test (boot + ssh), and dib builds on that to test more things? | 22:46 |
clarkb | I don't think we should rewrite the test to not use nodepool and I don't think nodepool should stop running the test | 22:46 |
corvus | jlk: i think we might even have documentation for this... | 22:46 |
SpamapS | jlk: actually it must have been related to tobias's fix | 22:47 |
corvus | jlk: https://docs.openstack.org/infra/manual/drivers.html#outbound-third-party-testing | 22:47 |
SpamapS | pulling that in fixed the status 404's too | 22:47 |
jlk | oh neat. | 22:47 |
jlk | (to both) | 22:47 |
corvus | jlk: though, as i think about it -- i'm not sure we're testing github3.py enough for that to be useful yet | 22:47 |
SpamapS | Probably something bad coming back in the response propagating to the reporter | 22:47 |
clarkb | corvus: we've found that if growroot breaks it breaks dib built nodepool images. So I'm not personally too concerned with having that check fail in nodepool if it is broken | 22:47 |
corvus | jlk: i think zuul's tests mock out too much of github3.py | 22:47 |
clarkb | corvus: we can define it elsewhere if that helps get the right reviewers looking at it though | 22:48 |
jlk | corvus: that's... probably an accurate observation. | 22:48 |
SpamapS | What if we did what I suggested | 22:48 |
SpamapS | an integration test | 22:48 |
corvus | jlk: maybe we can change them to use betamax or something? | 22:48 |
pabelanger | I agree, we are likely over reaching by adding growroot into nodepool-dsvm testing, but it has been very helpful to catch issues and new distro support using nodepool. Moving that into a nodepool.yaml file specific to DIB or glean, seems sane | 22:48 |
SpamapS | that basically ran with a semaphore on a closed org on github.com | 22:48 |
clarkb | pabelanger: I think I'm saying it's not an overreach for the integration test, since nodepool images break if growroot breaks | 22:48 |
clarkb | pabelanger: I think we should continue to test that | 22:49 |
clarkb | pabelanger: on nodepool | 22:49 |
corvus | SpamapS, jlk: yeah, as an addition to the unit tests (ie, its own job) i think that could be useful. | 22:49 |
jlk | yeah at one time I wanted to use betamax for this | 22:49 |
SpamapS | yeah a fake github server would do too | 22:49 |
jlk | but... it would have required a lot of changing the test structure | 22:50 |
clarkb | pabelanger: but maybe we move the test definition if one of the concerns is nodepool reviewers won't know how to properly review growroot related things | 22:50 |
clarkb | pabelanger: and then nodepool continues to run that same job | 22:50 |
pabelanger | clarkb: I'm happy for it to live some place :) so whatever everybody is good with, works for me | 22:50 |
jlk | betamax uses a real github server (once) to record the interactions | 22:50 |
SpamapS | https://github.com/mzabriskie/mock-github-api | 22:50 |
jlk | and then you can replay them over and over without connecting to a real server with real credentials | 22:50 |
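The record-once, replay-forever idea jlk describes can be illustrated with a toy, stdlib-only sketch. This is not betamax's actual API; the `FakeSession` class, its cassette format, and the recorded URL/body here are purely hypothetical, meant only to show why replayed tests need no network or credentials.

```python
import json

class FakeSession:
    """Replays recorded responses instead of touching the network,
    the way a betamax cassette replay stands in for a requests session."""

    def __init__(self, cassette):
        self.cassette = cassette  # {(method, url): recorded response body}

    def get(self, url):
        # Look up the canned response; a real replayer would also match
        # headers, bodies, and raise if no recorded interaction matches.
        return self.cassette[('GET', url)]

# "Recorded" once (hand-written here; betamax captures this from a real run).
cassette = {
    ('GET', 'https://api.github.com/repos/sigmavirus24/github3.py'):
        json.dumps({'full_name': 'sigmavirus24/github3.py'}),
}

session = FakeSession(cassette)
body = json.loads(session.get(
    'https://api.github.com/repos/sigmavirus24/github3.py'))
print(body['full_name'])  # prints sigmavirus24/github3.py
```

Every replayed run hits only the cassette, which is why github3.py's CI can run without real API credentials.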
clarkb | if we take that approach, either project-config or dib probably works fine | 22:50 |
corvus | jlk, SpamapS: there's something about the idea of zuul doing this against live github with secrets that i find pretty compelling. :) | 22:50 |
pabelanger | clarkb: yah, that might work | 22:50 |
SpamapS | corvus: me too :) | 22:50 |
SpamapS | especially if github wouldn't hate us for making random github orgs and deleting them on every check. | 22:51 |
clarkb | corvus: SpamapS just have to be careful of the 5k per hour api limit | 22:51 |
corvus | jlk, SpamapS: anyway, either, or both of those (betamax / live) would be useful and in-scope for openstack's zuul to run against github3.py prs | 22:51 |
jlk | the amount of noise that the tests would create is but a drop in the ocean | 22:51 |
clarkb | the problem isn't the noise its that github does actively defend itself | 22:52 |
corvus | clarkb: judicious use of the files: job attr on our part can help with that | 22:52 |
clarkb | corvus: ya that may be one way to address it | 22:52 |
corvus | also, once we're in the context of an org, we can use an app install for many of our api calls | 22:53 |
jlk | with betamax you can make use of an app-installed project to record the interactions | 22:54 |
jlk | it'll give you the larger API limit, and you'd only hit them when you record | 22:54 |
jlk | after that, there's no actual API hit | 22:54 |
corvus | clarkb: is growroot so fundamental it's expected to be in every nodepool use of dib? i'm trying to get at whether we're really testing nodepool here, or we're testing openstack's dib images. | 22:54 |
jlk | github3.py's CI jobs never actually hit the github API | 22:54 |
clarkb | corvus: yes if you don't use growroot your / is only as big as your diskimage so you can't really write anything | 22:54 |
jlk | (unless you delete a betamax recording or need to create a new one) | 22:55 |
clarkb | corvus: this is big enough for the old "can I ssh in" test, which is why it worked without that in the past, but the real world requires you have some disk, to copy zuul repos for example | 22:55 |
clarkb | corvus: I don't really see it as openstack specific, if you make a dib image you want growroot | 22:56 |
clarkb | or you'll have a full / and things will break when you use the image | 22:56 |
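The growroot step clarkb is describing amounts to roughly the following at boot time (a hedged pseudocode sketch; the actual dib element, device names, and filesystem tool all vary by image and distro):

```
growpart /dev/vda 1     # expand the root partition to fill the attached disk
resize2fs /dev/vda1     # grow the filesystem into the enlarged partition
```

Without something like this, / stays at the size the image was built at, no matter how large a disk the cloud attaches.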
corvus | clarkb: okay, i'm convinced this stands on its own as an enhancement to the nodepool dib tests. | 22:56 |
jlk | corvus: so back to the question | 22:57 |
clarkb | corvus: and the reason for this is you want the images themselves to be as small as possible while still taking advantage of whatever disk you put them on | 22:57 |
clarkb | corvus: are you good with the change as is then without moving the job around or having tiers of that integration job? | 22:58 |
clarkb | oh cool, you approved it; I guess that answers that question. Thank you | 22:59 |
corvus | clarkb: yep, i +3d it. i think we might want to think about rejiggering it on principle, but i'm fine with the current state. i'd push back harder on a change to nodepool that added, i dunno, a devstack element or something. | 22:59 |
clarkb | corvus: ya I think that is entirely reasonable | 22:59 |
jlk | corvus: if I were to A) install the app on github3.py, B) add it to project-config/zuul/main.yaml, and C) add an entry for it in project-config/zuul.d/projects.yaml for third-party-check, then I'd list certain zuul jobs to run, right? And those jobs would have to know that the github3.py that zuul's requirements lists has to come from the zuul git repos instead of the pip mirror, right? | 22:59 |
jlk | that's the part I was also stuck on. How does the job magically take advantage of the speculative github3.py state | 22:59 |
corvus | jlk: if it's the zuul unit tests, for example, if the job says "required-projects: sigmavirus24/github3.py" it will all magically happen | 23:00 |
corvus | jlk: that's because the tox roles look at that variable and set up what we call "sibling" repos based on it | 23:00 |
corvus | jlk: (if it finds that a required-project is in requirements.txt, it automatically installs it from the zuul-prepared source checkout) | 23:01 |
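A hedged sketch of what the job definition corvus describes might look like. The job name tox-py35-on-zuul appears later in this discussion, but the parent and exact layout here are assumptions, not copied from project-config:

```yaml
# Sketch only: run zuul's py35 unit tests against a github3.py change,
# with the sibling checkout substituted for the pip-installed release.
- job:
    name: tox-py35-on-zuul
    parent: tox-py35        # assumed parent; not confirmed in the log
    required-projects:
      - sigmavirus24/github3.py
```

With required-projects set, the tox roles detect that github3.py is in zuul's requirements and install it from the zuul-prepared checkout instead of PyPI.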
jlk | I see | 23:01 |
jlk | how much of that is Zuul roles and not OpenStack roles? | 23:01 |
jlk | like, is this a reusable pattern for projects outside of OpenStack? | 23:01 |
clarkb | I want to say that is in the base tox role | 23:01 |
jlk | (and what would we do for !Python) | 23:01 |
pabelanger | yah, tox role in zuul-jobs | 23:02 |
corvus | jlk: of course, as discussed, the unit test jobs won't actually be effective, but if we did the betamax thing, it would work. any new jobs (like the live github test) that used tox could also take advantage of it easily. non-tox would require a teensy bit of work. | 23:02 |
corvus | jlk, pabelanger, clarkb: yep, it's all generic in zuul-jobs. the only thing this bit of magic requires is the use of pbr. | 23:02 |
corvus | jlk, pabelanger, clarkb: so basically, if you're a python project that uses pbr and tox, if you use the tox roles/jobs from zuul-jobs, you get this for free. | 23:03 |
openstackgerrit | Merged openstack-infra/nodepool master: Test growroot in boot tests https://review.openstack.org/555103 | 23:03 |
jlk | okay | 23:03 |
jlk | and it's only the "thing I want to test" that needs pbr, in this case the zuul project. github3.py doesn't need pbr | 23:03 |
corvus | jlk: correct | 23:03 |
corvus | at least, i'm pretty sure about the pbr thing. i'm double checking | 23:04 |
jlk | the !Python extra work would be to figure out how to replicate this set up, eg for Ruby be able to install the gem from source checkout rather than from the gem servers | 23:04 |
clarkb | it may not even need pbr; iirc the implementation installs the tox env using the deps-only install command | 23:04 |
corvus | actually, i may have lied about pbr | 23:04 |
clarkb | then it does a for loop for each required project and installs them over the top | 23:04 |
clarkb | pbr shouldn't be needed to do those two steps | 23:04 |
jlk | ah | 23:04 |
corvus | clarkb: yeah, so i think we only require tox | 23:04 |
clarkb | then it runs the tox env telling it to not install any deps | 23:05 |
jlk | so tox installs everything once, then installs all the potentially speculative ones over the top | 23:05 |
jlk | THEN installs the project (zuul) itself? | 23:05 |
clarkb | jlk: ya tox does the project (zuul) as the step prior to running test commands | 23:05 |
corvus | https://zuul-ci.org/docs/zuul-jobs/roles.html#role-tox | 23:05 |
clarkb | thats a built in tox behavior of splitting dep installs from project install | 23:05 |
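The "install siblings over the top" loop clarkb describes can be sketched in stdlib-only Python. The function name and data shapes here are illustrative, not the real implementation, which lives in zuul-jobs' tox_install_sibling_packages.py; real requirement parsing also handles extras, markers, and URL requirements that this toy regex ignores.

```python
import re

def siblings_to_install(requirements_text, prepared_checkouts):
    """Return {name: checkout_path} for each requirement that zuul has a
    prepared source checkout for; those get pip-installed over the top."""
    wanted = {}
    for line in requirements_text.splitlines():
        line = line.split('#')[0].strip()  # drop comments and blanks
        if not line:
            continue
        # Crude name extraction: cut at the first version/extras marker.
        name = re.split(r'[<>=!~\[;]', line)[0].strip().lower()
        if name in prepared_checkouts:
            wanted[name] = prepared_checkouts[name]
    return wanted

# Example: zuul's requirements mention github3.py, and zuul prepared a
# speculative checkout of it (path below is illustrative).
reqs = "pbr>=1.1.0\ngithub3.py>=1.1.0\nPyYAML>=3.10\n"
checkouts = {'github3.py': '/home/zuul/src/github.com/sigmavirus24/github3.py'}
print(siblings_to_install(reqs, checkouts))
```

Only github3.py matches a prepared checkout, so only it would be reinstalled from source before tox runs the tests.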
corvus | http://git.zuul-ci.org/cgit/zuul-jobs/tree/roles/tox/library/tox_install_sibling_packages.py | 23:06 |
corvus | and yeah, it has pbr and non-pbr support in there | 23:07 |
jlk | ah cool, and that's just to discover the name of the thing | 23:08 |
jlk | I wonder if you can slap a betamax recording on a whole series of interactions... | 23:08 |
clarkb | whats neat is setuptools supports setup.cfg now too | 23:08 |
* clarkb checks if it supports that name query | 23:09 | |
jlk | but not in a pbr compatible way IIRC | 23:09 |
clarkb | jlk: overall that is correct but the name bit may be compat | 23:09 |
clarkb | (its really annoying that it was implemented in a non compat way) | 23:09 |
clarkb | metadata.name is used the same way with setuptools | 23:10 |
clarkb | so it should support both setuptools and pbr setup.cfg files | 23:10 |
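For example, both pbr and modern setuptools read the package name from the same place in setup.cfg, which is the compatible bit clarkb is pointing at (minimal sketch; the package name is hypothetical):

```ini
[metadata]
name = mypackage
```

The divergence is in the other sections (entry points and the like), which the two tools spell differently.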
corvus | jlk: in fact, all you'd need is this project stanza: http://paste.openstack.org/show/718216/ | 23:10 |
corvus | jlk: tox-py35-on-zuul is an already existing job which is "run zuul's unit tests on changes to this other random repo" | 23:11 |
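The paste corvus links has since expired; a hedged guess at the shape of the stanza, assembled from the pipeline and job names mentioned in this conversation (layout assumed, not recovered from the paste):

```yaml
- project:
    name: sigmavirus24/github3.py
    third-party-check:
      jobs:
        - tox-py35-on-zuul
```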
jlk | interesting! | 23:11 |
corvus | we run it on changes to zuul-jobs | 23:11 |
jlk | I wish we could see what calls, if any, the github3.py library gets during that test | 23:12 |
jlk | and how much is mocked out | 23:12 |
clarkb | fwiw the gerrit testing is fairly robust and it just uses a fake that doesn't even set up an ssh connection iirc | 23:15 |
clarkb | so betamax'ing it is probably sufficient? | 23:15 |
jlk | betamax records python requests calls, which github3.py makes. If we could mount a cassette and then call something in zuul that would do a bunch of work against the API, we should be able to record all of it | 23:15 |
corvus | there's something that the current approach (which is to emulate gerrit/github behavior) gives us -- the ability to construct new scenarios without interacting with gerrit or github. basically, we've modeled the behavior of the remote services so that we can say "pretend there are 2 changes and then one of them ..." | 23:18 |
fungi | clarkb: jlk: when setup.cfg support was added to setuptools they mostly used or at least aliased the same parameter names pbr used, and for newer things i've added in setuptools i've submitted matching support in pbr | 23:18 |
corvus | we could do all of that with betamax, but it'd be extra work on each new test to record a new cassette | 23:18 |
clarkb | fungi: if you read the docs they explicitly call out a bunch of stuff that isn't supported. The entirety of metadata should be, though | 23:18 |
jlk | correct | 23:18 |
clarkb | fungi: entrypoints and such are not | 23:18 |
jlk | there's the recording upfront cost | 23:18 |
clarkb | corvus: jlk how difficult is it to get betamax to represent a failure behavior that may not always be present in the upstream tool? | 23:19 |
clarkb | would you have to hand-edit it in that case? | 23:19 |
fungi | clarkb: yeah, there's not 100% alignment but my point was they went out of their way to avoid making things completely different from pbr | 23:19 |
corvus | so we might want to consider that we really have two use cases -- the current system handles modeling zuul behaviors against remote systems, but we *also* want to verify our ability to interact with them. so we may want to think about a new set of tests explicitly for that, rather than thinking of the current fakes as a limitation. | 23:20 |
jlk | clarkb: hrm. Can you give an example? | 23:21 |
corvus | (of course, we can use most of the existing test infrastructure to make that easy -- just copy the tests we want and don't swap in the fake github on those) | 23:21 |
jlk | ah, ZuulTestCase() has a configure_connections, which has a getGithubConnection() which sets up the fake. | 23:22 |
jlk | So we'd have to record a few things | 23:23 |
jlk | We'd have to record the initial bringup, where Zuul logs into GitHub and iterates over the repos in the project list | 23:23 |
jlk | and also record various activities like "respond to a PR open event" and all the interaction that causes | 23:24 |
clarkb | jlk: github 500s you or whatever response is due to api ratelimit for example | 23:24 |
jlk | That could all be in a single "test case", or maybe a whole class that gets a single cassette | 23:24 |
clarkb | Shrews: left some comments on the ZfS reorg change | 23:25 |
jlk | clarkb: OIC. That is hard to set up; I'd say you'd want to manually mock that out | 23:25 |
clarkb | jlk: ya that was my worry. There tend to be a small number of cases that are exceptional yet, over time, expected, and those would be good to test for | 23:26 |
jlk | clarkb: as corvus says though, we shouldn't think about moving our existing tests to be betamaxed. We could add to our existing tests a scenario in which a 500 is returned instead of some well formatted json. | 23:27 |
clarkb | ya | 23:28 |
jlk | and we should record some NEW tests | 23:28 |
*** elyezer has quit IRC | 23:29 | |
*** elyezer has joined #zuul | 23:36 | |
*** gouthamr has quit IRC | 23:41 | |
*** elyezer has quit IRC | 23:43 | |
*** elyezer has joined #zuul | 23:44 | |
jhesketh | corvus: a while ago we were discussing having zuul handle gearman worker failures more gracefully | 23:46 |
jhesketh | corvus: I had a pass at a patch here if you have time to take a look at the approach: https://review.openstack.org/#/c/554890/ | 23:47 |
*** JasonCL has quit IRC | 23:47 | |
jhesketh | given the discussions about zk retries etc., it may also not be something that we still want to do | 23:47 |
jhesketh | (one of the delays in the patch is that I'm having issues getting working tests for it) | 23:47 |
jhesketh | and if we want to drop gearman then that may also make it redundant | 23:48 |
*** JasonCL has joined #zuul | 23:52 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!