*** mwhahaha has joined #zuul | 00:05 | |
*** fdegir has joined #zuul | 00:10 | |
*** Shunde has joined #zuul | 00:40 | |
Shunde | Hello, I am new to zuulv3, just wondering which is the stable version of zuulv3? | 00:41 |
clarkb | Shunde: there hasn't been a zuulv3 release yet that is considered stable (major progress is being made though) | 00:43 |
jeblair | we're hoping to finish everything up and release early next year | 00:45 |
Shunde | thanks guys. can I use it in prod by then? | 00:46 |
dmsimard | Shunde: OpenStack runs its production on it, at peak more than 1000 concurrent jobs. It works for us, but it's not what we'd call stable release quality. | 01:10 |
dmsimard | Real soon now, though. | 01:12 |
Shunde | OK. I'd like to give it a try. Is there any document I can follow if I want to install and have a play? | 01:13 |
jlk | Shunde: https://docs.openstack.org/infra/zuul/ is updated with the v3 content | 01:15 |
pabelanger | odd, that should still be v2 for that URL. https://docs.openstack.org/infra/zuul/feature/zuulv3/ is likely the latest on zuulv3 | 01:18 |
pabelanger | possible we need to retrigger a master build to fix that | 01:18 |
*** Shunde has quit IRC | 01:27 | |
*** jhesketh has quit IRC | 01:59 | |
*** shunde has joined #zuul | 02:05 | |
*** dmellado has quit IRC | 02:42 | |
*** dmellado has joined #zuul | 02:48 | |
*** shunde has quit IRC | 03:17 | |
*** Wei_Liu has quit IRC | 03:34 | |
*** dmellado has quit IRC | 03:37 | |
*** dmellado has joined #zuul | 03:38 | |
SpamapS | so.. how do I get Builds and Jobs for my zuulv3? | 03:47 |
SpamapS | is it hiding somewhere in zuul-web and I just need to plumb it through with my nginx frontend? | 03:48 |
SpamapS | (Those are really neat btw) | 03:48 |
jlk | yeah I think it's hanging off of zuul-web, or is about to be | 03:50 |
*** Wei_Liu has joined #zuul | 03:51 | |
SpamapS | Cool | 03:58 |
SpamapS | funny story.. did you all know that kubernetes has a du-based disk accounting feature too? But it kind of stinks, because they launch a du per volume. I was thinking maybe we should help them by changing it to be more like the executor's disk accountant, which saves a lot of IO by doing all of the active work dirs at one time. | 03:59 |
SpamapS | (This is for the 'emptyDir' volume driver, which is a way to get writable disk space that survives node reboots and container crashes) | 04:02 |
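[Editor's note: the batching idea SpamapS describes can be sketched as a single `du` invocation over all active work dirs; the function name and parsing below are illustrative, not Zuul's actual disk accountant code.]

```python
import subprocess

def measure_dirs(dirs):
    """One `du` call over every active work dir at once, instead of a
    separate du per volume, saves a process spawn and an IO pass per
    directory. Returns {path: size_in_MiB}."""
    out = subprocess.check_output(
        ['du', '-m', '-s', '--'] + list(dirs), encoding='utf-8')
    usage = {}
    for line in out.splitlines():
        size, path = line.split('\t', 1)
        usage[path] = int(size)
    return usage
```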
jlk | SpamapS: wanna learn some Go? :) | 04:03 |
jlk | I sat through a number of "serverless" talks, from people bringing functions as a service to k8s, which was super neat. The concept of a tiny bit of code that's ready to go, a router (LB) that can send tasks to the function, and the ability to use in-built metrics to quickly scale out (and back) the number of pods that can handle the request fits REALLY WELL with a CI system. | 04:05 |
SpamapS | zomg | 04:05 |
SpamapS | https://github.com/gitpython-developers/GitPython/releases/tag/2.1.8 | 04:05 |
SpamapS | @Byron Byron tagged this 12 hours ago · 0 commits to master since this tag | 04:05 |
jlk | It had me thinking of ways to bend Zuul to be more "serverless", and then I just had to stop, and realize that Zuul in k8s could also use the metrics for doing horizontal scaling of executors, web handlers | 04:05 |
SpamapS | \o/ | 04:05 |
SpamapS | 'tis up on pypi | 04:06 |
jlk | Was there something significant to the GitPython release? | 04:06 |
SpamapS | py3 fixes and a gc-induced performance regression | 04:06 |
jlk | ah | 04:06 |
openstackgerrit | Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Drop local fork of GitPython for 2.1.8 release https://review.openstack.org/527298 | 04:07 |
SpamapS | I'm guessing somebody poked them to ask for release/merge | 04:07 |
SpamapS | So I'm probably stealing their thunder. ;) | 04:08 |
jlk | I saw some issues where, with certain git features, if you cloned via gitpython you got some bad stuff | 04:08 |
jlk | dunno, there was a bunch of activity around the Fedora packaging tools that I wrote a long time ago wrt new git and gitpython releases | 04:08 |
*** shunde has joined #zuul | 04:10 | |
openstackgerrit | Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Drop local fork of GitPython for 2.1.8 release https://review.openstack.org/527298 | 04:10 |
* SpamapS adds storyboard tags | 04:10 | |
SpamapS | jlk: I've seen some weirdness too | 04:11 |
shunde | Hi, is it possible to start zuulv3 without a connection? and create a job with cli tools? | 04:11 |
SpamapS | shunde: sort of, but it's not really effective. | 04:11 |
SpamapS | shunde: there's a "git only" driver in the works for that, but it's half done. | 04:11 |
SpamapS | https://storyboard.openstack.org/#!/story/2001336 | 04:12 |
SpamapS | leifmadsen: btw, how is the getting started guide going? I might have a little time to work on it next week. | 04:13 |
SpamapS | Hrm. I need a slack reporter so I can whine if a post job fails. Email is so gauche. | 04:15 |
jlk | lol | 04:15 |
clarkb | gitpython has its issues; if you look in the merger git code you'll find we found quite a few of them | 06:21 |
clarkb | my favorite is needing to ignore the first error on git fetch and try again to see if the error is real | 06:22 |
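[Editor's note: the retry-once pattern clarkb mentions, sketched generically; GitPython's actual exception types and the real merger code are omitted.]

```python
def fetch_with_retry(fetch, attempts=2):
    """Ignore the first failure on fetch and try again; only if the
    second attempt also fails do we treat the error as real."""
    last = None
    for _ in range(attempts):
        try:
            return fetch()
        except Exception as e:  # GitPython would raise GitCommandError here
            last = e
    raise last
```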
SpamapS | if you can't trust gitpython... who can you trust? | 04:24 |
SpamapS | oh fun, latest pull seems to have broken my status.json.. hrm | 04:32 |
SpamapS | gah broke all webhooks too hrm | 04:37 |
SpamapS | http://paste.openstack.org/show/628673/ | 04:43 |
SpamapS | hrm that doesn't look good | 04:43 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Remove unused setup_tables https://review.openstack.org/526970 | 04:50 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Support table prefix for sql reporter https://review.openstack.org/526835 | 04:50 |
SpamapS | ok scheduler apparently needed a hard restart | 04:54 |
SpamapS | but now I'm getting static html that requests /static/js/stuff but there is no /js/ dir in static | 04:55 |
SpamapS | oh look there's a README buried in there | 04:56 |
dmsimard | SpamapS: I was reading your k8s storage stuff | 05:07 |
dmsimard | We had the same "du" thing in OpenShift, we ended up killing it I think. It was for telemetry/monitoring and we didn't care. | 05:07 |
dmsimard | Also, emptydir doesn't survive pod destruction so it depends on the use case I guess, we use hostpath | 05:09 |
SpamapS | dmsimard: emptyDir is the only thing we allow on our internal k8s clusters. I am not sure if we care either. | 05:10 |
dmsimard | jlk: oh wow scaling executors in k8s would be wild (in a good way) | 05:11 |
dmsimard | In our context, maybe nodepool could provision ephemeral executors ?? | 05:12 |
dmsimard | Huh, that'd be kinda cool. Maybe. | 05:13 |
SpamapS | Yeah that's one possibility. | 05:13 |
SpamapS | Entirely doable now actually. | 05:14 |
dmsimard | Yeah I was thinking | 05:14 |
dmsimard | We even have load and disk governor | 05:14 |
* SpamapS translates the zuul-web static README into some ansible for his far-flung fork of hoist | 05:14 | |
dmsimard | We could autoscale automatically | 05:15 |
*** shunde has quit IRC | 05:15 | |
SpamapS | dmsimard: well I kind of think we'd stop relying on that actually | 05:15 |
SpamapS | dmsimard: let a normal k8s autoscaler do it with heapster | 05:15 |
SpamapS | if you want to do it with nodepool tho, yeah, the load/disk thing would be one possibility. Whenever you trigger the "stop accepting jobs" you could also tell nodepool you wish you had another node. | 05:16 |
dmsimard | k8s and heapster, that's a complicated stack for something we can already do :p | 05:16 |
SpamapS | but it's also a lighter weight solution if you already have k8s :) | 05:20 |
* SpamapS didn't necessarily want to be building ansible to minify javascript this evening.. but it seems worth it to get the new pretty zuul-web | 05:34 | |
tobiash | SpamapS: so you're running zuul on k8s? | 05:36 |
tobiash | I'm running it also on k8s (openshift in our case) | 05:37 |
SpamapS | no, I would like to, but for now, I have a VM with our old BonnyCI hoist pointed at it. | 05:37 |
tobiash | ah | 05:37 |
SpamapS | and many many local changes to suit GoDaddy's specialness | 05:37 |
SpamapS | mostly CentOS 7 :-P | 05:37 |
tobiash | the lovely centos :-P | 05:38 |
SpamapS | Yeah honestly I don't get why folks use CentOS. Just go Fedora or pay up for the real RHEL ;) | 05:38 |
SpamapS | tobiash: I recall you were running in Alpine, yes? | 05:39 |
tobiash | the container yes | 05:39 |
tobiash | My openshift now runs on centos atomic | 05:39 |
*** robcresswell has quit IRC | 05:39 | |
SpamapS | We have k8s running on CentOS VMs running on OpenStack. I'm using Zuul to CI the automation that deploys that k8s and I keep thinking "why don't we.. like.. run the zuul there?" | 05:41 |
SpamapS | tobiash: for zuul-web and the like, are you running an nginx/apache in a container too? | 05:41 |
tobiash | yes | 05:41 |
SpamapS | Would be nice if we could just use the built-in k8s LB/ingress. | 05:41 |
SpamapS | but I don't think it's smart enough | 05:42 |
tobiash | well that doesn't support url rewriting | 05:42 |
tobiash | I use route - apache - zuul-web | 05:43 |
tobiash | Because my zuul runs under /zuul | 05:44 |
tobiash | And zuul-web doesn't support that | 05:44 |
SpamapS | yeah I've got nginx in front of mine | 05:45 |
SpamapS | also github hooks still live in the scheduler | 05:45 |
*** fungi has quit IRC | 05:45 | |
tobiash | the github hooks are close to being moved to zuul-web | 05:46 |
SpamapS | Yeah that should make my nginx pretty stupid at that point. :) | 05:46 |
*** fungi has joined #zuul | 05:48 | |
clarkb | how do you do data storage for the zk if you only have ephemeral disk? just hope you dont lose all computes running zk at same time? | 05:52 |
clarkb | (this is specifically for SpamapS, based on the earlier disk storage discussion) | 05:53 |
tobiash | clarkb: currently I don't | 05:53 |
tobiash | clarkb: but planning to use persistent volumes | 05:53 |
SpamapS | clarkb: have a bunch in your ensemble and use node-anti-affinity would be the elegant answer. :) | 05:54 |
tobiash | as a workaround as my log storage (will be artifactory) is not yet available I host the logs on the executor on a persistent volume | 05:54 |
SpamapS | zomg it works! | 05:56 |
SpamapS | this new dashboard is sooooo nice | 05:56 |
tobiash | :) | 05:56 |
tobiash | yay, the sql table prefix merged :) | 05:56 |
tobiash | so now I also can start pushing results into sql :) | 05:57 |
SpamapS | assuming console streaming works as I expect... I can probably delete half my nginx conf. | 05:57 |
SpamapS | tobiash: nice | 05:57 |
tristanC | tobiash: i'm surprised you didn't pick it in your staging container ;) | 05:58 |
clarkb | SpamapS: at least losing nodepool's zk isn't the end of the world, but not sure I would rely on that for data I really cared about | 05:58 |
clarkb | I suppose you could "offsite" backup too | 05:58 |
tristanC | tobiash: are you running nodepool with the new quota? | 05:58 |
tobiash | tristanC: actually this was a saturday morning idea; I wanted it but had no time to add it to my staging branch ;) | 05:59 |
tobiash | tristanC: I've had the nodepool quota patches in my staging branch since they've existed | 05:59 |
tristanC | tobiash: lucky you to get a patch merged in 3 days :-) | 06:00 |
tobiash | well the quota patches are still not merged, but I think they will be soon | 06:01 |
SpamapS | clarkb: Agreed. The word on k8s for stateful services is "soon" | 06:01 |
SpamapS | there _are_ volume drivers | 06:01 |
SpamapS | My users desperately want us to hook k8s up to cinder. | 06:02 |
SpamapS | Which is apparently possible.. but.. for now emptyDir, a good ensemble (n+1, so 5 or 7) and AZ anti-affinity should do OK. Maybe also ship the snapshots off to object storage too | 06:03 |
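[Editor's note: for reference, the quorum arithmetic behind ensemble sizes like 5 or 7 — ZooKeeper serves requests only while a strict majority of the ensemble is up.]

```python
def zk_tolerated_failures(ensemble_size):
    # ZooKeeper needs a majority quorum, so an ensemble of n servers
    # tolerates floor((n - 1) / 2) failures; even sizes add a server
    # without adding fault tolerance, hence the odd 3/5/7 convention.
    return (ensemble_size - 1) // 2
```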
tobiash | SpamapS: that works great if rwo volumes are sufficient for the use case | 06:03 |
tobiash | remove build ssh key failed on https://review.openstack.org/#/c/526971/ | 06:06 |
tobiash | I'll do a recheck | 06:06 |
tobiash | tristanC: re https://review.openstack.org/#/c/503838/18/nodepool/driver/openstack/handler.py@347 | 06:12 |
*** xinliang has quit IRC | 06:13 | |
tobiash | this is currently a private method within the openstack handler | 06:13 |
tobiash | what would change with the refactoring you mentioned? | 06:13 |
tristanC | tobiash: the run_handler would be part of the generic code, and a driver would implement the checkquota method to verify early if it can create a specific node label | 06:15 |
tristanC | tobiash: see line 359 of https://review.openstack.org/#/c/526325/3/nodepool/driver/__init__.py | 06:16 |
tristanC | tobiash: where i had to take into account if a provider has a max-servers or not... with your quota change, this refactor would be easier, we would just call the driver checkProviderQuota | 06:17 |
tobiash | ah I see | 06:17 |
tobiash | so you're moving and generalizing the run_handler out of the openstack driver | 06:18 |
*** tflink has quit IRC | 06:18 | |
tobiash | tristanC: I understand this now, so the question is should this be renamed upfront or during the refactor? | 06:20 |
tristanC | tobiash: as you want, i don't mind either way :) | 06:21 |
tristanC | tobiash: maybe keep it private for now, it will be simple to remove the '_' prefix when it's part of the driver api | 06:21 |
tobiash | tristanC: because like the codebase currently is it should be a private function which would need a rename during the refactor anyway | 06:22 |
tobiash | ok | 06:22 |
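[Editor's note: the refactor tristanC sketches above, in miniature. Class and method names follow the chat (`run_handler`, `checkProviderQuota`); the real nodepool driver interface differs.]

```python
import abc

class NodeRequestHandler(abc.ABC):
    """Generic run_handler, with the quota check delegated to the
    driver so each driver only answers 'can I build this label now?'."""

    @abc.abstractmethod
    def checkProviderQuota(self, label):
        """Return True if the provider can create this node label."""

    def run_handler(self, request_labels):
        # Decline early if any requested label exceeds provider quota.
        if all(self.checkProviderQuota(l) for l in request_labels):
            return 'accepted'
        return 'declined'
```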
*** tflink has joined #zuul | 06:23 | |
*** xinliang has joined #zuul | 06:26 | |
*** xinliang has quit IRC | 06:26 | |
*** xinliang has joined #zuul | 06:26 | |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Add cloud quota handling https://review.openstack.org/503838 | 06:28 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Don't fail on quota exceeded https://review.openstack.org/503051 | 06:28 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Make max-servers optional https://review.openstack.org/504282 | 06:28 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Support cores limit per pool https://review.openstack.org/504283 | 06:28 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Support ram limit per pool https://review.openstack.org/504284 | 06:28 |
*** sshnaidm|off is now known as sshnaidm|ruck | 06:49 | |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Add cloud quota handling https://review.openstack.org/503838 | 06:51 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Don't fail on quota exceeded https://review.openstack.org/503051 | 06:51 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Make max-servers optional https://review.openstack.org/504282 | 06:51 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Support cores limit per pool https://review.openstack.org/504283 | 06:51 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Support ram limit per pool https://review.openstack.org/504284 | 06:51 |
*** hashar has joined #zuul | 06:52 | |
*** myoung|rover|bbl is now known as myoung|rover | 06:53 | |
*** xinliang has quit IRC | 07:34 | |
SpamapS | hrm, something in recent commits has made my zuul hit our GHE rate limits... or our GHE has ratcheted them down | 07:39 |
*** robcresswell has joined #zuul | 07:39 | |
*** jpena|off is now known as jpena | 07:39 | |
tobiash | hrm, something seems to be broken, that shouldn't timeout/post_fail https://review.openstack.org/#/c/526971 | 07:47 |
*** xinliang has joined #zuul | 07:47 | |
*** xinliang has quit IRC | 07:47 | |
*** xinliang has joined #zuul | 07:47 | |
tobiash | looks like some connection issues http://logs.openstack.org/72/526972/1/check/build-openstack-sphinx-docs/54eb9b1/job-output.txt.gz#_2017-12-12_06_24_37_243356 | 07:48 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Fix line wrapping in github docs https://review.openstack.org/526971 | 08:15 |
SpamapS | You know.. getting nginx to match /{{tenant}}/console-stream is really annoyingly hard | 08:19 |
tobiash | SpamapS: can you match */console-stream? | 08:24 |
tobiash | SpamapS: isn't nginx able to transparently forward websockets? | 08:25 |
tobiash | apache is not, but I think I read somewhere that nginx can do this | 08:25 |
SpamapS | tobiash: It has to know which things to forward. | 08:25 |
SpamapS | http://paste.openstack.org/show/628681/ | 08:27 |
SpamapS | that seems to work | 08:27 |
tobiash | this is my forwarding (apache based): http://paste.openstack.org/show/628682/ | 08:27 |
SpamapS | actually nope, this one doesn't work | 08:27 |
SpamapS | if I statically put in location /tenant/console-stream it works fine (and doesn't need the rewrite line) | 08:28 |
SpamapS | but nginx has something weird about regex+proxy_pass | 08:28 |
tobiash | SpamapS: what about proxying all stuff to zuul-web (except webhook)? | 08:32 |
tobiash | and don't handle websockets separately? | 08:32 |
tobiash | this might also work for static content in nginx | 08:33 |
SpamapS | tobiash: You have to upgrade to a websocket at the nginx/apache layer. | 08:33 |
SpamapS | asyncio can't do it. | 08:33 |
tobiash | SpamapS: yeah, I mean upgrade regardless | 08:33 |
SpamapS | Oh I dunno if that works | 08:33 |
tobiash | could be worth a try | 08:34 |
tobiash | SpamapS: do you need url rewriting? | 08:34 |
tobiash | if not you might also try out haproxy which doesn't care about websockets | 08:34 |
SpamapS | No I think you might be on to something, static URLs are working fine with a catch-all that upgrades everything | 08:35 |
SpamapS | hm but zuul-web doesn't seem to be handling it well | 08:35 |
SpamapS | Like once the stream starts it's fine, but otherwise there are some tracebacks | 08:36 |
tobiash | with haproxy websockets just work without any special config | 08:36 |
SpamapS | http://paste.openstack.org/show/628683/ | 08:36 |
SpamapS | That may be unrelated | 08:36 |
SpamapS | tobiash: thanks, I don't actually need a special rule for streaming. ;) | 08:37 |
tobiash | I think I saw that also in my environment | 08:37 |
SpamapS | 'twas an assumption based on past experience | 08:37 |
SpamapS | so my nginx config has gotten even dumber. :) | 08:37 |
tobiash | yeah, that's the advantage over apache where this doesn't work (one needs a special websocket proxy module for websockets) | 08:38 |
tobiash | hm, can't get gearman encryption to work | 08:41 |
SpamapS | So now my nginx config is just this: | 08:41 |
SpamapS | http://paste.openstack.org/show/628686/ | 08:41 |
tobiash | looks pretty minimal now :) | 08:42 |
SpamapS | /connection being the github webhook | 08:42 |
tobiash | that also can be removed soon | 08:42 |
SpamapS | Yeah that will be sweet :) | 08:42 |
tobiash | I get a gearman connect loop: http://paste.openstack.org/show/628685/ | 08:42 |
tobiash | config is http://paste.openstack.org/show/628684/ | 08:43 |
tobiash | cert, key, ca are existing and cat'able | 08:43 |
SpamapS | That sort of looks to me like it's saying the other side of the conn didn't return a cert | 08:44 |
SpamapS | does openssl s_client show a cert? | 08:44 |
tobiash | checking, need to get openssl into the container | 08:46 |
SpamapS | hrm... now getting 'Can "Upgrade" only to "WebSocket" in zuul-web logs | 08:50 |
tobiash | so upgrade all doesn't really work :( | 08:52 |
SpamapS | might not.. I'm not really sure what's up | 08:52 |
SpamapS | it did work once | 08:52 |
SpamapS | but now zuul-web is unhappy | 08:52 |
SpamapS | ah ok I think I actually messed it up while consolidating | 08:55 |
SpamapS | has to be "proxy_pass http://127.0.0.1:9000" without a trailing slash | 08:56 |
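[Editor's note: the working setup described above, approximately — a catch-all location that proxies everything to zuul-web and upgrades to a websocket only when the client asks. Port and addresses follow the pastes and are assumptions; note that nginx rejects a `proxy_pass` with a URI part inside a regex location, which is why the URI-less form is used.]

```nginx
# Map the client's Upgrade header to the Connection header we send
# upstream: "upgrade" for websocket requests, "close" otherwise.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:9000;   # no trailing slash / URI part
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```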
*** jappleii__ has quit IRC | 09:10 | |
tobiash | ok, got one step further with the ssl stuff, the gear server seems to return a valid cert to openssl s_client | 09:15 |
*** dmellado has quit IRC | 09:17 | |
*** flepied_ has quit IRC | 09:27 | |
*** dmellado has joined #zuul | 09:48 | |
*** jhesketh has joined #zuul | 10:04 | |
*** dmellado has quit IRC | 10:07 | |
*** flepied_ has joined #zuul | 10:16 | |
*** dmellado has joined #zuul | 10:20 | |
*** rcarrillocruz has quit IRC | 10:46 | |
*** dmellado has quit IRC | 10:50 | |
*** rcarrillocruz has joined #zuul | 10:52 | |
*** dmellado has joined #zuul | 10:59 | |
*** rcarrillocruz has quit IRC | 11:02 | |
*** dmellado has quit IRC | 11:16 | |
*** dmellado has joined #zuul | 11:18 | |
*** dmellado has quit IRC | 11:20 | |
*** dmellado has joined #zuul | 11:24 | |
*** jkilpatr has quit IRC | 11:32 | |
*** dmellado has quit IRC | 11:32 | |
*** dmellado has joined #zuul | 11:33 | |
openstackgerrit | Fabien Boucher proposed openstack-infra/zuul feature/zuulv3: Git driver https://review.openstack.org/525614 | 11:38 |
*** jkilpatr has joined #zuul | 12:05 | |
*** hashar is now known as hasharAway | 12:22 | |
*** jpena is now known as jpena|lunch | 12:32 | |
*** rcarrillocruz has joined #zuul | 12:58 | |
*** baiyi has quit IRC | 13:07 | |
*** weshay|sickday is now known as weshay | 13:15 | |
*** jpena|lunch is now known as jpena | 13:28 | |
*** hasharAway is now known as hashar | 13:33 | |
dmsimard | tobiash: does the new zuul-web query the scheduler somehow? | 14:04 |
dmsimard | or how does it get its data? | 14:04 |
*** hashar has quit IRC | 14:10 | |
*** toabctl has quit IRC | 14:10 | |
*** toabctl has joined #zuul | 14:11 | |
Shrews | dmsimard: gearman is usually used for communication between the things | 14:12 |
Shrews | dmsimard: the websocket portion of zuul-web queries the scheduler via gearman to find which executor is executing a particular build (for example) | 14:13 |
dmsimard | Shrews: zuulv3 memory is nuts and something definitely happened yesterday: http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=63979&rra_id=all | 14:14 |
dmsimard | It would be odd to be zuul-web | 14:14 |
dmsimard | but trying to understand how it could've spiked so bad after being stable for a long while | 14:14 |
*** dkranz has joined #zuul | 14:14 | |
Shrews | yeah, that's totes nuts. i don't know what changes have gone in recently to those though (been heads down in other things) | 14:14 |
Shrews | i saw jeblair restarted something yesterday after a force merge of a fix | 14:15 |
odyssey4me | this is all still on a single host though isn't it? IIRC it was a v4 thing to allow horizontal scaling | 14:15 |
odyssey4me | perhaps a bigger instance needs to be scheduled to give more headroom | 14:16 |
Shrews | dmsimard: why do you suspect zuul-web? | 14:16 |
dmsimard | Shrews: no particular reason, looking for things that have landed recently | 14:16 |
dmsimard | Shrews: jeblair sent an email yesterday to openstack-dev about the dashboard and maybe caused a bunch of people to start looking at it | 14:16 |
dmsimard | the zuul-web process itself doesn't use a lot of memory | 14:17 |
Shrews | zuul-scheduler is currently taking 87.7% of total MEM. next is zuul-web with 0.4% | 14:17 |
*** hashar has joined #zuul | 14:28 | |
jtanner | I heard a rumor that zuul has hacked in a way to get realtime output from an ansible module call ... is that true? where could I go to find the implementation code? | 14:48 |
jtanner | jlk: ^ | 14:51 |
openstackgerrit | Fabien Boucher proposed openstack-infra/zuul feature/zuulv3: Git driver https://review.openstack.org/525614 | 14:52 |
rcarrillocruz | I guess you mean the zuul stream stuff for streaming shell tasks | 14:53 |
clarkb | jtanner: the two pieces are in https://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/ansible/library/command.py which writes to a file and then https://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/ansible/library/zuul_console.py reads it back and streams it to you | 14:53 |
clarkb | and ya I think its currently limited to command and things implemented with command | 14:54 |
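[Editor's note: in miniature, the file-follow half of that scheme — `command.py` writes the console log to a file and `zuul_console.py` tails it back to the client. The real protocol is richer; this is only the tail loop.]

```python
import time

def follow(path, stop, poll=0.05):
    """Yield lines from a console log file as they are appended,
    sleeping briefly at EOF until stop() says the build is done."""
    with open(path) as f:
        while True:
            line = f.readline()
            if line:
                yield line
            elif stop():
                return
            else:
                time.sleep(poll)
```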
jtanner | ah, thank you | 14:57 |
dmsimard | jtanner: mordred would surely love to talk to you about it | 15:03 |
jtanner | not for me, for someone else | 15:03 |
jtanner | code refs were good enough for now | 15:04 |
tobiash | dmsimard: yes, zuul-web gets the status.json via gearman from the scheduler | 15:28 |
dmsimard | tobiash: ok, thanks | 15:29 |
tobiash | I don't know if there is some caching | 15:29 |
tobiash | dmsimard: zuul-web caches the status.json | 15:32 |
tobiash | Just double checked | 15:32 |
dmsimard | ok, so it doesn't need to poke it every time then -- just making sure (especially around searching jobs, builds) | 15:33 |
tobiash | dmsimard: maybe the duplicate key check which merged yesterday is somehow related | 15:36 |
dmsimard | tobiash: we're troubleshooting in #openstack-infra-incident | 15:37 |
dmsimard | Jim is poking at it | 15:37 |
tobiash | Ah, ok, then all will be good soon :) | 15:38 |
gundalow | Anyone know how the RDO cloud work is going, seems like it's still down | 15:56 |
dmsimard | gundalow: probably not the right channel to ask this in but I have an answer for you anyway :D | 15:57 |
dmsimard | gundalow: yes, there's still ongoing upgrades | 15:57 |
dmsimard | rcarrillocruz, gundalow: this is affecting you guys I guess ? | 15:58 |
rcarrillocruz | yah | 15:58 |
rcarrillocruz | gundalow: there's a channel on internal redhat IRC | 15:58 |
rcarrillocruz | go to mojo, connect to it | 15:59 |
rcarrillocruz | #rhos-ops | 15:59 |
rcarrillocruz | dmsimard: yup, we have our CI there | 15:59 |
dmsimard | rcarrillocruz: ok, yeah that channel is best to stay up to date | 15:59 |
gundalow | dmsimard: yup, completely abusing this channel, though I wasn't aware of where else to ask | 16:02 |
gundalow | Thanks :) | 16:02 |
rbergeron | jeez, it's like an ansible party in here | 16:24 |
rcarrillocruz | :D | 16:30 |
gundalow | WOOT WOOT | 16:32 |
gundalow | rbergeron: you said that like it's a bad thing | 16:32 |
pabelanger | <3 ansible | 16:36 |
*** jpena is now known as jpena|off | 16:44 | |
rbergeron | gundalow: def. not a bad thing :) was just funny to switch into this channel and see like... alllll the ansipeeps chatting | 16:46 |
*** JasonCL has joined #zuul | 16:47 | |
gundalow | rbergeron: oh that's my fault as I'm not on internal IRC, so I came here to cause trouble as I knew smart kids in here would know the answers | 16:47 |
*** hashar has quit IRC | 17:00 | |
*** JasonCL has quit IRC | 17:01 | |
*** JasonCL has joined #zuul | 17:01 | |
*** JasonCL has quit IRC | 17:03 | |
*** JasonCL has joined #zuul | 17:06 | |
*** JasonCL has quit IRC | 17:10 | |
*** flepied__ has joined #zuul | 17:33 | |
*** flepied_ has quit IRC | 17:35 | |
rbergeron | gundalow: i avoid that thing. mostly because it means i have to like... get on the vpn and be sad | 17:38 |
Shrews | grrr, our common code for zuul daemons does not allow for setting uid/gid/umask. going to have to change that for the finger gateway | 18:08 |
Shrews | oh poo. that affects grabbing the privileged listening socket | 18:09 |
Shrews | hrm, i have a chicken and egg thing here with pid files and sockets | 18:10 |
clarkb | Shrews: because your pid changes when you fork the child to remove privileges? | 18:11 |
Shrews | clarkb: the user does, which means it cannot remove the pid file on exit | 18:12 |
Shrews | but if i let the daemon module change the user, i can't grab port 79 | 18:12 |
clarkb | can you chown before you switch users? | 18:12 |
Shrews | clarkb: oh, that might be the simplest solution | 18:13 |
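[Editor's note: clarkb's suggestion in order-of-operations form — a sketch only; the `_os` parameter exists purely to make the ordering testable and is not part of any real zuul API.]

```python
import grp
import os
import pwd

def drop_privileges(pidfile, user, group, _os=os):
    """Bind any privileged socket (e.g. port 79) before calling this.
    Chown the pid file first so the unprivileged user can still remove
    it on exit, then drop group and user -- setuid() last, since after
    it we can no longer change ids."""
    uid = pwd.getpwnam(user).pw_uid
    gid = grp.getgrnam(group).gr_gid
    _os.chown(pidfile, uid, gid)
    _os.setgid(gid)
    _os.setuid(uid)
```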
*** electrofelix has quit IRC | 18:20 | |
SpamapS | Ooo I just hit a weird bug with Github | 18:31 |
SpamapS | scenario: PR against the wrong base (so I made a patch against a backport/stable branch)... | 18:31 |
SpamapS | Zuul immediately is like "hey this is broken t3h broke, can't merge FYI" and marks check as failed | 18:32 |
SpamapS | I go "oops" close the PR, and reopen against the right base (backport branch) | 18:32 |
SpamapS | the backport branch doesn't have a 'check' pipeline because it's old 'n busted | 18:32 |
SpamapS | but the _hash_ is already marked with a check-failed | 18:32 |
SpamapS | so the PR now looks like it failed tests | 18:32 |
gundalow | SpamapS: FYI, In GitHub if you click the edit button (like you editing the title of the PR) you can alter the branch | 18:32 |
SpamapS | gundalow: that I did not know. But either way, same problem. I need to clear that failed status. | 18:33 |
gundalow | ah. yes, I see | 18:33 |
SpamapS | it's happening because with the github status API, statuses are against source commit hashes, not merge results/PR's | 18:33 |
clarkb | that seems liek a data model bug | 18:34 |
SpamapS | so we might need to make zuul able to clear statuses it pushed if a PR is closed. | 18:34 |
SpamapS | clarkb: It _SO_ is | 18:34 |
SpamapS | like, 100% | 18:34 |
SpamapS | but we're kind of working around GH's weird data model already | 18:34 |
SpamapS | so we might have to work around it more. | 18:34 |
SpamapS | I think the right thing is for closed PR's to cause status reports to be cleared. | 18:35 |
SpamapS | so Zuul would need to see a closed PR, look at the commits in it, and clear any statuses it set as a result of that PR. I'm not 100% sure it can do that.. because I'm not 100% sure we have enough metadata in the statuses to know for sure that we set them as a result of said PR. | 18:36 |
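[Editor's note: GitHub's status API has no delete; the closest to "clearing" a status is overwriting the same context with a fresh one via POST /repos/{owner}/{repo}/statuses/{sha}. The sketch below only builds the request; names, token, and description text are made up.]

```python
import json
import urllib.request

def build_status_reset(owner, repo, sha, context, token):
    """Build (but do not send) a request that overwrites an old status:
    posting the same context on the same commit replaces what GitHub
    shows on any PR containing that hash."""
    url = f'https://api.github.com/repos/{owner}/{repo}/statuses/{sha}'
    body = json.dumps({
        'state': 'pending',
        'context': context,
        'description': 'PR closed; previous result no longer applies',
    }).encode()
    return urllib.request.Request(
        url, data=body, method='POST',
        headers={'Authorization': 'token ' + token,
                 'Content-Type': 'application/json'})
```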
*** Wei_Liu has quit IRC | 18:36 | |
SpamapS | BTW this also manifests in the rare case where the git merge method results in a "backwards" merge parent/child relationship. We set status on the last commit hash, but git goes "oh hey this changed so much, it's easier to flip the parent/child order" and then we end up with the PR showing "pending" on the gate status. | 18:37 |
SpamapS | (that one is fixable if we ever teach zuul to push merges though) | 18:37 |
*** Wei_Liu has joined #zuul | 18:58 | |
*** myoung|rover is now known as myoung|rover|bia | 18:58 | |
*** myoung|rover|bia is now known as myoung|rover|bbl | 19:00 | |
*** JasonCL has joined #zuul | 19:41 | |
*** JasonCL has quit IRC | 19:43 | |
*** JasonCL has joined #zuul | 19:44 | |
*** JasonCL has quit IRC | 19:49 | |
kklimonda | jeblair: when you mention "repl" on -incident, is this a local patch not yet merged? the only mention of REPL I can find is https://review.openstack.org/#/c/508793/ | 19:51 |
jeblair | kklimonda: yes, that patch, which i have applied locally | 19:52 |
kklimonda | mhm | 19:52 |
*** JasonCL has joined #zuul | 19:59 | |
*** JasonCL has quit IRC | 20:02 | |
*** JasonCL has joined #zuul | 20:02 | |
*** myoung|rover|bbl is now known as myoung|rover | 20:05 | |
openstackgerrit | Fabien Boucher proposed openstack-infra/zuul feature/zuulv3: Git driver https://review.openstack.org/525614 | 20:17 |
*** flepied__ has quit IRC | 20:17 | |
openstackgerrit | Merged openstack-infra/zuul-jobs master: releasenotes: Remove package install https://review.openstack.org/526850 | 20:19 |
*** dmellado has quit IRC | 20:32 | |
*** umbarkar has joined #zuul | 20:33 | |
*** jkilpatr has quit IRC | 20:33 | |
*** openstackgerrit has quit IRC | 20:34 | |
*** dmellado has joined #zuul | 20:34 | |
*** patriciadomin has quit IRC | 20:35 | |
*** tflink has quit IRC | 20:35 | |
*** tumbarka has quit IRC | 20:35 | |
*** _ari_ has quit IRC | 20:35 | |
*** myoung|rover has quit IRC | 20:35 | |
*** myoung has joined #zuul | 20:39 | |
*** tflink has joined #zuul | 20:39 | |
*** patriciadomin has joined #zuul | 20:40 | |
*** _ari_ has joined #zuul | 20:40 | |
*** adam_g has quit IRC | 20:45 | |
*** kmalloc has quit IRC | 20:45 | |
*** clarkb has quit IRC | 20:45 | |
*** clarkb has joined #zuul | 20:45 | |
*** kmalloc has joined #zuul | 20:45 | |
*** adam_g has joined #zuul | 20:46 | |
*** myoung is now known as myoung|rover | 20:47 | |
*** patriciadomin has quit IRC | 20:47 | |
*** JasonCL has quit IRC | 20:47 | |
*** JasonCL has joined #zuul | 20:48 | |
*** patriciadomin has joined #zuul | 20:50 | |
SpamapS | FYI, I submitted my github issues as a story here; https://storyboard.openstack.org/#!/story/2001403 | 20:50 |
*** JasonCL has quit IRC | 20:52 | |
*** openstackgerrit has joined #zuul | 21:00 | |
openstackgerrit | Jeremy Stanley proposed openstack-infra/zuul-base-jobs master: Add generic base and base-test jobs/playbooks https://review.openstack.org/526140 | 21:00 |
*** JasonCL has joined #zuul | 21:07 | |
*** qwc has quit IRC | 21:07 | |
*** qwc has joined #zuul | 21:09 | |
*** JasonCL has quit IRC | 21:20 | |
*** JasonCL has joined #zuul | 21:21 | |
*** jappleii__ has joined #zuul | 21:23 | |
*** JasonCL has quit IRC | 21:24 | |
*** jappleii__ has quit IRC | 21:24 | |
*** JasonCL has joined #zuul | 21:25 | |
*** jappleii__ has joined #zuul | 21:25 | |
*** jappleii__ has quit IRC | 21:26 | |
*** JasonCL has quit IRC | 21:26 | |
*** jappleii__ has joined #zuul | 21:27 | |
*** flepied__ has joined #zuul | 21:27 | |
*** JasonCL has joined #zuul | 21:27 | |
*** jappleii__ has quit IRC | 21:27 | |
*** jappleii__ has joined #zuul | 21:28 | |
*** JasonCL has quit IRC | 21:34 | |
*** dkranz has quit IRC | 21:40 | |
*** JasonCL has joined #zuul | 21:50 | |
*** JasonCL has quit IRC | 21:51 | |
*** JasonCL has joined #zuul | 21:52 | |
*** JasonCL has quit IRC | 21:59 | |
*** jkilpatr has joined #zuul | 22:04 | |
*** JasonCL has joined #zuul | 22:24 | |
*** JasonCL has quit IRC | 22:26 | |
*** JasonCL has joined #zuul | 22:26 | |
*** JasonCL has quit IRC | 22:31 | |
*** JasonCL has joined #zuul | 22:46 | |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul-jobs master: WIP: Revert "Revert "Add sphinx_python variable to sphinx role and job"" https://review.openstack.org/526666 | 22:49 |
*** JasonCL has quit IRC | 22:50 | |
*** JasonCL has joined #zuul | 22:56 | |
*** JasonCL has quit IRC | 23:00 | |
*** JasonCL has joined #zuul | 23:08 | |
*** JasonCL has quit IRC | 23:12 | |
*** JasonCL has joined #zuul | 23:25 | |
*** JasonCL has quit IRC | 23:30 | |
*** JasonCL has joined #zuul | 23:54 | |
*** JasonCL has quit IRC | 23:58 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!