*** sdake has quit IRC | 00:01 | |
*** tranzemc_ has joined #zuul | 00:02 | |
*** sdake has joined #zuul | 00:06 | |
*** tranzemc has quit IRC | 00:06 | |
*** jhesketh has quit IRC | 00:06 | |
*** sshnaidm|off has quit IRC | 00:06 | |
*** adam_g has quit IRC | 00:06 | |
*** frickler has quit IRC | 00:06 | |
*** mhu has quit IRC | 00:06 | |
*** eandersson has quit IRC | 00:06 | |
tristanC | mordred: it seems like angular is eol: https://blog.angular.io/stable-angularjs-and-long-term-support-7e077635ee9c | 00:09 |
*** aspiers[m] has quit IRC | 00:10 | |
*** austinsun[m] has quit IRC | 00:10 | |
tristanC | oops, that is angularjs, i totally misread this, nevermind that previous query | 00:12 |
*** jhesketh has joined #zuul | 00:15 | |
*** sshnaidm|off has joined #zuul | 00:15 | |
*** adam_g has joined #zuul | 00:15 | |
*** frickler has joined #zuul | 00:15 | |
*** mhu has joined #zuul | 00:15 | |
*** eandersson has joined #zuul | 00:15 | |
*** threestrands has joined #zuul | 00:23 | |
*** threestrands has quit IRC | 00:23 | |
*** threestrands has joined #zuul | 00:23 | |
*** austinsun[m] has joined #zuul | 00:50 | |
*** robled has quit IRC | 00:50 | |
*** robled has joined #zuul | 00:54 | |
*** harlowja has quit IRC | 01:09 | |
*** aspiers[m] has joined #zuul | 01:14 | |
*** rlandy|bbl is now known as rlandy | 02:17 | |
pabelanger | Oh, hmm... | 02:21 |
pabelanger | I _think_ if you define a role in a trusted project, say openstack-zuul-jobs, then in a child job also add a role for openstack-zuul-jobs, the child job's version of openstack-zuul-jobs is from the trusted playbook, not untrusted | 02:22 |
pabelanger | I don't think I've ever tested that before, but would have expected the child job role path to include the untrusted version | 02:23 |
pabelanger | since the child job is also untrusted | 02:23 |
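For anyone following along who hasn't seen the syntax being discussed: a job picks up roles from another project roughly like the sketch below. The job name and playbook path are invented for illustration; only the roles project comes from the messages above.

```yaml
# Hypothetical child job that adds roles from another project. Zuul checks
# the listed project out and puts its roles/ directory on the Ansible role
# path for this job's playbooks.
- job:
    name: example-child-job
    parent: base
    roles:
      - zuul: openstack-infra/openstack-zuul-jobs
    run: playbooks/example/run.yaml
```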
pabelanger | https://pastebin.com/raw/4aTGNsEm for those interested | 02:26 |
pabelanger | 2018-07-20 02:14:25,742 DEBUG zuul.AnsibleJob: [build: 1a634e2a0b7d4fa3972f1418305d94c2] Using existing repo rdo-jobs@master in trusted space /tmp/1a634e2a0b7d4fa3972f1418305d94c2/untrusted/project_0 seems to be the key piece of info | 02:27 |
pabelanger | For now, I think I can remove rdo-jobs from the config project, as it is no longer needed. But it just caught me by surprise | 02:31 |
pabelanger | OMG | 02:53 |
pabelanger | git add fail | 02:53 |
pabelanger | my patchset didn't have the role included | 02:53 |
pabelanger | I'll retest above in the morning | 02:53 |
*** threestrands has quit IRC | 02:54 | |
*** threestrands has joined #zuul | 03:10 | |
*** threestrands has quit IRC | 03:10 | |
*** threestrands has joined #zuul | 03:10 | |
*** bhavik1 has joined #zuul | 03:42 | |
*** rapex has joined #zuul | 04:11 | |
rapex | hh | 04:11 |
*** rapex has left #zuul | 04:12 | |
*** harlowja has joined #zuul | 04:30 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul master: web: add /{tenant}/job/{job_name} route https://review.openstack.org/550978 | 04:34 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul master: web: add /{tenant}/projects and /{tenant}/project/{project} routes https://review.openstack.org/550979 | 04:34 |
*** bhavik1 has quit IRC | 04:38 | |
*** threestrands has quit IRC | 04:40 | |
*** jiapei has joined #zuul | 04:44 | |
*** jimi|ansible has quit IRC | 04:49 | |
*** harlowja has quit IRC | 04:52 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul master: web: add /{tenant}/projects and /{tenant}/project/{project} routes https://review.openstack.org/550979 | 05:00 |
*** pcaruana has joined #zuul | 05:09 | |
openstackgerrit | Ian Wienand proposed openstack-infra/zuul master: Add variables to project https://review.openstack.org/584230 | 05:25 |
*** quiquell|off is now known as quiquell | 05:31 | |
*** pawelzny has joined #zuul | 05:33 | |
openstackgerrit | Ian Wienand proposed openstack-infra/zuul master: Add variables to project https://review.openstack.org/584230 | 05:48 |
*** dmellado has quit IRC | 05:48 | |
*** gouthamr has quit IRC | 05:49 | |
*** nchakrab_ has joined #zuul | 06:25 | |
*** dmellado has joined #zuul | 06:29 | |
*** gouthamr has joined #zuul | 06:31 | |
*** rlandy has quit IRC | 06:45 | |
*** hashar has joined #zuul | 07:11 | |
*** jiapei has quit IRC | 07:24 | |
*** jamielennox has quit IRC | 07:31 | |
*** dvn has quit IRC | 07:32 | |
*** electrofelix has joined #zuul | 07:56 | |
*** jamielennox has joined #zuul | 08:05 | |
*** elyezer has quit IRC | 08:26 | |
*** hashar is now known as hasharAway | 08:59 | |
*** austinsun[m] has quit IRC | 09:42 | |
*** aspiers[m] has quit IRC | 09:42 | |
*** austinsun[m] has joined #zuul | 09:50 | |
*** sshnaidm|off has quit IRC | 09:50 | |
*** dvn has joined #zuul | 09:51 | |
*** aspiers[m] has joined #zuul | 10:12 | |
*** quiquell is now known as quiquell|brb | 10:26 | |
*** quiquell|brb is now known as quiquell | 10:29 | |
*** nchakrab_ has quit IRC | 10:54 | |
*** nchakrab has joined #zuul | 10:55 | |
*** nchakrab has quit IRC | 10:59 | |
*** nchakrab has joined #zuul | 11:18 | |
*** bhavik1 has joined #zuul | 11:57 | |
*** pawelzny has quit IRC | 11:58 | |
*** bhavik1 has quit IRC | 12:08 | |
*** elyezer has joined #zuul | 12:19 | |
*** rlandy has joined #zuul | 12:31 | |
*** quiquell is now known as quiquell|lunch | 12:44 | |
*** sshnaidm|off has joined #zuul | 12:48 | |
*** samccann has joined #zuul | 12:50 | |
*** quiquell|lunch is now known as quiquell | 13:05 | |
rcarrillocruz | folks, in the zuul tenant file is there no way to have a pattern glob or regex for untrusted projects? | 13:33 |
rcarrillocruz | like, any new project that needs to be put under zuul management seems to require being added to the untrusted-projects list; i wonder if there's a way to say 'all under the org' or all matching some regex | 13:33 |
pabelanger | don't believe we support regex in main.yaml today | 13:35 |
pabelanger | but agree should be pushing all projects to be untrusted when possible | 13:35 |
tobiash | rcarrillocruz: not yet | 13:40 |
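For reference, the tenant file being discussed enumerates every project explicitly; a minimal sketch of the relevant part of main.yaml (tenant and project names are placeholders) might look like:

```yaml
# Sketch of a tenant definition; there is no glob/regex support here today,
# so every new project has to be added to the list by hand.
- tenant:
    name: example-tenant
    source:
      gerrit:
        config-projects:
          - example-org/project-config
        untrusted-projects:
          - example-org/service-a
          - example-org/service-b
```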
*** TheJulia is now known as needssleep | 13:40 | |
*** EmilienM is now known as EvilienM | 13:41 | |
openstackgerrit | Monty Taylor proposed openstack-infra/nodepool master: Build container images using pbrx https://review.openstack.org/582732 | 13:58 |
*** quiquell is now known as quiquell|off | 14:03 | |
*** nchakrab has quit IRC | 14:39 | |
*** goern__ is now known as goern | 14:46 | |
*** elyezer has quit IRC | 14:46 | |
pabelanger | okay, ignore everything I said last night about the rdo-jobs example, I was way too tired to be working on zuul | 14:46 |
pabelanger | looking to bounce an idea off people, if anybody is up for job design. Today in tripleo there is legacy hardware called te-broker; it is basically a server that does node provisioning via heat stacks. Not wanting to add heat into nodepool, I've been toying with 2 ideas: A) use the static node driver to just SSH into te-broker and run commands via ansible, this could work, maybe with a semaphore. B) use the | 14:49 |
pabelanger | ansible OS module to run heat stacks directly from the executor, I think this model is more like how we are going to do k8s? | 14:49 |
pabelanger | and I guess C is to drop heat, and try to move things into nodepool-builder / nodepool-launcher + a zuul job to provision the node | 14:50 |
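Option A would lean on nodepool's static node driver; a rough sketch of what that pool could look like follows (hostname, label, and user are made up for illustration):

```yaml
# Hypothetical nodepool.yaml fragment: hand out the existing te-broker host
# as a static node so jobs SSH in and run the provisioning commands there.
labels:
  - name: te-broker

providers:
  - name: te-broker-static
    driver: static
    pools:
      - name: main
        nodes:
          - name: te-broker.example.com
            labels:
              - te-broker
            username: zuul
```

Since nodepool hands a node to only one job at a time, this serializes access to the broker without extra locking, which is the point mordred makes a few lines below.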
*** acozine1 has joined #zuul | 14:53 | |
mordred | pabelanger: I'd vote for C | 14:56 |
mordred | B might be a bit harder because you'd have to either have that bit be trusted or you'd have to whitelist the os modules - and you'd need to make sure sdk was installed on the executor such that it could be accessible | 14:57 |
mordred | however - we're gonna need to install openstacksdk in infra anyway for the logs-in-swift, so that's not super terrible | 14:58 |
mordred | pabelanger: for A - you shouldn't need a semaphore, since nodepool will only hand access to the node to one job at a time | 14:58 |
mordred | you could also do an add_host with the te-broker machine - but then you WOULD need a semaphore | 14:58 |
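The add_host variant would look something like the sketch below; the semaphore name, job name, playbook path, and host are all hypothetical, and create-env merely stands in for the existing provisioning script mentioned later in this log.

```yaml
# zuul.d/jobs.yaml (sketch): serialize access to the shared broker
- semaphore:
    name: te-broker
    max: 1
- job:
    name: te-broker-provision
    semaphore: te-broker
    run: playbooks/te-broker.yaml
```

```yaml
# playbooks/te-broker.yaml (sketch): add the broker to the inventory at
# runtime, then run the provisioning command on it
- hosts: localhost
  tasks:
    - add_host:
        name: te-broker.example.com
        groups: broker
        ansible_user: zuul

- hosts: broker
  tasks:
    - command: ./create-env   # path illustrative
```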
pabelanger | yah, in fact, B1?, is testing ansible and os_heat from a nodepool node. But I'm pretty reluctant to do that since a job would have access to an openstack tenant where heat runs. The fear of a password leak would be high to me | 14:59 |
pabelanger | mordred: ah, right | 14:59 |
mordred | pabelanger: but honestly- are the stacks being used from heat complicated? or are they just some multi-node sets of things? | 15:00 |
pabelanger | I _think_ the issue for C is needing to figure out the provider network bits that heat might be doing. I think there is a case where an overlay network isn't going to work, due to the way they are testing ironic and virtual-bmc things | 15:01 |
mordred | like - if they were "make 4 machines, 2 neutron networks, 5 cinder volumes and a magnum cluster" - I'd say "yup, that's heat's job" - but if it's just "I need 3 machines" ... nodepool is already good at that and the multi-node base job is already good at setting up networking | 15:01 |
pabelanger | mordred: not sure, that is what I am looking into today | 15:01 |
mordred | nod | 15:01 |
mordred | so - I think that should still be possible to solve with a base job - maybe an alternate implementation of what we're doing upstream in multinode | 15:01 |
mordred | but stitching in provider networks instead, or something | 15:01 |
pabelanger | I've already ported one heat stack with a simple multinode job: https://review.rdoproject.org/r/14768/ the hard part now is hooking up to the provider network that te-broker created. Having a meeting today to talk more about the network | 15:02 |
pabelanger | yah | 15:02 |
*** rcarrill1 has joined #zuul | 15:12 | |
*** rcarrillocruz has quit IRC | 15:12 | |
*** electrofelix has quit IRC | 15:47 | |
*** sshnaidm|off has quit IRC | 15:53 | |
clarkb | pabelanger: fwiw we totally do ironic multinode jobs upstream with virtual baremetal and overlays | 15:54 |
clarkb | there should really be only two problems with using overlays like we do: low MTUs, and if you add enough nodes throughput will likely fall off as it's all in CPU | 15:55 |
clarkb | otherwise it's just like any other network interface | 15:55 |
pabelanger | clarkb: I'd be interested in seeing that, I'd be happy to do all the leg work too. | 15:55 |
clarkb | (also if you look under the hood it's basically what the cloud is doing for you too) | 15:56 |
pabelanger | in tripleo, there is some legacy reason for this te-broker I don't really understand, but happy to discuss more with you | 15:56 |
clarkb | pabelanger: the history around that was we needed a way for jenkins to talk to baremetal test environments in the 2012/2013-ish time frame; lifeless threw that idea out, it worked, and people went with it | 15:57 |
clarkb | this was before ironic existed iirc | 15:57 |
pabelanger | yah | 15:58 |
clarkb | but then no real hardware envs with switches were ever used anyway | 15:59 |
clarkb | and everything was virtual baremetal | 15:59 |
mordred | Shrews: I updated https://review.openstack.org/#/c/582732/ since you reviewed it (removed python3-dev which is not needed) | 16:03 |
*** panda has joined #zuul | 16:05 | |
*** elyezer has joined #zuul | 16:24 | |
*** hasharAway has quit IRC | 16:32 | |
*** sshnaidm|off has joined #zuul | 16:38 | |
panda | mordred: reading the eavesdrop log, one of the main challenges of replicating the OVB stack for tripleo is that one node acts as a dhcp/bootp server for the others, so they would need to be in the same broadcast domain. So it's not just creating n nodes and connecting them together | 16:39 |
clarkb | mordred: does pbrx-build-container-images publish them somewhere too? (maybe that is what the pbrx prefix is for?) | 16:39 |
clarkb | panda: yes, that is what our overlay network role is for | 16:39 |
clarkb | panda: puts everything on the same l2 so that neutron can have similar behavior | 16:39 |
clarkb | (actually it was nova net that really needed it, but we (openstack) do it for neutron too) | 16:40 |
clarkb | mordred: why do you need libffi on alpine but not rpm/deb? | 16:41 |
clarkb | mordred: I'm guessing the rpm/deb list is/was incomplete? | 16:41 |
pabelanger | clarkb: panda: Is there an example today of how to create the overlay network in openstack-infra? I'm struggling a little to see how it would work if we kept using the ipxe image from https://git.ipxe.org/ipxe.git today | 16:43 |
clarkb | pabelanger: it's a role in zuul-jobs iirc | 16:43 |
clarkb | roles/multi-node-bridge | 16:44 |
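That role puts the nodes of a nodeset on a shared L2 segment via a VXLAN-backed bridge; pulling it into a pre-run playbook is roughly the following (a sketch, not the exact openstack-infra base job):

```yaml
# Hypothetical pre.yaml: connect every node in the nodeset to a shared
# overlay bridge using the role from openstack-infra/zuul-jobs.
- hosts: all
  roles:
    - multi-node-bridge
```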
pabelanger | that would mean the ipxe node needs to support vxlan, is that right? | 16:46 |
clarkb | pabelanger: it depends on where ipxe is running, are you nova booting a pxe image? | 16:47 |
clarkb | or is this running on top of qemu/kvm on a "proper" instance? | 16:48 |
pabelanger | clarkb: yah, I would imagine that nodepool would boot the ipxe image, so that it was part of the ansible inventory | 16:48 |
pabelanger | that is the general idea now | 16:48 |
clarkb | gotcha, so that's the disconnect: you are directly managing a nova instance as if it weren't managed by nova | 16:48 |
clarkb | I was assuming you were running on top of that | 16:49 |
pabelanger | right, in this case te-broker uploads the ipxe image into the nodepool tenant, then runs heat to boot it, with a provider network | 16:49 |
clarkb | (this is how the ironic tests work with the virtual baremetal stuff: nodepool provides "proper" instances, then we network those together and run "baremetal" on qemu/kvm) | 16:49 |
pabelanger | yah, I think in the case of tripleo, they would likely need nested-virt for how much they are doing with the baremetal node | 16:51 |
panda | pabelanger: the ipxeboot image is currently in the tenant we create the stack on | 16:52 |
*** myoung is now known as myoung|lunch | 16:52 | |
*** myoung|lunch is now known as myoung | 16:53 | |
pabelanger | right, getting the ipxeboot image into nodepool and updating it I don't think is an issue; having it attach to the vxlan is what I think is. And I think this is the main reason provider networks are used | 16:53 |
clarkb | are they provider networks or dynamically created neutron networks? | 16:54 |
pabelanger | let me find code again, I might be using wrong name | 16:54 |
clarkb | (it doesn't matter a whole lot, except that if they are preprovisioned hard coded provider networks then nodepool can probably hack around that and have a pool per network) | 16:55 |
pabelanger | http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/scripts/prepare-ovb-cloud.sh is how the tenant is bootstrapped | 16:55 |
pabelanger | http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/scripts/te-broker/create-env is how the nodes come online | 16:56 |
*** myoung is now known as myoung|lunch | 16:58 | |
panda | pabelanger: that is probably how rh1 was created, it's quite old | 16:58 |
pabelanger | ack | 16:58 |
pabelanger | could you link the newer way the provider network is being created today | 16:59 |
pabelanger | clarkb: I like the idea of per-pool, it could be a first step. I was trying to think of a way a nodeset could express the network isolation from the tenant | 17:00 |
panda | pabelanger: it's a set of heat templates | 17:00 |
clarkb | pabelanger: I think it would likely be very hacky, you'd have to always request a nodeset that could only fit into a single pool and you'd have to hardcode networks per pool | 17:01 |
clarkb | not likely to be a good long term option | 17:01 |
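The hack clarkb describes would look roughly like this in nodepool.yaml; provider, network, image, and label names are invented, and each pool is pinned to one pre-provisioned network:

```yaml
# Sketch only: a pool per hard-coded provider network, with distinct labels
# so a nodeset requesting one label can only be satisfied from one pool.
providers:
  - name: example-cloud
    driver: openstack
    cloud: example-cloud
    diskimages:
      - name: centos-7
    pools:
      - name: net-a
        networks:
          - provider-net-a
        labels:
          - name: ovb-node-net-a
            diskimage: centos-7
            min-ram: 8192
      - name: net-b
        networks:
          - provider-net-b
        labels:
          - name: ovb-node-net-b
            diskimage: centos-7
            min-ram: 8192
```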
mordred | clarkb: yeah - the rpm/deb list is incomplete - we should maybe just put the libffi-dev/libffi-devel lines back how they were before that patch | 17:02 |
openstackgerrit | Monty Taylor proposed openstack-infra/nodepool master: Build container images using pbrx https://review.openstack.org/582732 | 17:03 |
mordred | clarkb: like that ^^ | 17:03 |
pabelanger | clarkb: agree, but it could be a good POC to start with until we figure out how to make the creation of networks more dynamic in nodepool | 17:04 |
clarkb | mordred: do you need python3-dev on alpine too? I'm assuming that it's available on the python alpine image already but not necessarily on every alpine image? also linux headers surprises me | 17:06 |
mordred | clarkb: yeah - linux headers is definitely needed | 17:06 |
panda | pabelanger: it's a set of heat templates combined together, from these https://github.com/cybertron/openstack-virtual-baremetal/tree/master/environments | 17:06 |
clarkb | mordred: huh | 17:06 |
mordred | clarkb: you do need python3-dev on alpine but not on python:alpine - installing it in python:alpine doesn't seem to explicitly hurt - but it does install headers that don't match the python that'll be in the resulting image | 17:07 |
mordred | clarkb: I think I'm currently imagining that the only likely users of the alpine profile in the zuul/nodepool bindep files are python:alpine container images | 17:07 |
clarkb | oh because python is building their own python on top of alpine, hrm | 17:08 |
mordred | yah | 17:08 |
mordred | clarkb: you know though ... I *could* put in a filter in pbrx | 17:08 |
mordred | clarkb: to remove any mention of python3-dev or python-dev from the bindep list | 17:08 |
mordred | since pbrx knows what it's doing there | 17:08 |
mordred | and you're right- we do technically want python3-dev in there for alpine in general | 17:08 |
clarkb | that might be a good compromise | 17:09 |
pabelanger | panda: thanks, I'll see what that would look like using shade | 17:09 |
clarkb | pabelanger: panda fwiw I think a big reason we haven't already added support for similar in nodepool is that many clouds don't support managed networking so we've had to deal with them in another manner anyway and quota limits on those resources when you do have access to them tend to be very low | 17:10 |
clarkb | obviously if you control the cloud that is of less concern | 17:11 |
pabelanger | Yah, I'd be keen to solve it outside of nodepool, but I think in this case it means the zuul job needs access to the tenant password to create them. | 17:12 |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul master: Fix indentation of executor monitoring docs https://review.openstack.org/584448 | 17:14 |
*** myoung|lunch is now known as myoung | 17:25 | |
*** panda is now known as panda|off | 17:27 | |
pabelanger | with launch-retries for nodepool: https://zuul-ci.org/docs/nodepool/configuration.html#openstack-driver could we allow for that to be disabled, say launch-retries: -1, to have nodepool keep trying forever to bring a node online? To me, this would be kinda how nodepoolv2 works if a cloud was having issues | 17:36 |
*** myoung is now known as myoung|bbl | 17:39 | |
Shrews | pabelanger: that seems like a bad way to handle cloud issues. a node request could sit forever, never getting fulfilled, even if another cloud could fulfill it if it were resubmitted | 17:42 |
Shrews | but maybe for a user of a single cloud, you'd want that? dunno | 17:43 |
pabelanger | right, I'm personally okay with that, since that is how it worked before | 17:44 |
pabelanger | today, we are finding with rdo-cloud that NODE_FAILUREs reported by zuul have increased recheck commands | 17:44 |
pabelanger | I guess this is the main difference with launch-retries: I think old nodepool did that across all providers, nodepoolv3 will pin to a single provider for the lifetime of the request | 17:46 |
pabelanger | which, in this case, nodepool only has one cloud :( | 17:46 |
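For reference, launch-retries is a per-provider setting in the openstack driver; what's being proposed is roughly the snippet below (the -1 semantics come from the patch pushed next, not from anything released):

```yaml
# Sketch: launch-retries defaults to 3 today, after which the request fails
# with NODE_FAILURE; the proposal is to let -1 mean "keep retrying forever".
providers:
  - name: example-cloud
    driver: openstack
    cloud: example-cloud
    launch-retries: -1
```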
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool master: Allow launch-retries to indefinitely retry for openstack https://review.openstack.org/584488 | 19:02 |
*** rcarrillocruz has joined #zuul | 19:04 | |
*** rcarrill1 has quit IRC | 19:07 | |
*** rcarrill1 has joined #zuul | 19:08 | |
*** rcarrillocruz has quit IRC | 19:10 | |
*** rcarrillocruz has joined #zuul | 19:11 | |
*** acozine1 has quit IRC | 19:13 | |
*** rcarrill1 has quit IRC | 19:13 | |
*** acozine1 has joined #zuul | 19:13 | |
mordred | pabelanger, Shrews: we definitely haven't optimized things for the single-cloud case | 19:24 |
pabelanger | yah, I'd really love to say adding a 2nd cloud is the right solution, but there is a financial cost to deal with first | 19:25 |
tobiash | pabelanger: I just came across a shortcoming of the hdd governor | 19:26 |
tobiash | it checks whether the state_dir has enough disk space | 19:27 |
tobiash | however we have more options than the state_dir, like git_dir and job_dir, where the last one might be the one we want to check | 19:27 |
pabelanger | Ah, right. I assumed everything was in the state_dir | 19:28 |
pabelanger | git_dir can be outside of that | 19:29 |
tobiash | (like me ;) ) | 19:29 |
pabelanger | tobiash: what is your directory setup? | 19:29 |
tobiash | statedir=/var/lib/zuul, git_dir=/var/cache/zuul-executor/git, job_dir=/var/cache/zuul-executor/jobs | 19:30 |
tobiash | where I have a persistent volume mounted at /var/cache/zuul-executor | 19:30 |
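In zuul.conf terms, tobiash's layout is roughly the following; this is an illustrative snippet built from the paths in the message above, and min_avail_hdd is assumed to be the governor's free-space threshold (the point being that it is currently evaluated against state_dir only):

```ini
# Sketch of an executor section matching the layout described above.
[executor]
state_dir=/var/lib/zuul
git_dir=/var/cache/zuul-executor/git
job_dir=/var/cache/zuul-executor/jobs
# percentage of free disk below which the executor stops accepting jobs;
# checked against state_dir, not git_dir/job_dir
min_avail_hdd=5.0
```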
pabelanger | yah, I guess we'd need to somehow update the sensor to check all three sizes | 19:31 |
tobiash | I don't have the state_dir on that volume because I didn't want to persist the ansible modules zuul is preparing at startup (just as a precaution) | 19:31 |
tobiash | probably (and also send stats of them) | 19:31 |
pabelanger | yah | 19:31 |
tobiash | I came across that because I wanted to add the avail_hdd_pct to my monitoring and asked myself what this would be checking in my env | 19:32 |
pabelanger | yah, in the case of rdoproject and openstack, everything is under state_dir, as seen at http://grafana.openstack.org/d/T6vSHcSik/zuul-status?orgId=1 | 19:33 |
pabelanger | (openstack) | 19:33 |
pabelanger | but I do agree, we should maybe consider all 3 dirs | 19:34 |
pabelanger | clarkb: ^ grafana percentages are nice! | 19:34 |
tobiash | pabelanger: that grafana tells me that you might need more executors | 19:34 |
tobiash | queued executor jobs are wasted build nodes because the jobs queued there already have a locked fulfilled nodeset | 19:35 |
pabelanger | maybe, I haven't really looked | 19:37 |
pabelanger | our RAM and HDD sensors look okay | 19:37 |
pabelanger | maybe loadavg | 19:37 |
tobiash | probably | 19:38 |
pabelanger | tobiash: okay, I now understand what you said about wasted | 19:42 |
tobiash | :) | 19:43 |
pabelanger | yah, would be interesting to try another executor and see what happened | 19:43 |
pabelanger | yah, 20 would be the limit of load_multiplier | 19:45 |
pabelanger | so agree! | 19:45 |
corvus | pabelanger: well, that's a change | 19:45 |
pabelanger | corvus: bump it up you mean? | 19:46 |
corvus | look further back in the history and you'll see we generally haven't had many jobs waiting. things seem to have changed starting a few days ago. | 19:46 |
corvus | pabelanger: it may be worth considering that the hdd check is causing us to run fewer jobs | 19:46 |
pabelanger | yah, I can look | 19:47 |
corvus | (or maybe something else) | 19:47 |
pabelanger | we also got 100 more nodes from packet on 2018-07-17 | 19:48 |
corvus | but yes, all other things being equal, queued jobs mean more executors needed | 19:48 |
corvus | pabelanger: ah, that could be a contributing factor :) | 19:48 |
tobiash | looks like it started 7 weeks ago and is worse since a few days: http://grafana.openstack.org/d/T6vSHcSik/zuul-status?orgId=1&panelId=24&fullscreen&from=now-90d&to=now | 19:49 |
pabelanger | also looking to see when we switched to ansible 2.5, maybe something there too | 19:49 |
corvus | i guess we probably started using the hdd governor around the same time we got more nodes. yay multi-variables :) | 19:49 |
pabelanger | I think the hdd governor defaults to 5%, so looking at the graphs, I'd expect we still have a lot of disk space | 19:50 |
corvus | pabelanger: then the really recent spike is probably mostly added quota | 19:51 |
pabelanger | yah, think so too | 19:51 |
pabelanger | I'll move this to openstack-infra and see if we want to launch another | 19:52 |
corvus | tobiash: it turns out one of our "swifts" is a ceph-radosgw. i'm writing the swift upload role to work with both that and real-swift. looks like they both should work (with some minor tweaks, hopefully temporary). we should both be able to use the same role. :) | 19:54 |
tobiash | corvus: cool | 19:54 |
tobiash | corvus: will it create a bucket for every job or will this be configurable? | 19:55 |
corvus | tobiash: i expect it to be configurable. each container can hold "a lot" of files, so we'd probably just shard by change id (maybe 100 containers total) | 19:57 |
tobiash | I think in our case we'd need a bucket per tenant | 19:57 |
corvus | shouldn't be a problem | 19:57 |
tobiash | we need to put authenticated apaches in front as a reverse proxy, hopefully that works with swift as a backend | 19:59 |
*** myoung|bbl is now known as myoung | 20:01 | |
corvus | that i can't answer :) | 20:04 |
tobiash | we will just try that :) | 20:05 |
tobiash | and if it works fine, if not we have to go with a file server for now | 20:05 |
tobiash | but I think it will work | 20:05 |
corvus | it seems like it should | 20:25 |
Shrews | mordred: corvus: i think we should plan to merge remaining nodepool changes and restart in OS infra next week so we can keep an eye on the shade-to-sdk changes. Next week is the last week I will be around for quite a while (at least a week) so can't help after that. | 20:52 |
Shrews | a few outstanding reviews just need a second +2 | 20:52 |
corvus | Shrews: sounds like a plan | 20:53 |
Shrews | i think tristanC is also wanting a release | 20:54 |
corvus | we all want a release :) | 21:03 |
*** samccann has quit IRC | 21:09 | |
mordred | Shrews: ++ | 21:16 |
*** austinsun[m] has quit IRC | 21:16 | |
*** aspiers[m] has quit IRC | 21:17 | |
*** hashar has joined #zuul | 21:32 | |
*** acozine1 has quit IRC | 21:43 | |
*** acozine2 has joined #zuul | 21:49 | |
*** acozine2 has quit IRC | 21:55 | |
*** acozine1 has joined #zuul | 22:01 | |
*** acozine1 has quit IRC | 22:04 | |
*** acozine1 has joined #zuul | 22:04 | |
*** austinsun[m] has joined #zuul | 22:06 | |
*** acozine1 has quit IRC | 22:11 | |
*** acozine1 has joined #zuul | 22:11 | |
*** weshay is now known as weshay_PTO | 22:15 | |
*** acozine1 has quit IRC | 22:22 | |
*** aspiers[m] has joined #zuul | 22:28 | |
*** hashar has quit IRC | 22:29 | |
*** myoung has quit IRC | 22:29 | |
openstackgerrit | James E. Blair proposed openstack-infra/zuul-jobs master: WIP: Add upload-logs-swift role https://review.openstack.org/584541 | 23:41 |
corvus | clarkb, mordred, tobiash: ^ that's not even remotely ready. I would not have pushed it up at all if it weren't for the fact that it's the end of the week. | 23:42 |
corvus | however, i think the top 3 classes in the file are about ready. they pass unit tests. the rest should go pretty quickly. | 23:43 |
*** rlandy has quit IRC | 23:43 | |
clarkb | mordred: reviewing https://review.openstack.org/#/c/582732/4 and it appears that if you set a prefix you get prefixed and unprefixed images in the docker image listing | 23:43 |
corvus | it's based on my 'return file comments' change in order to get the unit testing infrastructure. i can move that around before it's all done. | 23:43 |
clarkb | they have the same image id so it's just logical labeling, but that sort of pollution and doubling up of labels might confuse/irritate users | 23:44 |
corvus | jhesketh: https://review.openstack.org/584541 fyi | 23:44 |
openstackgerrit | Clark Boylan proposed openstack-infra/nodepool master: Use detail listing in integration testing https://review.openstack.org/584542 | 23:50 |
clarkb | pabelanger: I left comments on your change to retry indefinitely for single cloud nodepool | 23:52 |
openstackgerrit | Merged openstack-infra/nodepool master: Build container images using pbrx https://review.openstack.org/582732 | 23:58 |