*** spredzy has quit IRC | 03:58 | |
*** spredzy has joined #softwarefactory | 03:59 | |
*** jpena|off is now known as jpena | 07:49 | |
*** jpena is now known as jpena|lunch | 11:28 | |
rcarrillocruz | heya folks | 11:38 |
rcarrillocruz | can i get +3 on https://softwarefactory-project.io/r/#/c/13520/1/resources/tenant-ansible.yaml | 11:38 |
rcarrillocruz | tenant change | 11:38 |
rcarrillocruz | add and remove a few untrusted projects | 11:39 |
rcarrillocruz | tristanC ^ if you are around | 11:39 |
tristanC | rcarrillocruz: yep, done | 11:51 |
rcarrillocruz | thx mate | 11:51 |
*** jpena|lunch is now known as jpena | 12:28 | |
rcarrillocruz | tristanC: can you pls review https://softwarefactory-project.io/r/#/c/13473/ ? linters jobs fail if the repo in question has rst, as doc8 is not present in nodes by default | 13:31 |
*** matburt has joined #softwarefactory | 13:39 | |
* gundalow waves to matburt | 13:39 | |
gundalow | tristanC: rcarrillocruz So following on from our discussions of single/multiple-tenants for Ansible, matburt (Ansible AWX/Tower) is interested in using Zuul for testing and gating | 13:43 |
dmsimard | matburt: ohai | 13:44 |
matburt | dmsimard: greetings! | 13:44 |
matburt | looking to set up/use zuul for ansible awx/runner... I was pointed in this direction | 13:45 |
matburt | I've tried setting up my own zuul system but it's.... not going well, and I'm pretty green on zuul configuration | 13:46 |
tristanC | matburt: Hey, welcome! | 13:46 |
matburt | thanks! | 13:46 |
matburt | Looks like I should read through some documentation on the site to see about setting up a configuration for us | 13:48 |
tristanC | matburt: sure, i guess we could add https://github.com/ansible/awx to the current 'ansible' tenant, this would be a change on this file: https://softwarefactory-project.io/cgit/config/tree/resources/tenant-ansible.yaml#n40 | 13:48 |
matburt | yep and https://github.com/ansible/ansible-runner | 13:48 |
matburt | should I go through the process of submitting a patch to get that in? | 13:49 |
gundalow | matburt: If you wanted to do that, the process is explained at the end of https://github.com/ansible/community/blob/master/group-network/roles_development_process.rst#adding-enabling-zuul | 13:49 |
matburt | excellent! | 13:49 |
gundalow | (ie what to checkout and how to raise a PR) | 13:49 |
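For context, the change being discussed adds the two repos as untrusted projects in the tenant. The real edit goes into resources/tenant-ansible.yaml (Software Factory's resources schema); the sketch below only shows the equivalent in plain Zuul tenant configuration, so the connection name and repo grouping are illustrative rather than the exact file contents.

```yaml
# Sketch only: the actual change is made in resources/tenant-ansible.yaml
# (SF resources schema); this is the equivalent plain Zuul tenant config.
- tenant:
    name: ansible
    source:
      github.com:                     # connection name is an assumption
        config-projects:
          - ansible/zuul-config       # holds the base job and pipelines
        untrusted-projects:
          - ansible/awx               # repos being added for matburt
          - ansible/ansible-runner
```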
rcarrillocruz | tristanC: i really think we should have a separate tenant | 13:49 |
gundalow | rcarrillocruz: I was about to see if you were around :) | 13:49 |
rcarrillocruz | i'm on a call | 13:49 |
gundalow | I'm leaning towards separate tenants as well | 13:49 |
gundalow | ah, OK | 13:49 |
tristanC | rcarrillocruz: yes, but the story hasn't been groomed or planned yet... | 13:50 |
matburt | I'm okay with that also, if we want to set up an `ansible-awx` tenant I can stick awx and runner in there | 13:51 |
rcarrillocruz | well, you just said to add their repo on our tenant | 13:51 |
tristanC | i don't think it's an issue to add those 2 new projects to the existing tenant until ansible-network is resplit | 13:51 |
rcarrillocruz | i'm fine for playing around | 13:51 |
rcarrillocruz | but don't want to give the impression it will be the way going forward | 13:51 |
rcarrillocruz | ok | 13:51 |
gundalow | matburt: You happy to use the existing ansible tenant, though aware that we will most likely split it out into per use-case tenants later on? | 13:52 |
rcarrillocruz | in the end, my understanding is we didn't agree on resplitting tenants, we said to rename the current one to ansible-network | 13:52 |
rcarrillocruz | and consolidate two trusted projects into one | 13:52 |
rcarrillocruz | now it's other thing, if other teams come aboard | 13:52 |
rcarrillocruz | different reqs | 13:52 |
rcarrillocruz | different jobs | 13:52 |
rcarrillocruz | different gating strategies | 13:52 |
rcarrillocruz | etc | 13:52 |
rcarrillocruz | different secrets | 13:52 |
rcarrillocruz | etc | 13:52 |
matburt | it's neither here nor there for me... what's the logic behind splitting it up? | 13:53 |
matburt | on the ansible side we're all the same group? | 13:53 |
tristanC | matburt: it's because the ansible-network want full control on the base job and the check/gate pipeline | 13:54 |
rcarrillocruz | matburt: zuul has a concept of inheritance jobs | 13:54 |
matburt | I can understand the control of the base job... but the pipeline jobs are just dictated by the projects themselves? | 13:54 |
rcarrillocruz | we have very specific needs, like most of our jobs will have a controller | 13:54 |
rcarrillocruz | and an appliance | 13:55 |
rcarrillocruz | each tenant will be able to have their own pipelines | 13:55 |
rcarrillocruz | if you want a gate one | 13:55 |
rcarrillocruz | or not | 13:55 |
matburt | gotcha | 13:55 |
matburt | well, I'd definitely like to move forward in a stable manner... so if yall are thinking about splitting then we should probably just live in our own tenant | 13:55 |
rcarrillocruz | also, the tenants hold secrets | 13:55 |
rcarrillocruz | like | 13:56 |
rcarrillocruz | my team has its own aws account | 13:56 |
rcarrillocruz | core has theirs, plus azure, etc | 13:56 |
matburt | I thought all of us on the ansible team shared an aws/gce/azure account? | 13:56 |
tristanC | rcarrillocruz: you can lock a job using a secret to a specific project too | 13:56 |
rcarrillocruz | not really | 13:56 |
rcarrillocruz | we have our own network aws account, matt didn't want to reuse the core CI one | 13:57 |
rcarrillocruz | which is ok, different accounts for different CIs i guess | 13:57 |
matburt | so could it end up being core + awx + runner and then networking has their own separate tenant or am I misunderstanding? | 13:57 |
tristanC | matburt: yes, that's the current plan, however it's not in action yet | 13:58 |
rcarrillocruz | it depends on what you talk about with matt, i haven't seen him involved in zuul, not sure what his plans are | 13:58 |
rcarrillocruz | but we will have our own tenant yes | 13:58 |
tristanC | i mean, on sf zuul side | 13:58 |
matburt | So it sounds like we put awx, runner in the current tenant and at some point in the future network splits out? | 13:59 |
tristanC | matburt: yes, it seems like there won't be much difference after the split, it will be like a copy of the ansible/zuul-config files back to ansible-network/zuul-config | 14:01 |
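To make the proposed split concrete: in plain Zuul tenant-configuration terms the end state being discussed would look roughly like the sketch below, with each tenant owning its own config project (and therefore its own base job, pipelines and secrets). Tenant, connection and repository names are illustrative.

```yaml
# Illustrative post-split layout; names are assumptions, not the real config.
- tenant:
    name: ansible                      # core + awx + runner
    source:
      github.com:
        config-projects:
          - ansible/zuul-config        # its own base job, pipelines, secrets
        untrusted-projects:
          - ansible/awx
          - ansible/ansible-runner

- tenant:
    name: ansible-network              # networking keeps full control of its base job
    source:
      github.com:
        config-projects:
          - ansible-network/zuul-config
        untrusted-projects:
          - ansible-network/chouse-test
```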
*** nijaba has quit IRC | 14:01 | |
matburt | okay excellent... I'll work on getting this submitted. I appreciate that help/clarity :) | 14:01 |
*** pabelanger has joined #softwarefactory | 14:08 | |
matburt | How is the nodepool configured? | 14:14 |
tristanC | matburt: you can find its configuration in https://softwarefactory-project.io/cgit/config/tree/nodepool | 14:14 |
tristanC | matburt: there are runC slaves that can run fast jobs on fedora or centos systems | 14:14 |
tristanC | matburt: and there are openstack instances, you can see the list of available labels here: | 14:15 |
tristanC | https://ansible.softwarefactory-project.io/zuul/labels.html | 14:15 |
rcarrillocruz | tristanC: are you kosher with https://softwarefactory-project.io/r/#/c/13473/1/roles/linters/tasks/lint_doc8.yaml | 14:17 |
matburt | gotcha... so if we need some dependencies, should we set up our own images/configuration? | 14:17 |
matburt | or make that part of this test pre? | 14:17 |
rcarrillocruz | matburt: depends on your tests | 14:18 |
rcarrillocruz | like | 14:18 |
rcarrillocruz | what target platforms you want to test on | 14:18 |
rcarrillocruz | the nodepool labels are really base images | 14:18 |
rcarrillocruz | fedora | 14:18 |
rcarrillocruz | centos | 14:18 |
rcarrillocruz | etc | 14:18 |
tristanC | matburt: yes, custom slaves can be created, either using disk-image-builder, or using the runC customize playbook here: https://softwarefactory-project.io/cgit/config/tree/nodepool/runC/_linters-packages.yaml | 14:18 |
rcarrillocruz | if you want to test on something that is not there, then another image should be built with nodepool, there are ways to test that | 14:18 |
pabelanger | I wouldn't create an image per project, we've written some tooling upstream to make it easy for projects to install missing things via bindep | 14:18 |
tristanC | rcarrillocruz: i commented on the review, i think you should remove doc8 from the linters list | 14:18 |
rcarrillocruz | s/test/build/that | 14:18 |
pabelanger | baking things into base images just creates more work in the long run; keeping them minimal and installing dependencies at runtime works much better across multiple projects and usually adds minimal time to jobs | 14:19 |
matburt | starting with runner... its deps are pretty light, but AWX is going to be a little different | 14:19 |
rcarrillocruz | usually you layer on top by installing software on base images | 14:20 |
rcarrillocruz | unless you really need to test on a base OS that is not available in nodepool | 14:20 |
pabelanger | matburt: https://docs.openstack.org/infra/bindep/readme.html for more information | 14:20 |
pabelanger | you'd create a .bindep.txt file and it will list all the dependencies needed | 14:20 |
pabelanger | for OS packages | 14:20 |
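A bindep file is a plain list of distro packages, optionally restricted by platform selectors and profiles; a minimal sketch (the package names are only examples, and the conventional filename is bindep.txt):

```
# bindep.txt — example entries only; selectors pick the right package per distro
gcc
libffi-devel [platform:rpm]
libffi-dev [platform:dpkg]
python3-devel [platform:rpm test]
```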
rcarrillocruz | tristanC: thing is, if the linters job tests rst, and i have rst, i would like it to test rst. The issue is that doc8 is not available on the linters node | 14:21 |
tristanC | rcarrillocruz: yes, that's because the job is missing an install task, but to be consistent, you should then add install tasks for all the linters | 14:21 |
pabelanger | rcarrillocruz: I'd much rather see the linters job use tox, then we can set up requirements files for each env, then bake that into jobs | 14:21 |
rcarrillocruz | i'm ok with that | 14:22 |
pabelanger | then you can do tox -edocs locally, and it also works | 14:22 |
rcarrillocruz | pabelanger: sure, but for that we need to consolidate having a tox.ini in our roles, i'm all over it | 14:22 |
matburt | awx installation may rely on deploying into docker and running smoke tests... is that something we can do? | 14:22 |
rcarrillocruz | i just need a fix for what we have now | 14:22 |
tristanC | rcarrillocruz: please submit change to the upstream review: https://review.openstack.org/530682 | 14:22 |
pabelanger | rcarrillocruz: +1, we also use cookie-cutter to copypasta the tox.ini file across multiple repos | 14:23 |
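As a sketch of the tox-based approach being suggested, assuming the upstream zuul-jobs tox job is importable in the tenant; the job name and env name are assumptions:

```yaml
# Hypothetical linters job: runs `tox -e linters`, so the same env works locally.
- job:
    name: ansible-role-linters
    parent: tox                  # the zuul-jobs tox job, assumed available here
    vars:
      tox_envlist: linters       # the env's requirements pull in doc8, yamllint, etc.

- project:
    check:
      jobs:
        - ansible-role-linters
```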
rcarrillocruz | matburt: if the awx tests are docker based, then they can run just fine on a VM, e.g. fedora | 14:23 |
matburt | eg... `docker run` is the tooling there? | 14:23 |
pabelanger | matburt: yup, you can. I've been doing a POC with molecule and docker in zuul, works well. Just a little slow because of nested things | 14:23 |
matburt | excellent | 14:24 |
rcarrillocruz | matburt: yeah, i did a POC of exactly that | 14:24 |
rcarrillocruz | an ansible role which had a tox.ini, which used molecule, which used docker | 14:24 |
rcarrillocruz | let me link you | 14:24 |
rcarrillocruz | https://github.com/ansible-network/chouse-test/pull/5 | 14:24 |
rcarrillocruz | in that repo we basically copied over a community role | 14:25 |
rcarrillocruz | and wrote a basic job | 14:25 |
rcarrillocruz | that just called tox | 14:25 |
rcarrillocruz | which installed molecule | 14:25 |
rcarrillocruz | which called docker | 14:25 |
rcarrillocruz | docker in the VM was fine | 14:25 |
matburt | nice | 14:25 |
matburt | I'll dive in on this this afternoon, yall bear with me while I figure out how to do things | 14:26 |
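For reference, the tox → molecule → docker chain described above is driven by a molecule scenario file along these lines; a minimal sketch with illustrative platform and verifier choices (the linked PR has the real files):

```yaml
# molecule.yml (illustrative): docker driver so everything runs inside one VM node
driver:
  name: docker
platforms:
  - name: instance
    image: centos:7
provisioner:
  name: ansible
verifier:
  name: testinfra
```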
rcarrillocruz | doh | 15:40 |
rcarrillocruz | is there an RDO issues ongoing? | 15:40 |
rcarrillocruz | cant jump on rh irc | 15:40 |
rcarrillocruz | tristanC: ? | 15:40 |
pabelanger | rcarrillocruz: I can look, what are you seeing | 15:41 |
rcarrillocruz | https://ansible.softwarefactory-project.io/zuul/status.html | 15:41 |
rcarrillocruz | 13min, all queued up | 15:41 |
rcarrillocruz | dib-fedora-27 | 15:41 |
rcarrillocruz | anything suspicious? | 15:54 |
rcarrillocruz | 26min and counting | 15:54 |
rcarrillocruz | sigh, can't jump on my bouncer to fire up vpn for RH server to ask rhos-ops | 15:54 |
pabelanger | rcarrillocruz: I think tenant is under capacity | 15:56 |
pabelanger | so a backlog | 15:56 |
pabelanger | rcarrillocruz: yah | 15:57 |
pabelanger | nodepool is at capacity | 15:58 |
pabelanger | openstack.exceptions.HttpException: HttpException: 403: Client Error for url: https://phx2.cloud.rdoproject.org:13774/v2.1/servers, {"forbidden": {"message": "Quota exceeded for ram: Requested 8192, but already used 393216 of 396000 ram", "code": 403}} | 15:58 |
rcarrillocruz | Weee | 15:58 |
*** jpena is now known as jpena|off | 16:11 | |
jruzicka | jpena, btw I noticed you have dlrn.tests as opposed to my projects that have tests alongside the module directory... I was wondering if that has any advantages? | 16:11 |
tristanC | rcarrillocruz: gundalow: pabelanger: how does this story sound, and when can we proceed with the split: https://tree.taiga.io/project/morucci-software-factory/us/1647 | 17:18 |
gundalow | tristanC: given step 2.1, does that mean fully separate configuration for a/a and a-n/*? | 17:29 |
tristanC | gundalow: i just updated the story with more detail | 17:29 |
tristanC | gundalow: we can't keep a-n/z-c in both project during the copy of base job and pipeline, otherwise this will cause conflict | 17:30 |
tristanC | in both tenants* | 17:30 |
tristanC | so i think it's easier if we do a clear-cut and move everything in one-shot | 17:30 |
tristanC | another solution is to move a-n/z-c content to a/z-c during the split, like that we can re-work a-n/z-c independently | 17:31 |
gundalow | tristanC: would need to confirm, though I'm fairly sure we can live with a bit of (announced) downtime as long as it's not Monday or Tuesday (alternate Tuesdays are our release day for a-n) | 17:31 |
tristanC | it should go fairly quickly, please let us know when is the best time to this | 17:32 |
tristanC | to do this* | 17:33 |
pabelanger | if zuul needs to be stopped / started then outage affects everybody | 17:34 |
pabelanger | so, think best to see what works for SF and keep to min | 17:34 |
pabelanger | but just announce the downtime | 17:34 |
pabelanger | going to get harder as we onboard more people / projects | 17:35 |
tristanC | pabelanger: there shouldn't be a zuul restart, or am i missing something? | 17:35 |
pabelanger | tristanC: I want to say I ran into issues before with zuul not unloading a project from memory | 17:36 |
pabelanger | when doing windmill testing | 17:36 |
pabelanger | not something we've really done upstream yet | 17:36 |
pabelanger | only way to fix was to stop / start zuul | 17:36 |
pabelanger | but maybe safer just to say zuul may stop / start during window | 17:37 |
tristanC | pabelanger: i never heard of that behavior, is there a bug report with more detail? | 17:37 |
pabelanger | if it doesn't creat | 17:37 |
pabelanger | great* | 17:37 |
pabelanger | tristanC: no bug report | 17:37 |
pabelanger | would need to look for discussion again | 17:37 |
tristanC | we do add and remove projects in the zuul config during sf-ci, and it doesn't seem to be an issue | 17:37 |
pabelanger | k | 17:37 |
tristanC | similarly, when we moved ansible-network tenant to ansible, zuul was not restarted | 17:38 |
pabelanger | great, just saying it might be needed, if not awesome. Better to set expectations, just in case | 17:40 |
gundalow | +1 to setting expectation | 17:44 |
gundalow | tristanC: regarding https://tree.taiga.io/project/morucci-software-factory/us/1647 a) Let's go for downtime and simple step 2.1 b) Could you please add a "we will end up with" section at the end stating that the two GH orgs will be totally separate (no shared config) c) based on matburt's requirements, will that go into the gh/ansible tenant, or its own tenant? | 18:08 |
gundalow | mattclay: no action just FYI ^ | 18:08 |
*** jpena|off has quit IRC | 18:33 | |
*** jpena|off has joined #softwarefactory | 18:33 | |
rcarrillocruz | i'm off on Friday for a week | 19:08 |
rcarrillocruz | tristanC: so it would be good to do it on Thu if possible, or tomorrow? | 19:08 |
gundalow | Works for me | 19:54 |
*** sshnaidm is now known as sshnaidm|afk | 22:39 |