jakeyip | hi all meeting in around 20 mins, if anyone is around | 08:42 |
---|---|---|
jakeyip | dalees / mnasiadka around? | 08:47 |
*** open10k8s_ is now known as open10k8s | 08:57 | |
*** ShadowJonathan_ is now known as ShadowJonathan | 08:57 | |
*** ricolin_ is now known as ricolin | 08:57 | |
*** snbuback_ is now known as snbuback | 08:57 | |
*** mnasiadka_ is now known as mnasiadka | 08:57 | |
*** ravlew is now known as Guest5411 | 09:00 | |
jakeyip | #startmeeting magnum | 09:01 |
opendevmeet | Meeting started Wed Nov 1 09:01:24 2023 UTC and is due to finish in 60 minutes. The chair is jakeyip. Information about MeetBot at http://wiki.debian.org/MeetBot. | 09:01 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 09:01 |
opendevmeet | The meeting name has been set to 'magnum' | 09:01 |
jakeyip | Agenda: | 09:01 |
jakeyip | #link https://etherpad.opendev.org/p/magnum-weekly-meeting | 09:01 |
jakeyip | #topic Roll Call | 09:01 |
lpetrut | o/ | 09:05 |
jakeyip | hi lpetrut | 09:08 |
lpetrut | hi | 09:08 |
jakeyip | looks like there isn't anyone else here today :) is there anything you want to talk about? | 09:08 |
lpetrut | I work for Cloudbase Solutions and we've been trying out the CAPI drivers | 09:09 |
lpetrut | there's something that I wanted to bring up | 09:09 |
lpetrut | the need for a management cluster | 09:09 |
jakeyip | cool, let's start then | 09:09 |
jakeyip | #topic clusterapi | 09:09 |
jakeyip | go on | 09:10 |
lpetrut | we had a few concerns about the management cluster, not sure if it was already discussed | 09:10 |
jakeyip | what's the concern? | 09:10 |
lpetrut | for example, the need of keeping it around for the lifetime of the workload cluster | 09:10 |
lpetrut | and having to provide an existing cluster can be an inconvenience in multi-tenant environments | 09:11 |
jakeyip | just to clarify, by 'workload cluster' do you mean the magnum clusters created by capi? | 09:13 |
lpetrut | yes | 09:13 |
jakeyip | can you go more in detail how having a cluster in multi-tenant env is an issue? | 09:14 |
lpetrut | other projects tried a different approach: spinning up a cluster from scratch using kubeadm and then deploying CAPI and have it manage itself, without the need of a separate management cluster. that's something that we'd like to experiment and I was wondering if it was already considered | 09:14 |
jakeyip | by other projects, you mean? | 09:14 |
lpetrut | this one specifically wasn't public but I find their approach interesting | 09:15 |
johnthetubaguy | lpetrut: I would like to understand your worry more on that | 09:16 |
jakeyip | hi johnthetubaguy | 09:16 |
lpetrut | about the multi-tenant env, we weren't sure if it's safe for multiple tenants to use the same management cluster, would probably need one for each tenant, which would then have to be managed for the lifetime of the magnum clusters | 09:16 |
johnthetubaguy | do you mean you want each tenant cluster to have a separate management cluster? | 09:16 |
johnthetubaguy | lpetrut: I think that is where magnum's API and quota come in, that gives you a bunch of protection | 09:17 |
jakeyip | lpetrut: just to confirm you are testing the StackHPC's contributed driver that's in review ? | 09:18 |
johnthetubaguy | each cluster gets their own app creds, so there is little crossover, except calling openstack APIs | 09:18 |
johnthetubaguy | jakeyip: both drivers do the same thing, AFAIK | 09:18 |
lpetrut | yes, I'm working with Stefan Chivu (schivu), who tried out both CAPI drivers and proposed the Flatcar patches | 09:18 |
johnthetubaguy | jakeyip: sorry, my calendar dealt with the time change perfectly, my head totally didn't :) | 09:18 |
lpetrut | and we had a few ideas about the management clusters, wanted to get some feedback | 09:20 |
lpetrut | one of those ideas was the one that I mentioned: completely avoiding a management cluster by having CAPI manage itself. I know people have already tried this, was wondering if it's something safe and worth considering | 09:20 |
lpetrut | if that's not feasible, another idea was to use Magnum (e.g. a different, possibly simplified driver) to deploy the management cluster | 09:21 |
johnthetubaguy | lpetrut: but then magnum has to reach into every cluster directly to manage it? That seems worse (although it does have the creds for that) | 09:21 |
lpetrut | yes | 09:21 |
johnthetubaguy | lpetrut: FWIW, we use helm wrapped by ansible to deploy the management cluster, using the same helm we use from inside the magnum driver | 09:22 |
johnthetubaguy | lpetrut: it's interesting, I hadn't really considered that approach before now | 09:22 |
lpetrut | just curious, why would it be a bad idea for magnum to reach the managed cluster directly? | 09:23 |
johnthetubaguy | I think you could do that with the helm charts still, and "just" change the kubectl | 09:23 |
johnthetubaguy | lpetrut: I like the idea of magnum not getting broken by what users do within their clusters, and the management is separately managed outside, but it's a gut reaction, needs more thought. | 09:24 |
lpetrut | I see | 09:24 |
jakeyip | I'm afraid I don't understand how CAPI works without a management cluster | 09:25 |
johnthetubaguy | it's a trade-off of course, there is something nice about only bootstrapping from the central cluster, and the long-running management is inside each cluster | 09:25 |
jakeyip | might need some links if you have them handy? | 09:25 |
lpetrut | right, so the idea was to deploy CAPI directly against the managed cluster and have it manage itself | 09:25 |
johnthetubaguy | jakeyip: it's really about the CAPI controllers being moved inside the workload cluster after the initial bootstrap, at least we have spoken about that for the management cluster itself | 09:25 |
johnthetubaguy | lpetrut: you still need a central management cluster to do the initial bootstrap, but then it has less responsibility longer term | 09:26 |
johnthetubaguy | (in my head at least, which is probably not the same thing as reality) | 09:26 |
lpetrut | right, we'd no longer have to keep the management cluster around | 09:27 |
johnthetubaguy | well that isn't quite true right | 09:27 |
johnthetubaguy | ah, wait a second... | 09:28 |
johnthetubaguy | ah, you mean a transient cluster for each bootstrap | 09:28 |
lpetrut | yes | 09:28 |
jakeyip | johnthetubaguy: do you mean, 1. initial management cluster 2. create a workload cluster (not created in Magnum) 3. move it into this workload cluster 4. point Magnum to this cluster ? | 09:28 |
lpetrut | exactly | 09:28 |
johnthetubaguy | honestly that bit sounds like an operational headache, debugging wise, I prefer a persistent management cluster for the bootstrap, but transfer control into the cluster once it's up | 09:29 |
johnthetubaguy | ... but this goes back to what problem we are trying to solve I guess | 09:29 |
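For readers unfamiliar with the "pivot" being discussed, here is a minimal sketch of moving the Cluster API controllers from a bootstrap/management cluster into the workload cluster so it manages itself afterwards. It assumes clusterctl is installed; the kubeconfig paths are placeholders.

```python
# Sketch: pivot CAPI management of a cluster into the cluster itself.
# Paths below are placeholders, not anything Magnum defines.
import subprocess

BOOTSTRAP_KUBECONFIG = "/tmp/bootstrap.kubeconfig"   # transient bootstrap cluster
WORKLOAD_KUBECONFIG = "/tmp/workload.kubeconfig"     # cluster created by CAPI


def run(*args: str) -> None:
    """Run a CLI command, raising if it fails."""
    subprocess.run(args, check=True)


def pivot_into_workload_cluster() -> None:
    # Install the CAPI core and OpenStack infrastructure providers into the
    # workload cluster so it can receive the moved resources.
    run("clusterctl", "init",
        "--kubeconfig", WORKLOAD_KUBECONFIG,
        "--infrastructure", "openstack")
    # Move the Cluster, Machine, etc. objects from the bootstrap cluster to
    # the workload cluster; the bootstrap cluster can be torn down afterwards.
    run("clusterctl", "move",
        "--kubeconfig", BOOTSTRAP_KUBECONFIG,
        "--to-kubeconfig", WORKLOAD_KUBECONFIG)


if __name__ == "__main__":
    pivot_into_workload_cluster()
```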
lpetrut | and I think we might be able to take this even further, avoiding the initial management cluster altogether, using userdata scripts to deploy a minimal cluster using kubeadm, then deploy CAPI | 09:29 |
johnthetubaguy | I guess you want magnum to manage all k8s clusters? | 09:29 |
johnthetubaguy | lpetrut: I mean you can use k3s for that, which I think we do for our "seed" cluster today: https://github.com/stackhpc/ansible-collection-azimuth-ops/blob/main/playbooks/provision_capi_mgmt.yml | 09:30 |
lpetrut | yeah, we were hoping to avoid the need of an external management cluster | 09:30 |
johnthetubaguy | ... yeah, there is something nice about that for sure. | 09:31 |
lpetrut | right now, we were hoping to get some feedback, see if it make sense and if there's anyone interested, then we might prepare a POC | 09:31 |
johnthetubaguy | lpetrut: in my head I would love to see helm being used to manage all the resources, to keep things consistent, and keep the manifests out of the magnum code base, so it's not linked to your openstack upgrade cycle so strongly (but I would say that!) | 09:32 |
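As an illustration of the helm-centric flow described here, a sketch of installing the openstack-cluster chart from a checkout of stackhpc/capi-helm-charts against a management cluster. The release name, values file and kubeconfig path are placeholders, and real deployments set many more values than a single file.

```python
# Sketch: drive the capi-helm-charts "openstack-cluster" chart from Python,
# roughly the shape of what a helm-based driver wraps. Names are placeholders.
import subprocess

CHART_PATH = "./capi-helm-charts/charts/openstack-cluster"  # from a clone of the repo
MGMT_KUBECONFIG = "/tmp/management.kubeconfig"


def install_cluster(release: str, values_file: str) -> None:
    # One helm release per workload cluster, in its own namespace.
    subprocess.run(
        ["helm", "upgrade", "--install", release, CHART_PATH,
         "--namespace", release, "--create-namespace",
         "--values", values_file,
         "--kubeconfig", MGMT_KUBECONFIG],
        check=True,
    )


install_cluster("demo-cluster", "demo-values.yaml")
```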
lpetrut | one approach would be to extend one of the CAPI drivers and customize the bootstrap phase | 09:33 |
johnthetubaguy | I guess my main worry is that it's a lot more complicated in magnum, but hopefully the POC would prove me wrong on that | 09:33 |
jakeyip | I think it's an interesting concept | 09:33 |
johnthetubaguy | jakeyip: +1 | 09:34 |
johnthetubaguy | lpetrut: can you describe more what that would look like please? | 09:34 |
lpetrut | sure, so the idea would be to deploy a Nova instance, spin up a cluster using kubeadm or k3s, deploy CAPI on top so that it can manage itself and from then on we could use the standard CAPI driver workflow | 09:35 |
johnthetubaguy | at the moment the capi-helm driver "just" does a helm install, after having injected app creds and certs into the management cluster, I think in your case you would first wait to create a bootstrap cluster, then do all that injecting, then bring the cluster up, then wait for that to finish, then migrate into the deployed cluster, including injecting all the secrets into that, etc. | 09:35 |
lpetrut | exactly | 09:36 |
johnthetubaguy | lpetrut: FWIW, that could "wrap" the existing capi-helm driver, I think, with the correct set of util functions, there is a lot of shared code | 09:36 |
lpetrut | exactly, I'd just inherit it | 09:36 |
johnthetubaguy | now supporting both I like, let me describe... | 09:36 |
johnthetubaguy | if we get the standalone management cluster in first, from a magnum point of view that is simpler | 09:37 |
johnthetubaguy | second, I could see replacement of the shared management cluster, with a VM with k3s on for each cluster | 09:37 |
johnthetubaguy | then third, you move from VM into the main cluster, after the cluster is up, then tear down the VM | 09:38 |
johnthetubaguy | then we get feedback from operators on which proves nicer in production, possibly its both, possibly we pick a winner and deprecate the others | 09:38 |
johnthetubaguy | ... you can see a path to migrate between those | 09:38 |
johnthetubaguy | lpetrut: is that sort of what you are thinking? | 09:39 |
lpetrut | sounds good | 09:39 |
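A sketch of the per-cluster transient seed idea discussed above, assuming openstacksdk with a configured clouds.yaml. The cloud, image, flavor and network names are placeholders, and the userdata simply runs the upstream k3s installer to produce a single-node bootstrap cluster.

```python
# Sketch: boot a transient "seed" VM that self-installs k3s, which could act
# as the bootstrap CAPI cluster for one Magnum cluster. All names below are
# placeholders, not values Magnum or the drivers define.
import openstack

USERDATA = """#!/bin/bash
# Install a single-node k3s cluster to act as the bootstrap CAPI cluster.
curl -sfL https://get.k3s.io | sh -
"""


def boot_seed_vm(conn):
    """Create the seed server and wait for it to become active."""
    return conn.create_server(
        name="magnum-bootstrap-seed",
        image="ubuntu-22.04",   # placeholder image name
        flavor="m1.small",      # placeholder flavor
        network="private",      # placeholder network
        userdata=USERDATA,
        wait=True,
    )


if __name__ == "__main__":
    conn = openstack.connect(cloud="devstack")  # cloud name is a placeholder
    server = boot_seed_vm(conn)
    print(server.id)
```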
johnthetubaguy | one of the things that was said at the PTG is relevant here | 09:39 |
johnthetubaguy | I think it was jonathon from the BBC/openstack-ansible | 09:39 |
lpetrut | yes, although I was wondering if we could avoid the initial bootstrap vm altogether | 09:39 |
johnthetubaguy | magnum can help openstack people not have to understand so much of k8s | 09:39 |
johnthetubaguy | lpetrut: your idea here sure helps with that | 09:40 |
johnthetubaguy | lpetrut: sounds like magic, but I would love to see it, although I am keen we make things possible with vanilla/mainstream cluster api approaches | 09:40 |
lpetrut | before moving further, I'd like to check with the CAPI maintainers to see if there's anything wrong with CAPI managing the cluster that it runs on | 09:40 |
johnthetubaguy | lpetrut: I believe that is a supported use case | 09:41 |
lpetrut | that would be great | 09:41 |
jakeyip | lpetrut: I think StackHPC implementation is just _one_ CAPI implementation. Magnum can and should support multiple drivers | 09:41 |
jakeyip | as long as we can get maintainers haha | 09:41 |
lpetrut | thanks a lot for the feedback, we'll probably come back in a few weeks with a PoC :) | 09:42 |
johnthetubaguy | you can transfer from k3s into your created HA cluster, so it manages itself... we have a plan to do that for our shared management cluster, but have not got around to it yet (too busy doing magnum code) | 09:42 |
jakeyip | I think what's important from my POV is that all the drivers people want to implement don't clash with one another. | 09:42 |
johnthetubaguy | jakeyip: that is my main concern, diluting an already shrinking community | 09:42 |
jakeyip | about that I need to chat with you johnthetubaguy | 09:42 |
johnthetubaguy | jakeyip: sure thing | 09:42 |
jakeyip | about the hardest problem in computer science - naming :D | 09:43 |
johnthetubaguy | lol | 09:43 |
johnthetubaguy | foobar? | 09:43 |
jakeyip | everyone loves foobar :) | 09:43 |
johnthetubaguy | which name are you thinking about? | 09:43 |
jakeyip | johnthetubaguy: mainly the use of 'os' tag and config section | 09:44 |
jakeyip | if we can have 1 name for all of them, different drivers won't clash | 09:44 |
johnthetubaguy | so about the os tag, the ones magnum use don't work with nova anyways | 09:44 |
johnthetubaguy | i.e. "ubuntu" isn't a valid os_distro tag, if my memory is correct on that | 09:44 |
johnthetubaguy | in config you can always turn off any in-tree "clashing" driver anyways, but granted it's probably better not to clash out of the box | 09:45 |
jakeyip | yeah, is it possible to change them all to 'k8s_capi_helm_v1' ? so driver name, config section, os_distro tag is the same | 09:45 |
johnthetubaguy | jakeyip: I think I went for capi_helm in the config? | 09:45 |
jakeyip | yeah I want to set rulesss | 09:45 |
johnthetubaguy | jakeyip: I thought I did that all already? | 09:46 |
johnthetubaguy | I don't 100% remember though, let me check | 09:46 |
jakeyip | now is driver=k8s_capi_helm_v1, config=capi_helm, os_distro=capi-kubeadm-cloudinit | 09:46 |
johnthetubaguy | ah, right | 09:47 |
johnthetubaguy | so capi-kubeadm-cloudinit was chosen to match what is in the image | 09:47 |
johnthetubaguy | and flatcar will be different (it's not cloudinit) | 09:48 |
jakeyip | just thinking if lpetrut wants to develop something they can choose a name for driver and use that for config section and os_distro and it won't clash | 09:48 |
johnthetubaguy | it could well be configuration options in a single driver, to start with | 09:48 |
schivu | hi, I will submit the flatcar patch on your github repo soon and for the moment I used capi-kubeadm-ignition | 09:49 |
johnthetubaguy | schivu: sounds good, I think dalees was looking at flatcar too | 09:49 |
jakeyip | yeah I wasn't sure how flatcar will work with this proposal | 09:49 |
johnthetubaguy | a different image will trigger a different bootstrap driver being selected in the helm chart | 09:50 |
johnthetubaguy | at least that is the bit I know about :) there might be more? | 09:50 |
schivu | yep, mainly with CAPI the OS itself is irrelevant, what matters is which bootstrapping format the image uses | 09:50 |
johnthetubaguy | schivu: +1 | 09:51 |
johnthetubaguy | I was trying to capture that in the os-distro value I chose, and operator config can turn the in tree implementation off if they want a different out of tree one? | 09:51 |
johnthetubaguy | (i.e. that config already exists today, I believe) | 09:51 |
johnthetubaguy | FWIW, different drivers can probably use the same image, so it seems correct they share the same flags | 09:52 |
johnthetubaguy | (I wish we didn't use os_distro though!) | 09:53 |
johnthetubaguy | jakeyip: I am not sure if that helped? | 09:54 |
johnthetubaguy | what were you thinking for the config and the driver, I was trying to copy the pattern with the heat driver and the [heat] config | 09:54 |
johnthetubaguy | to be honest, I am happy with whatever on the naming of the driver and the config, happy to go with what seems normal for Magnum | 09:55 |
jakeyip | hm, ok am I right that for stackhpc, the os_distro tag in glance will be e.g. for ubuntu=capi-kubeadm-cloudinit and flatcar=capi-kubeadm-ignition (as schivu said) | 09:55 |
johnthetubaguy | I am open to ideas, that is what we seem to be going for right now | 09:56 |
johnthetubaguy | it seems semantically useful like that | 09:56 |
johnthetubaguy | (we also look for a k8s version property) | 09:56 |
johnthetubaguy | https://github.com/stackhpc/magnum-capi-helm/blob/6726c7c46d3cac44990bc66bbad7b3dd44f72c2b/magnum_capi_helm/driver.py#L492 | 09:57 |
johnthetubaguy | kube_version in the image properties is what we currently look for | 09:58 |
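To make that lookup concrete, a small openstacksdk sketch that reads the two image properties mentioned here (os_distro for driver selection, kube_version for the Kubernetes version). The cloud and image names are placeholders, and the exact property access may differ slightly from what the driver itself does.

```python
# Sketch: inspect the glance image properties referenced in the discussion.
# Cloud and image names are placeholders.
import openstack

conn = openstack.connect(cloud="devstack")  # placeholder cloud name

image = conn.image.find_image("ubuntu-jammy-kube-v1.28", ignore_missing=False)

# os_distro is a standard glance attribute; custom properties such as
# kube_version are exposed via the image's additional properties.
os_distro = image.os_distro
kube_version = image.properties.get("kube_version")

print(f"os_distro={os_distro} kube_version={kube_version}")
```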
johnthetubaguy | jakeyip: what was your preference for os_distro? | 10:00 |
jakeyip | I was under the impression the glance os_distro tag needs to fit 'os' part of driver tuple | 10:00 |
johnthetubaguy | so as I mentioned "ubuntu" is badly formatted for that tag anyways | 10:00 |
johnthetubaguy | I would rather not use os_distro at all | 10:01 |
johnthetubaguy | "ubuntu22.04" would be the correct value, for the nova spec: https://github.com/openstack/nova/blob/master/nova/virt/osinfo.py | 10:01 |
jakeyip | see https://opendev.org/openstack/magnum/src/branch/master/magnum/api/controllers/v1/cluster_template.py#L428 | 10:02 |
jakeyip | which gets used by https://opendev.org/openstack/magnum/src/branch/master/magnum/drivers/common/driver.py#L142 | 10:02 |
johnthetubaguy | yep, understood | 10:03 |
johnthetubaguy | I am tempted to register the driver as None, which might work | 10:04 |
jakeyip | when your driver declares `{"os": "capi-kubeadm-cloudinit"}`, it will only be invoked if glance os_distro tag is `capi-kubeadm-cloudinit` ? it won't load for flatcar `capi-kubeadm-ignition` ? | 10:04 |
johnthetubaguy | yeah, agreed | 10:05 |
jakeyip | I thought the decision was based on values passed to your driver | 10:05 |
johnthetubaguy | I think there will be an extra driver entry added for flatcar, that just tweaks the helm values, but I haven't seen a patch for that yet | 10:05 |
jakeyip | that's what I gathered from https://github.com/stackhpc/capi-helm-charts/blob/main/charts/openstack-cluster/values.yaml#L124 | 10:07 |
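To spell out the clash being discussed: a Magnum driver advertises a (server_type, os, coe) tuple, and the cluster template's image os_distro has to match the "os" entry for that driver to be loaded (per the cluster_template.py and driver.py links above). A hypothetical sketch follows; the class name and os value are illustrative, not the actual in-review driver.

```python
# Hypothetical sketch of how a driver's declared (server_type, os, coe) tuple
# keys driver selection off the glance os_distro value. Two drivers declaring
# the same "os" value would clash. Names here are illustrative only.
from magnum.drivers.common import driver


class MyCapiHelmDriver(driver.Driver):
    # Cluster create/update/delete methods omitted for brevity.

    @property
    def provides(self):
        # Magnum matches the cluster template image's os_distro against the
        # "os" entry here when choosing which driver handles the cluster.
        return [
            {"server_type": "vm",
             "os": "capi-kubeadm-cloudinit",
             "coe": "kubernetes"},
        ]
```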
jrosser | johnthetubaguy: regarding the earlier discussion about deploying the capi management k8s cluster - for openstack-ansible i have a POC doing that using an ansible collection, so one management cluster for * workload clusters | 10:08 |
johnthetubaguy | capi_bootstrap="cloudinit|ignition" would probably be better, but yeah, I was just trying hard not to clash with the out of tree driver | 10:08 |
johnthetubaguy | jrosser: cool, that is what we have done in here too I guess, reusing the helm charts we use inside the driver: https://github.com/stackhpc/ansible-collection-azimuth-ops/blob/main/playbooks/provision_capi_mgmt.yml and https://github.com/stackhpc/azimuth-config/tree/main/environments/capi-mgmt | 10:10 |
jakeyip | will the flatcar patch be something that reads CT label and sets osDistro=ubuntu / flatcar? is this a question for schivu ? | 10:10 |
johnthetubaguy | jrosser: the interesting thing about lpetrut's idea is that magnum could manage the management cluster(s) too, which would be a neat trick | 10:10 |
jrosser | johnthetubaguy: i used this https://github.com/vexxhost/ansible-collection-kubernetes | 10:11 |
johnthetubaguy | jrosser: ah, cool, part of the atmosphere stuff, makes good sense. I haven't looked at atmosphere (yet). | 10:12 |
jrosser | yeah, though it doesn't need any atmosphere stuff to use the collection standalone, I've used the roles directly in OSA | 10:13 |
jakeyip | jrosser: curious how do you maintain the lifecycle of the cluster deployed with ansible ? | 10:14 |
johnthetubaguy | jrosser: I don't know what kolla-ansible/kayobe are planning yet, right now we just add in the kubeconfig and kept the CD pipelines separate | 10:14 |
jrosser | i guess i would be worried about making deployment of the management cluster using magnum itself much much better than the heat driver | 10:14 |
schivu | jakeyip: the flatcar patch adds a new driver entry; the ignition driver inherits the cloudinit one and provides "capi-kubeadm-ignition" as the os-distro value within the tuple | 10:15 |
jrosser | jakeyip: i would have to get some input from mnaser about that | 10:15 |
johnthetubaguy | I wish we were working on this together at the PTG to design a common approach, that was my hope for this effort, but it hasn't worked out that way I guess :( | 10:15 |
jakeyip | schivu: thanks. in that case can you use the driver name for os_distro? which is the question I asked johnthetubaguy initially | 10:17 |
johnthetubaguy | jrosser: the key difference with the heat driver is most of the work is in cluster API, with all of these approaches. In the helm world, we try to keep the manifests in a single place, helm charts, so the test suite for the helm charts helps across all the different ways we stamp out k8s, be it via magnum, or ansible, etc. | 10:17 |
johnthetubaguy | jakeyip: sorry, I misunderstood / misread your question | 10:17 |
johnthetubaguy | jakeyip: we need the image properties to tell us cloudinit vs ignition, how that happens is mostly fine with me | 10:19 |
johnthetubaguy | having this conversation in gerrit would be my preference | 10:19 |
jakeyip | sure I will also reply to it there | 10:21 |
johnthetubaguy | I was more meaning on the flatcar I guess, it's easier when we see what the code looks like I think | 10:22 |
johnthetubaguy | there are a few ways we could do it | 10:22 |
johnthetubaguy | jakeyip: I need to discuss how much time I have left now to push this upstream, I am happy for people to run with the patches and update them, I don't want us to be a blocker for what the community wants to do here. | 10:24 |
jakeyip | yeah I guess what I wanted to do was quickly check if using os_distro this way is a "possible" or a "hard no" | 10:24 |
jakeyip | as I make sure drivers don't clash | 10:24 |
johnthetubaguy | well I think the current proposed code doesn't clash right? and you can configure any drivers to be disabled as needed if any out of tree driver changes to match? | 10:25 |
jakeyip | johnthetubaguy: I am happy to take over your patches too, I have it running now in my dev | 10:25 |
johnthetubaguy | open to change the value to something that feels better | 10:25 |
jakeyip | cool, great to have that understanding sorted | 10:26 |
johnthetubaguy | jakeyip: that would be cool, although granted that means it's harder for you to +2 them, so swings and roundabouts there :) | 10:26 |
jakeyip | johnthetubaguy: well... one step at a time :) | 10:27 |
* johnthetubaguy nods | 10:27 | |
jakeyip | I need to officially end this cos it's over time, but feel free to continue if people have questions | 10:28 |
johnthetubaguy | jakeyip: are you aware of the tooling we have for the management cluster, that reuses the helm charts? | 10:28 |
johnthetubaguy | jakeyip: +1 | 10:28 |
jakeyip | #endmeeting | 10:28 |
opendevmeet | Meeting ended Wed Nov 1 10:28:31 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 10:28 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/magnum/2023/magnum.2023-11-01-09.01.html | 10:28 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/magnum/2023/magnum.2023-11-01-09.01.txt | 10:28 |
opendevmeet | Log: https://meetings.opendev.org/meetings/magnum/2023/magnum.2023-11-01-09.01.log.html | 10:28 |
jakeyip | maybe not, link? | 10:28 |
jakeyip | I basically build mine from devstack's function you contributed | 10:29 |
johnthetubaguy | that doesn't give you an HA cluster though | 10:29 |
johnthetubaguy | great for dev though | 10:29 |
jakeyip | yeah I plan to use it to create a cluster in our undercloud and move to it | 10:30 |
johnthetubaguy | jakeyip: trying to find a good set of breadcrumbs, it's ansible based | 10:30 |
johnthetubaguy | yeah, we have CD pipelines to maintain that using the same helm charts we use from magnum | 10:30 |
johnthetubaguy | I guess this is the pipelines: https://stackhpc.github.io/azimuth-config/deployment/automation/ | 10:31 |
johnthetubaguy | this is an example config for the ansible: https://github.com/stackhpc/azimuth-config/tree/main/environments/capi-mgmt-example | 10:32 |
johnthetubaguy | and the ansible lives in here I think: https://github.com/stackhpc/ansible-collection-azimuth-ops/blob/main/playbooks/provision_capi_mgmt.yml | 10:32 |
johnthetubaguy | Matt is at a conference today, he would probably describe that better than me | 10:32 |
johnthetubaguy | essentially you give it a clouds.yaml for a project you want to run the management cluster in, and it will get that running using the helm charts | 10:33 |
johnthetubaguy | (ish) | 10:33 |
johnthetubaguy | ish = there are bits on top of the helm charts, like installation of the management cluster controllers, etc. | 10:34 |
jakeyip | does the ansible code also do the final move from the k3s to the final management cluster? | 10:35 |
johnthetubaguy | no, right now we keep both separate | 10:36 |
jakeyip | ok | 10:36 |
johnthetubaguy | so you get a seed VM to control the HA management cluster | 10:36 |
jakeyip | we are approaching this part with pure capi, not using your helm chart. is there something we will be missing with pure capi ? | 10:36 |
johnthetubaguy | we haven't had time to implement the next step | 10:36 |
johnthetubaguy | jakeyip: it depends if you want to manage all those manifests multiple times or not | 10:37 |
jakeyip | it's a lot of yaml, but it's a one time effort hopefully | 10:37 |
johnthetubaguy | well what about upgrade? | 10:37 |
johnthetubaguy | when you need to update all the addons, like the CNI, update k8s, etc | 10:37 |
johnthetubaguy | that is what the helm chart is trying to help with | 10:37 |
jakeyip | yeah haven't thought of that :P | 10:38 |
jakeyip | good point, wondering how we should approach it from user docs | 10:39 |
johnthetubaguy | so the idea is you sync in the latest version of the config, then redeploy, and the upgrade to the new helm chart and updated k8s images happens via a github pipeline, option of staging first then production, if wanted | 10:39 |
jakeyip | or... create new cluster and move to it? :D | 10:40 |
johnthetubaguy | (we have been running k8s clusters in production for almost two years with these helm charts and ansible, and getting the pipelines and monitoring alerts better, and testing, etc, has taken some time!) | 10:40 |
johnthetubaguy | jakeyip: you could do backup and restore with Velero, but cluster api does a rolling update for you quite nicely | 10:40 |
jakeyip | ok | 10:41 |
jakeyip | thanks for the tips, we are barely getting started. | 10:42 |
johnthetubaguy | (we are getting the Velero ansible scripts added, but the cluster api stuff works nicely with Velero!) | 10:42 |
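For reference, the Velero flow alluded to here is just the standard backup/restore CLI; a sketch follows, with a placeholder backup name and namespace list.

```python
# Sketch: back up and later restore the CAPI objects on a management cluster
# with Velero. Backup name and namespace list are placeholders.
import subprocess


def run(*args: str) -> None:
    subprocess.run(args, check=True)


# Back up the namespaces holding the Cluster API resources (placeholder list).
run("velero", "backup", "create", "mgmt-backup",
    "--include-namespaces", "capi-system,magnum-clusters")

# Later, on a rebuilt management cluster, restore from that backup.
run("velero", "restore", "create", "--from-backup", "mgmt-backup")
```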
jakeyip | previously I didn't have much time to do this in prod, just got a breather, that's why | 10:43 |
jakeyip | didn't have much time the last 6 months | 10:43 |
johnthetubaguy | jakeyip: it would be good to share more of this tooling as a community, obviously we are building assuming kolla-ansible / kayobe managing openstack, but so far the ansible to do the management cluster is quite separate | 10:43 |
jakeyip | yeah getting it to work is one thing but Day 2 ops is another big thing | 10:44 |
johnthetubaguy | yeah, there is lots of stuff out there to help at least | 10:44 |
johnthetubaguy | we are looking at bringing ArgoCD into the mix here, although interestingly that is probably the opposite direction to some of the discussions here, so that probably needs discussion at some point | 10:45 |
johnthetubaguy | it being optional, is probably the way forward anyways, for a bunch of reasons | 10:45 |
johnthetubaguy | ... OK, I need to run, it was good to catch up | 10:45 |
jakeyip | there may be a place for it, but the current concern is getting it streamlined so we can land something | 10:46 |
jakeyip | thanks, I need to go too. it's almost bedtime | 10:46 |
jakeyip | thanks everyone for coming I will stay till end of the hour if anyone has questions | 10:46 |
johnthetubaguy | jakeyip: yep, it's certainly working, keen it is useful for more people too! | 10:51 |
lpetrut | you were mentioning using ansible to deploy the management cluster. would that be another magnum driver? if so, what's nice about it is that it would show up as a regular magnum cluster, which magnum could manage. | 10:53 |
jakeyip | lpetrut: who is this q for ? may need to ping their nick as they may not be watching here since meeting has ended | 10:58 |
lpetrut | I think I saw johnthetubaguy and jrosser mention it | 10:59 |
opendevreview | Jake Yip proposed openstack/magnum master: Add beta property to magnum-driver-manage details https://review.opendev.org/c/openstack/magnum/+/892729 | 10:59 |
jrosser | lpetrut: personally i would like the management cluster to be part of my openstack control plane | 11:04 |
jrosser | so i am really not sure what i feel about magnum deploying that | 11:04 |
lpetrut | got it | 11:04 |
jrosser | i expect lots of deployments will have lots of opinions about how it should be, structurally | 11:05 |
jakeyip | jrosser: hm, did you mention you need airgap at the PTG? | 11:09 |
jrosser | yes | 11:09 |
jrosser | yesterday i built a docker container which has a registry with enough stuff in it to deploy the control plane k8s with no internet | 11:10 |
jakeyip | nice! | 11:11 |
jakeyip | which driver will you be testing? | 11:12 |
jrosser | jakeyip: currently i only have experience with the vexxhost driver | 11:56 |
jakeyip | jrosser: ok | 11:57 |
opendevreview | Jake Yip proposed openstack/magnum master: Add feature flag for beta drivers https://review.opendev.org/c/openstack/magnum/+/899530 | 13:53 |
opendevreview | Jake Yip proposed openstack/magnum master: Add beta property to magnum-driver-manage details https://review.opendev.org/c/openstack/magnum/+/892729 | 13:53 |
opendevreview | Jake Yip proposed openstack/magnum master: Fix magnum-driver-manage for drivers without template path. https://review.opendev.org/c/openstack/magnum/+/892728 | 13:53 |
opendevreview | Dale Smith proposed openstack/magnum master: Add beta property to magnum-driver-manage details https://review.opendev.org/c/openstack/magnum/+/892729 | 20:19 |