*** ltomasbo has joined #openstack-kuryr | 06:10 | |
*** opendevreview has joined #openstack-kuryr | 08:07 | |
opendevreview | Roman Dobosz proposed openstack/kuryr-kubernetes master: Add golang dependency for devstack. https://review.opendev.org/c/openstack/kuryr-kubernetes/+/794159 | 08:07 |
*** digitalsimboja has joined #openstack-kuryr | 08:55 | |
digitalsimboja | hello! | 08:56 |
digitalsimboja | good morning all | 08:56 |
maysams | hello, good morning | 08:56 |
digitalsimboja | @ltomasbo, @maysams: Please check, I am not able to ssh into the VM | 08:56 |
maysams | digitalsimboja: have you tried to access the vm today? | 08:56 |
digitalsimboja | since last night | 08:56 |
ltomasbo | digitalsimboja, good morning! | 08:56 |
digitalsimboja | I have not been able to | 08:56 |
maysams | yeah, we noticed some memory leak issue | 08:56 |
digitalsimboja | so it will be fixed? | 08:57 |
ltomasbo | digitalsimboja, yeah... I see the VM is stuck, some memory leak or something... I can try to reboot it... but we will need to unstack and stack again | 08:57 |
ltomasbo | digitalsimboja, can I restart it? | 08:57 |
digitalsimboja | sure, that's not a problem | 08:57 |
digitalsimboja | yeah please restart | 08:57 |
*** dulek has joined #openstack-kuryr | 08:58 | |
digitalsimboja | let me know the moment it's done | 08:59 |
ltomasbo | digitalsimboja, done already | 08:59 |
digitalsimboja | am in | 09:02 |
digitalsimboja | is the tmux session still '0' | 09:03 |
digitalsimboja | ? | 09:03 |
ltomasbo | digitalsimboja, join the tmux, I triggered the unstack/clean, so that you can trigger the installation as soon as it finishes | 09:03 |
ltomasbo | digitalsimboja, you can check the tmux session number with "tmux ls" | 09:03 |
digitalsimboja | ubuntu@sunday-vm:~$ tmux ls 0: 1 windows (created Wed Jun 2 08:59:38 2021) (attached) | 09:03 |
digitalsimboja | tmux a -t 0 | 09:03 |
ltomasbo | yeap | 09:03 |
digitalsimboja | joined | 09:04 |
digitalsimboja | did you ./clean.sh as well? | 09:04 |
ltomasbo | it is running at the moment | 09:04 |
ltomasbo | once it finishes you will need to trigger ./stack.sh again | 09:05 |
digitalsimboja | so yesterday I was able to deploy two pods, replicate one, and expose a service on one | 09:05 |
digitalsimboja | I retrieved the CRD | 09:05 |
digitalsimboja | what command would enable me to delete a field on the CRD | 09:05 |
digitalsimboja | ? | 09:05 |
ltomasbo | commands are the same for all k8s objects... you can do | 09:06 |
ltomasbo | kubectl edit OBJECT_TYPE OBJECT_NAME | 09:06 |
ltomasbo | so, you can do kubectl edit klb XXX | 09:06 |
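The `kubectl edit klb XXX` advice above boils down to removing a key from the object's JSON; `kubectl patch --type=merge` does the same thing non-interactively, where a `null` value deletes a field (RFC 7386 JSON Merge Patch). A minimal sketch of that semantics, using a hypothetical klb-shaped dict rather than Kuryr's real schema:

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON Merge Patch: a null (None) value deletes the key."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

# Illustrative klb-like object (not the real CRD schema).
klb = {"metadata": {"name": "demo"}, "status": {"loadbalancer": {"id": "a1"}}}

# Equivalent of: kubectl patch klb demo --type=merge -p '{"status": null}'
patched = json_merge_patch(klb, {"status": None})
print(patched)  # the whole status field is gone
```

This is the same effect as deleting the field by hand inside `kubectl edit`, just scriptable.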
digitalsimboja | Great! | 09:07 |
ltomasbo | digitalsimboja, the clean finished... feel free to start the stacking when you want | 09:11 |
digitalsimboja | running... | 09:12 |
maysams | digitalsimboja: Kubernetes documentation is always handy for figuring out the right commands https://kubernetes.io/docs/reference/kubectl/cheatsheet/#editing-resources | 09:24 |
digitalsimboja | Thanks! | 09:25 |
digitalsimboja | stack failed again | 09:26 |
digitalsimboja | Please take a look | 09:27 |
ltomasbo | digitalsimboja, what is the problem? | 09:31 |
ltomasbo | gryf, it looks like a problem with kubeadm | 09:33 |
gryf | ltomasbo, could you show me the issue? | 09:35 |
ltomasbo | gryf, http://paste.openstack.org/show/806261/ | 09:37 |
gryf | > curl: command not found | 09:37 |
gryf | how on earth that happen? | 09:37 |
ltomasbo | gryf, curl is definitely installed on the VM... | 09:38 |
gryf | yet it states that it's not a command… | 09:38 |
gryf | on which OS are you in? | 09:39 |
ltomasbo | gryf, http://paste.openstack.org/show/806262/ | 09:39 |
ltomasbo | interesting... | 09:39 |
gryf | that's strange | 09:40 |
gryf | how does your PATH look? | 09:40 |
ltomasbo | gryf, going to re-install it | 09:41 |
gryf | ltomasbo, ack. | 09:41 |
ltomasbo | gryf, now it appears there... interesting... | 09:42 |
ltomasbo | maybe something messed around with the /usr/bin or something | 09:42 |
gryf | not sure, although it should just work™. | 09:43 |
ltomasbo | yep. lets see this time | 09:46 |
ltomasbo | digitalsimboja, ^ | 09:46 |
digitalsimboja | Great! | 09:48 |
digitalsimboja | http://paste.openstack.org/show/806263/ | 09:49 |
digitalsimboja | error with etcd/passwd | 09:50 |
digitalsimboja | etc/passwd | 09:50 |
ltomasbo | digitalsimboja, probably some devstack leftovers | 09:51 |
ltomasbo | let's clean it up, reboot the VM, and try again | 09:51 |
*** digitalsimboja has quit IRC | 09:53 | |
*** digitalsimboja has joined #openstack-kuryr | 09:58 | |
*** digitalsimboja has quit IRC | 10:21 | |
*** digitalsimboja has joined #openstack-kuryr | 10:22 | |
*** digitalsimboja has quit IRC | 10:30 | |
*** digitalsimboja has joined #openstack-kuryr | 10:34 | |
ltomasbo | digitalsimboja, hi | 10:34 |
digitalsimboja | Hi | 10:34 |
ltomasbo | I created a different VM, can you join the tmux on it? | 10:34 |
ltomasbo | digitalsimboja, it already has devstack installed, so it is ready to use | 10:35 |
ltomasbo | digitalsimboja, the only difference from your previous one is that kuryr is running non-containerized, which can be handy for debugging purposes when you write code changes | 10:35 |
digitalsimboja | same ssh link? | 10:36 |
*** digitalsimboja has quit IRC | 10:38 | |
*** digitalsimboja has joined #openstack-kuryr | 10:39 | |
opendevreview | Roman Dobosz proposed openstack/kuryr-kubernetes master: Fixes for latest changes on Neutron devstack. https://review.opendev.org/c/openstack/kuryr-kubernetes/+/794200 | 10:45 |
digitalsimboja | @ltomasbo: | 11:27 |
digitalsimboja | so after deleting the status on klb | 11:28 |
digitalsimboja | Kuryr, recreated them? | 11:28 |
ltomasbo | digitalsimboja, no | 11:28 |
digitalsimboja | readded them? | 11:28 |
ltomasbo | digitalsimboja, if you just delete the status, kuryr will re-discover the previously created loadbalancer and add its information back to the status | 11:28 |
digitalsimboja | got it! | 11:29 |
ltomasbo | digitalsimboja, if you delete the loadbalancer, and then delete the status, then kuryr will recreate it (and add the new loadbalancer information to the status) | 11:29 |
digitalsimboja | wait a minute | 11:29 |
ltomasbo | digitalsimboja, try that, do "openstack loadbalancer delete --cascade default/demo" | 11:29 |
digitalsimboja | HANG ON | 11:29 |
ltomasbo | digitalsimboja, check that the loadbalancer is gone with "openstack loadbalancer list" | 11:29 |
ltomasbo | then remove the status on the CRD | 11:30 |
ltomasbo | and check that a new loadbalancer default/demo will be created | 11:30 |
ltomasbo | and the crd status information will be filled in with the information from the new loadbalancer | 11:30 |
ltomasbo | digitalsimboja, that exact problem is the one you want to fix... you want kuryr to realize that the loadbalancer is gone... and automatically recreate it, without you having to manually remove the status | 11:32 |
ltomasbo | so, basically, you need something that detects the loadbalancer is gone, and then removes the status so that kuryr recreates it | 11:32 |
ltomasbo | digitalsimboja, try now the CRD status removal | 11:32 |
digitalsimboja | Wait a second | 11:33 |
digitalsimboja | before I remove the status, let me run through the steps | 11:34 |
digitalsimboja | first | 11:34 |
digitalsimboja | I deleted the openstack side of the resource with 'openstack delete loadbalancer...' | 11:34 |
digitalsimboja | then, I checked and realised that it's no longer available on openstack | 11:34 |
digitalsimboja | but it's still available on the kubernetes api server | 11:35 |
digitalsimboja | so I need to delete the status of the loadbalancer on k8s apiserver | 11:36 |
ltomasbo | digitalsimboja, yep, the kuryrloadbalancer CRD object is only modified by Kuryr... so there is nothing removing the status there | 11:37 |
ltomasbo | digitalsimboja, this may be handy: https://www.geeksforgeeks.org/vi-editor-unix/ | 11:38 |
digitalsimboja | digitalsimboja, that exact problem is the one you want to fix... you want kuryr to realize that the loadbalancer is gone... and automatically recreate it, without you having to manually remove the status | 11:44 |
digitalsimboja | Now I UNDERSTAND | 11:44 |
digitalsimboja | So without removing the status field of the loadbalancer manually, Kuryr won't be able to detect that it's gone and recreate it? | 11:45 |
*** digitalsimboja has quit IRC | 12:13 | |
*** digitalsimboja has joined #openstack-kuryr | 12:29 | |
digitalsimboja | Hello | 12:29 |
digitalsimboja | @ltomasbo: I need the credentials to login to the stack user for the new VM | 12:33 |
digitalsimboja | Okay I joined the session and am in | 12:34 |
ltomasbo | digitalsimboja, what credentials? | 12:35 |
ltomasbo | your key is already added to the VM and you were connected before... I changed nothing | 12:35 |
*** digitalsimboja_ has joined #openstack-kuryr | 12:37 | |
digitalsimboja | I joined the session and it's good | 12:37 |
*** digitalsimboja_ has quit IRC | 13:30 | |
ltomasbo | digitalsimboja, seems there is a problem getting the images | 13:43 |
digitalsimboja | yeah | 13:43 |
ltomasbo | digitalsimboja, did you have problems downloading them/creating pods before? | 13:43 |
ltomasbo | this was working this morning | 13:43 |
digitalsimboja | I logged out, logged back in, and decided to create fresh pods; they reported an ImagePull error | 13:44 |
digitalsimboja | so i deleted initial deployments to start afresh | 13:44 |
digitalsimboja | before you joined in | 13:44 |
ltomasbo | digitalsimboja, it cannot reach quay, nor docker it seems | 13:49 |
digitalsimboja | hmmm | 13:49 |
ltomasbo | digitalsimboja, so, if you cannot download images from docker/quay... you cannot create containers... | 13:54 |
digitalsimboja | does this depend on my side? | 13:55 |
digitalsimboja | what of the image busybox? | 13:55 |
ltomasbo | try with different images yes... you can use whatever you want for the dummy pod | 13:55 |
ltomasbo | seems the dns is broken | 13:56 |
ltomasbo | digitalsimboja, ok... fixed | 13:56 |
digitalsimboja | alright | 13:57 |
ltomasbo | for some reason... /etc/resolv.conf is getting overwritten... and the current OpenStack dns where the VM is running seems to be flaky | 13:58 |
ltomasbo | digitalsimboja, all yours | 13:59 |
digitalsimboja | Thanks hugely! | 13:59 |
ltomasbo | digitalsimboja, note I created the pod on a namespace named sunday | 13:59 |
digitalsimboja | sure I know | 13:59 |
digitalsimboja | I need to expose a service on the same namespace | 14:00 |
ltomasbo | yep | 14:01 |
digitalsimboja | check again | 14:08 |
ltomasbo | digitalsimboja, you need to replace the resolv.conf with the trick I did | 14:09 |
digitalsimboja | how do I make that permanent | 14:09 |
ltomasbo | arr | 14:10 |
digitalsimboja | The nameserver keeps changing I guess? | 14:10 |
ltomasbo | I stopped the systemctl service that manages that | 14:11 |
ltomasbo | and forced it to 8.8.8.8 | 14:11 |
ltomasbo | let me know if it breaks again | 14:11 |
ltomasbo | digitalsimboja, all yours | 14:12 |
digitalsimboja | Thanks very much~ | 14:12 |
digitalsimboja | Now here is the thing: I replicated a pod and exposed a service. Now I try to delete the KuryrLB but it's not allowing it because of the replica pods | 14:29 |
digitalsimboja | How do I go about this? | 14:30 |
digitalsimboja | delete the replica pod? then try? | 14:30 |
ltomasbo | digitalsimboja, what do you mean | 14:32 |
digitalsimboja | Lets join the tmux session so I can show you | 14:32 |
ltomasbo | digitalsimboja, you have a service, pointing to a deployment, right? | 14:32 |
digitalsimboja | sure | 14:32 |
ltomasbo | digitalsimboja, and you have the kuryrlb | 14:33 |
digitalsimboja | yeah | 14:33 |
ltomasbo | and the loadbalancer | 14:33 |
ltomasbo | and you want to remove the status of the klb? | 14:33 |
ltomasbo | ahh, you wanted to delete the loadbalancer on openstack side? | 14:34 |
ltomasbo | digitalsimboja, ^ | 14:34 |
digitalsimboja | yeah | 14:34 |
ltomasbo | digitalsimboja, to remove the loadbalancer you need to first remove the members, then the listeners and pools, and finally the loadbalancer | 14:34 |
ltomasbo | but you can also use the "--cascade" flag | 14:34 |
ltomasbo | and that will remove everything for you | 14:35 |
ltomasbo | try that | 14:35 |
ltomasbo | digitalsimboja, note KuryrLB refers to the k8s CRD object; the one you wanted to delete is the OpenStack loadbalancer | 14:35 |
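The deletion order described above (members first, then pools and listeners, the loadbalancer last) is exactly what `--cascade` automates on the Octavia side. A toy sketch of that child-before-parent teardown, using an illustrative tree shape rather than the real Octavia API:

```python
# Toy model of an Octavia loadbalancer tree (shape illustrative, not the API).
lb = {
    "name": "sunday/demo",
    "listeners": [
        {"name": "l1", "pools": [
            {"name": "p1", "members": [{"name": "m1"}, {"name": "m2"}]},
        ]},
    ],
}

def cascade_delete(lb):
    """Return deletions in dependency order: children before parents."""
    deleted = []
    for listener in lb["listeners"]:
        for pool in listener["pools"]:
            for member in pool["members"]:
                deleted.append(("member", member["name"]))
            deleted.append(("pool", pool["name"]))
        deleted.append(("listener", listener["name"]))
    deleted.append(("loadbalancer", lb["name"]))
    return deleted

order = cascade_delete(lb)
# members come out before their pool, pools before their listener, the LB last
```

Without `--cascade` you would issue these deletions yourself, in this order; with it, Octavia walks the tree for you.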
digitalsimboja | yeah now I have removed the openstack LB, I need to delete the status on KuryrLB CRD | 14:36 |
ltomasbo | digitalsimboja, if you do that... you will see that a new loadbalancer will be automatically created by Kuryr and its information will be added to the KLB CRD status section | 14:37 |
digitalsimboja | wait a minute, by new, the loadbalancer_id will be different? | 14:38 |
*** opendevreview has quit IRC | 14:38 | |
ltomasbo | digitalsimboja, of course, you deleted the previous one, so it will be a new one (same service in k8s side, new loadbalancer on openstack side, with the same configuration as the previous one) | 14:39 |
digitalsimboja | Let me wrap my head around this slowly | 14:42 |
digitalsimboja | I deleted the openstack LB | 14:42 |
ltomasbo | digitalsimboja, yes, and that changes nothing on the K8s side | 14:43 |
digitalsimboja | but it got recreated by kuryr with new credentials | 14:43 |
ltomasbo | digitalsimboja, so, neither kubernetes, nor kuryr knows about it | 14:43 |
ltomasbo | digitalsimboja, you can check that if you try to curl the svc IP now, it won't work | 14:43 |
ltomasbo | digitalsimboja, not recreated, unless you remove the status on the CRD | 14:43 |
digitalsimboja | digitalsimboja, so, neither kubernetes, nor kuryr knows about it: I have a question here | 14:44 |
ltomasbo | when you remove the status on the CRD, the kuryrloadbalancer handler detects there has been a modification on the CRD and processes the on_present function. Then, as there is no longer an OpenStack loadbalancer backing it, it calls the lbaasv2 driver to create another one and writes its information to the CRD status section | 14:44 |
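That flow can be sketched in miniature. The real handler and driver live in kuryr-kubernetes (the kuryrloadbalancer handler and the lbaasv2 driver mentioned above); everything below — class names, method names, the CRD shape — is a hypothetical stand-in for the on_present idea: an empty status means the loadbalancer must be (re)created and recorded back.

```python
class FakeLBaaSDriver:
    """Stand-in for Kuryr's lbaasv2 driver: just hands out new LB records."""
    def __init__(self):
        self.counter = 0

    def ensure_loadbalancer(self, name):
        self.counter += 1
        return {"id": f"lb-{self.counter}", "name": name}


class KuryrLoadBalancerHandler:
    """Sketch of the on_present flow: fill in a missing status."""
    def __init__(self, driver):
        self._drv = driver

    def on_present(self, klb_crd):
        if "loadbalancer" not in klb_crd.get("status", {}):
            lb = self._drv.ensure_loadbalancer(klb_crd["metadata"]["name"])
            # In real Kuryr this would be a PATCH against the k8s API.
            klb_crd["status"] = {"loadbalancer": lb}


handler = KuryrLoadBalancerHandler(FakeLBaaSDriver())
crd = {"metadata": {"name": "default/demo"}, "status": {}}
handler.on_present(crd)   # status was empty -> a loadbalancer gets created
```

Clearing `crd["status"]` and calling `on_present` again would produce a new loadbalancer with a new id, mirroring the experiment in the conversation.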
digitalsimboja | so if I "openstack loadbalancer delete --cascade sunday/demo" | 14:47 |
digitalsimboja | what happens on the KuryrLB CRD side and K8s side? | 14:47 |
digitalsimboja | nothing? | 14:47 |
ltomasbo | nothing | 14:47 |
ltomasbo | that is what you need to work on | 14:47 |
digitalsimboja | perfect!!!!!!!!! | 14:48 |
ltomasbo | ensuring kuryr detects the loadbalancer is missing on the openstack side | 14:48 |
ltomasbo | and then triggers the needed actions, which will be as simple as removing the loadbalancer information from the CRD status | 14:48 |
digitalsimboja | so nothing happens until I manually change the status on the KuryrLB CRD? | 14:48 |
ltomasbo | go and check what happens when you modify that! | 14:49 |
digitalsimboja | so my task is for Kuryr to detect that the openstack resource has been deleted, then go into the CRD status side and delete the members right there, which would trigger the recreation | 14:49 |
ltomasbo | digitalsimboja, we can discuss the specifics to make it better on the next call | 14:50 |
ltomasbo | but for now you can assume it will be enough to simply delete the complete status section | 14:50 |
ltomasbo | just set it to {} | 14:50 |
digitalsimboja | yeah | 14:50 |
digitalsimboja | so I need to get the CRD.status and set it to {} | 14:51 |
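The task being scoped here — detect that the OpenStack loadbalancer is gone and reset the CRD status to `{}` so Kuryr's normal path recreates it — can be sketched as a small reconciliation pass. All names and shapes below are hypothetical; the real fix would live in the Kuryr controller and query OpenStack for the actual listing:

```python
def reconcile(klb_crds, existing_lb_ids):
    """Clear the status of any CRD whose recorded loadbalancer no longer exists.

    An empty status is what makes the on_present path recreate the LB, so
    reconciliation only has to detect the orphan and reset the field.
    """
    cleared = []
    for crd in klb_crds:
        lb = crd.get("status", {}).get("loadbalancer")
        if lb and lb["id"] not in existing_lb_ids:
            crd["status"] = {}
            cleared.append(crd["metadata"]["name"])
    return cleared


crds = [
    {"metadata": {"name": "sunday/demo"}, "status": {"loadbalancer": {"id": "gone"}}},
    {"metadata": {"name": "default/ok"}, "status": {"loadbalancer": {"id": "alive"}}},
]
cleared = reconcile(crds, existing_lb_ids={"alive"})
# only sunday/demo had a dangling id, so only its status was cleared
```

As noted later in the conversation, no handler watches the OpenStack side, so a pass like this would have to be driven by polling or a periodic task rather than an event stream.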
digitalsimboja | Just a little clarity: loadbalancer.py monitors the openstack side of things correct? | 14:52 |
maysams | digitalsimboja: in that file it's specified which resource it's watching | 14:53 |
maysams | digitalsimboja: the sync_lbaas_members method provides more details | 14:55 |
ltomasbo | digitalsimboja, yep, check what is monitoring/watching (tip: no handlers are watching the openstack side of things, there is no OpenStack API for that) | 14:55 |
digitalsimboja | Understood!! | 14:56 |
maysams | digitalsimboja: remember that if the docs don't have the info you're looking for, some improvement might be welcome | 14:56 |
digitalsimboja | definitely | 14:57 |
maysams | digitalsimboja: good, let us know if you find any that needs improvement | 14:59 |
maysams | for example more details about the CRDs | 15:00 |
maysams | maybe you can assist with those | 15:00 |
digitalsimboja | yes, sure. I doubt there is a doc link covering the CRDs specifically in detail yet? | 15:01 |
digitalsimboja | https://docs.openstack.org/kuryr-kubernetes/latest/specs/rocky/npwg_spec_support.html | 15:02 |
maysams | this is not related to the main crds | 15:03 |
digitalsimboja | yeah thats the only information I could find yet about the main CRD | 15:04 |
maysams | have you checked the architecture of kuryr? | 15:05 |
maysams | https://docs.openstack.org/kuryr-kubernetes/latest/devref/kuryr_kubernetes_design.html? | 15:06 |
digitalsimboja | Yeah, I have checked several times before now, but I have a deeper understanding right now looking at the interconnections | 15:09 |
digitalsimboja | I will take a look more intently again | 15:10 |
maysams | this one might also be handy https://docs.openstack.org/kuryr-kubernetes/latest/devref/service_support.html | 15:11 |
digitalsimboja | Great! | 15:17 |
*** digitalsimboja has quit IRC | 15:24 | |
*** digitalsimboja has joined #openstack-kuryr | 15:24 | |
*** ltomasbo has quit IRC | 16:05 | |
*** digitalsimboja has quit IRC | 16:17 | |
*** dulek has quit IRC | 17:08 | |
*** digitalsimboja has joined #openstack-kuryr | 18:44 | |
*** digitalsimboja has quit IRC | 19:00 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!