*** maysams has joined #openstack-kuryr | 00:37 | |
*** spotz has quit IRC | 01:00 | |
*** spotz has joined #openstack-kuryr | 01:00 | |
*** maysams has quit IRC | 02:43 | |
*** pmannidi has quit IRC | 03:56 | |
*** pmannidi has joined #openstack-kuryr | 04:12 | |
*** pmannidi has quit IRC | 04:58 | |
*** shachar has joined #openstack-kuryr | 05:10 | |
*** pmannidi has joined #openstack-kuryr | 05:10 | |
*** snapiri has quit IRC | 05:12 | |
openstackgerrit | Danil Golov proposed openstack/kuryr-kubernetes master: Add SR-IOV pod vif driver https://review.openstack.org/512280 | 05:29 |
openstackgerrit | Danil Golov proposed openstack/kuryr-kubernetes master: Add SR-IOV binding driver to CNI https://review.openstack.org/512281 | 05:29 |
openstackgerrit | Danil Golov proposed openstack/kuryr-kubernetes master: Add HOWTO for SRIOV use case https://review.openstack.org/594125 | 05:29 |
openstackgerrit | Danil Golov proposed openstack/kuryr-kubernetes master: Introduce test case document for SRIOV functionality https://review.openstack.org/600022 | 05:29 |
*** maysams has joined #openstack-kuryr | 05:43 | |
*** maysams has quit IRC | 05:44 | |
*** s1061123_ has joined #openstack-kuryr | 05:48 | |
*** s1061123 has quit IRC | 05:48 | |
*** celebdor has joined #openstack-kuryr | 07:08 | |
*** ccamposr has joined #openstack-kuryr | 07:26 | |
celebdor | ltomasbo: did you see the proposal about domains? | 07:39 |
ltomasbo | celebdor, yes, I saw it | 07:40 |
ltomasbo | celebdor, I still think it's overkill for such a simple modification | 07:40 |
ltomasbo | celebdor, I'm going to submit the patch to octavia to at least raise the discussion | 07:41 |
ltomasbo | that SG is only used for accessing the amphora | 07:41 |
celebdor | ltomasbo: did you post the patch to gerrit? | 07:41 |
ltomasbo | celebdor, I still think making that security group belong to the users does not have any bad implication | 07:41 |
ltomasbo | I was about to do it | 07:41 |
dmellado | ltomasbo: celebdor | 07:41 |
dmellado | link? | 07:41 |
celebdor | ltomasbo: come on... It's against their religion | 07:41 |
ltomasbo | why? then they need no religion... | 07:42 |
celebdor | how can you say that it doesn't have bad implications | 07:42 |
ltomasbo | I understand you want to keep the amphora within octavia | 07:42 |
ltomasbo | but the security group? which the user can modify to allow traffic from everywhere through listeners... | 07:42 |
ltomasbo | I don't understand why you can't just give the SG to the users to even provide more fine granularity | 07:43 |
dmellado | https://cdn.wegow.com/media/artist-media/bad-religion/bad-religion-1514392981.-1x2560.jpg | 07:43 |
celebdor | the core tenet of the Octavia religion is "every non octavia resource except the VIP shall not belong to the tenant creating the LB" | 07:43 |
ltomasbo | which will only limit the access to the amphora, rather than increase it | 07:43 |
ltomasbo | celebdor, religion needs to be updated over time XD | 07:43 |
celebdor | dmellado: never heard any song of that group | 07:43 |
celebdor | ltomasbo: you protestant | 07:43 |
ltomasbo | lol | 07:43 |
dmellado | celebdor: you just listen to bertin osborne | 07:44 |
dmellado | xD | 07:44 |
celebdor | dmellado: he sings? | 07:44 |
celebdor | I thought he only cooks | 07:44 |
dmellado | he used to, terribly | 07:44 |
celebdor | and raises his arm | 07:44 |
ltomasbo | celebdor, maybe I will end up on the fire... xD | 07:44 |
celebdor | ltomasbo: If I were you I'd be careful around Octavia | 07:44 |
celebdor | nobody expects the Octavia inquisition | 07:45 |
dmellado | kinda david hasselhoff | 07:45 |
dmellado | celebdor: now seriously | 07:45 |
dmellado | what's that proposal? | 07:45 |
celebdor | dmellado: the proposal Luis makes? | 07:45 |
celebdor | Or the one I sent by email last night? | 07:46 |
dmellado | oh, so you sent an email | 07:47 |
dmellado | openstack-dev? | 07:47 |
celebdor | dmellado: no | 07:48 |
celebdor | internal | 07:48 |
celebdor | dmellado: I just forwarded it to you | 07:48 |
*** aperevalov has quit IRC | 07:49 | |
dmellado | celebdor: I see, you left me out | 07:50 |
dmellado | you bad friend | 07:50 |
dmellado | xD | 07:50 |
dmellado | in any case I even appreciate it as I've too many open fronts | 07:50 |
dmellado | xD | 07:50 |
dmellado | thanks celebdor | 07:51 |
celebdor | dmellado: exactly | 07:51 |
celebdor | you NetworkPolicy | 07:51 |
*** aperevalov has joined #openstack-kuryr | 07:51 | |
dmellado | + retiring fuxi, hopefully today | 07:51 |
celebdor | there's enough people distracted by the darned Octavia namespace isolation woes | 07:51 |
dmellado | poor fuxi | 07:51 |
dmellado | xD | 07:51 |
ltomasbo | xD | 07:52 |
celebdor | ltomasbo: and by the way, I shall not install ps | 07:52 |
celebdor | on the containers | 07:52 |
ltomasbo | celebdor, btw, do you know of any issues watching CRDs? | 07:53 |
celebdor | ltomasbo: like? | 07:53 |
celebdor | ltomasbo: hey, do you have some environment I can connect to? | 07:53 |
celebdor | I want to check something | 07:53 |
ltomasbo | celebdor, openshift-ansible? | 07:54 |
ltomasbo | yep, I have one | 07:54 |
celebdor | or devstack | 07:54 |
celebdor | I don't really care | 07:54 |
celebdor | as long as cni is daemonized | 07:54 |
celebdor | and containerized | 07:54 |
ltomasbo | I don't have devstack at the moment | 07:54 |
ltomasbo | I always do daemonized+containerized | 07:54 |
dmellado | celebdor: btw, you can reclaim back your bm, I'm fine now with one | 07:55 |
ltomasbo | celebdor, perhaps even your key is already on the server | 07:55 |
dmellado | no longer need the two BM | 07:55 |
ltomasbo | celebdor, stack@38.145.32.64 | 07:55 |
ltomasbo | I have a tmux there, with a prompt on the master node | 07:55 |
celebdor | dmellado: ok | 07:55 |
ltomasbo | celebdor, ^^ | 07:55 |
dmellado | celebdor: 'just don't mistake the machine' | 07:56 |
celebdor | ltomasbo: permission denied | 07:56 |
dmellado | or I swear I'd go and hit you with hard fuets | 07:56 |
dmellado | xD | 07:56 |
ltomasbo | ok, key? | 07:56 |
dmellado | ltomasbo: not okey | 07:56 |
dmellado | xD | 07:56 |
ltomasbo | man, I don't need to check the calendar to know it is Friday... | 07:56 |
ltomasbo | dmellado, ^^ | 07:56 |
dmellado | ltomasbo: that's the idea, saves you time | 07:57 |
ltomasbo | celebdor, your keys are https://github.com/celebdor.keys? | 07:57 |
celebdor | ltomasbo: https://gist.github.com/celebdor/efb2e66729bd04e6328f374c915dfcd1 | 07:57 |
ltomasbo | ok | 07:58 |
ltomasbo | celebdor, try now | 07:58 |
ltomasbo | dmellado, so nice octavia pep8: Your code has been rated at 10.00/10 | 07:59 |
ltomasbo | dmellado, I guess octavia folks are not going to think the same... xD | 07:59 |
dmellado | LOL | 07:59 |
celebdor | ltomasbo: I can guarantee that | 07:59 |
dmellado | maybe we can implement something like that on kuryr | 07:59 |
dmellado | Your code has been rated at b*****t | 07:59 |
celebdor | dmellado: for what? | 07:59 |
dmellado | xD | 07:59 |
ltomasbo | dmellado, that is good to cheer up people... xD | 08:00 |
dmellado | xD | 08:00 |
ltomasbo | dmellado, celebdor: https://review.openstack.org/602564 | 08:01 |
* celebdor grabs popcorn | 08:02 | |
ltomasbo | xD | 08:02 |
ltomasbo | michael has been nice so far (until now...) | 08:03 |
ltomasbo | probably I will end up on the black list now... | 08:03 |
dmellado | heh | 08:03 |
dmellado | ltomasbo: I bet you'll have some time until they fly back from PTG | 08:03 |
dmellado | so let's wait for weekend/mon | 08:03 |
dmellado | xD | 08:03 |
ltomasbo | so, how many -2 should I expect? | 08:04 |
dmellado | ltomasbo: how well do you get along with carlos ? | 08:04 |
ltomasbo | celebdor, man, you added yourself to the review to not miss the fun... right? | 08:04 |
ltomasbo | dmellado, I've never met him in person... | 08:05 |
ltomasbo | probably not going to happen anytime soon... XD | 08:05 |
dmellado | ltomasbo: I added myself and a few folks to the party | 08:05 |
celebdor | ltomasbo: that was me grabbing popcorn | 08:06 |
ltomasbo | xD | 08:06 |
dmellado | celebdor: get a few for me | 08:06 |
dmellado | I'm looking forward to seeing this | 08:06 |
dmellado | xD | 08:06 |
celebdor | insta -2 | 08:06 |
ltomasbo | celebdor, does the commit message look convincing? | 08:07 |
dmellado | that kinda reminds me of the terrible over-night self-approval that once happened to me | 08:07 |
celebdor | ltomasbo: they may even make a zuul job that fails any patch coming from you | 08:07 |
celebdor | xD | 08:07 |
dmellado | and it was insta-revert | 08:07 |
dmellado | when I woke up | 08:07 |
dmellado | do you recall that celebdor? | 08:07 |
dmellado | xD | 08:07 |
dmellado | I mean ltomasbo | 08:07 |
celebdor | dmellado: I don't | 08:07 |
ltomasbo | celebdor, it's such a simple patch, with so few implications... they should accept it, with +2s directly... | 08:07 |
celebdor | ltomasbo: I don't know, when I read that commit message | 08:08 |
dmellado | ltomasbo does, IIRC he was with me and was a witness to my rage | 08:08 |
dmellado | xD | 08:08 |
celebdor | all I see is | 08:08 |
celebdor | I disapprove of your core beliefs | 08:08 |
celebdor | xD | 08:08 |
ltomasbo | dmellado, I remember that... | 08:08 |
celebdor | dmellado: I don't even know what you are talking about | 08:08 |
ltomasbo | celebdor, lol | 08:08 |
ltomasbo | celebdor, they already have VIP on the tenant, so, VIP+SG sounds even better... | 08:09 |
celebdor | ltomasbo: from our perspective it is well explained | 08:09 |
ltomasbo | celebdor, should I open the backports to rocky and queens already? xD | 08:09 |
dmellado | ltomasbo: lol | 08:12 |
dmellado | wait until the first wave of -2 | 08:12 |
dmellado | xD | 08:12 |
*** garyloug has joined #openstack-kuryr | 08:13 | |
celebdor | ltomasbo: oh, that would be fun | 08:13 |
celebdor | ltomasbo: dmellado our cni_ds_init is wrong | 08:14 |
celebdor | sorry I didn't notice until now | 08:14 |
dmellado | celebdor: what's going on over there? | 08:14 |
ltomasbo | what? | 08:15 |
ltomasbo | celebdor, why is it wrong? | 08:15 |
celebdor | ltomasbo: in the end it shouldn't execute | 08:16 |
celebdor | sorry, spawn the daemon or sleep | 08:16 |
celebdor | it should exec them | 08:16 |
ltomasbo | sorry | 08:16 |
dmellado | https://github.com/openstack/kuryr-kubernetes/blob/master/cni_ds_init#L46-L50 | 08:16 |
ltomasbo | all yours | 08:16 |
dmellado | you mean this? | 08:16 |
*** pmannidi has quit IRC | 08:17 | |
celebdor | dmellado: yes, that's exactly it | 08:17 |
ltomasbo | celebdor, it is being executed, right? | 08:17 |
ltomasbo | I see it is executed, and in turn it creates the watcher, server and health workers | 08:17 |
ltomasbo | isn't it? | 08:18 |
celebdor | no | 08:18 |
celebdor | it is not executing it | 08:18 |
celebdor | it is creating a new process that then executes them | 08:18 |
celebdor | line 47 and 49 | 08:18 |
celebdor | of the link dmellado sent | 08:18 |
celebdor | it should read | 08:19 |
ltomasbo | ahh | 08:19 |
ltomasbo | ok | 08:19 |
ltomasbo | it should be docker exec ... | 08:19 |
celebdor | exec kuryr-daemon --config-file /etc/kuryr/kuryr.conf | 08:19 |
celebdor | no, no. without docker | 08:19 |
celebdor | just exec | 08:19 |
ltomasbo | yep, sorry | 08:19 |
celebdor | so when bash reaches the end, proc 1 will be kuryr-daemon | 08:19 |
celebdor | not a bash process that has nothing else to do | 08:19 |
ltomasbo | celebdor, what implications does that have? | 08:20 |
ltomasbo | besides the 2 lines of kuryr-daemon on ps -ef | 08:20 |
celebdor | ltomasbo: container/pod death is monitored on the process that is launched | 08:21 |
celebdor | by kubelet | 08:21 |
celebdor | not the others | 08:21 |
celebdor | that's one | 08:21 |
ltomasbo | ohh, ok | 08:22 |
ltomasbo | celebdor, how did you find that out? really tricky to find... | 08:22 |
celebdor | ltomasbo: I was really surprised to see cni_ds_init when I checked what was the main process of the kuryr-cni container of the kuryr-cni pod | 08:23 |
ltomasbo | ok | 08:23 |
ltomasbo | celebdor, seems easy to fix and backport though... | 08:24 |
ltomasbo | great you found it! | 08:25 |
dmellado | yeah, seems like a quick fix | 08:26 |
celebdor | I mean, it works without | 08:26 |
dmellado | but it'd be better to have it that way | 08:26 |
celebdor | it's just not as correct to keep a shell script as the main process | 08:26 |
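[Editor's note: a minimal sketch of the fix being discussed, based on the cni_ds_init lines linked above; the CNI_DAEMON guard is an assumption inferred from the daemonized/non-daemonized split mentioned in the chat.]

    # Tail of cni_ds_init: exec replaces the shell, so PID 1 of the
    # kuryr-cni container becomes the daemon (or sleep) itself and
    # kubelet monitors the right process.
    if [ "$CNI_DAEMON" == "True" ]; then
        exec kuryr-daemon --config-file /etc/kuryr/kuryr.conf
    else
        exec sleep infinity
    fi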
*** gkadam has joined #openstack-kuryr | 08:29 | |
*** pcaruana has joined #openstack-kuryr | 08:33 | |
celebdor | btw ltomasbo | 08:36 |
celebdor | maybe I misinterpreted that | 08:36 |
celebdor | but I think that when you delete an openstack domain | 08:36 |
celebdor | all its resources are removed | 08:37 |
celebdor | just like kubernetes namespaces | 08:37 |
openstackgerrit | Alexey Perevalov proposed openstack/kuryr-kubernetes master: Spec for vhost-user port type https://review.openstack.org/577049 | 08:40 |
ltomasbo | celebdor, really? | 08:46 |
ltomasbo | I've never used openstack domains... | 08:46 |
openstackgerrit | Antoni Segura Puimedon proposed openstack/kuryr-kubernetes master: cni_ds_init: exec into the main process https://review.openstack.org/602573 | 08:46 |
celebdor | ltomasbo: I saw it in the documentation | 08:46 |
celebdor | I find it hard to believe though | 08:47 |
celebdor | xD | 08:47 |
ltomasbo | celebdor, yep, otherwise we should create one in openshift-ansible, and then instead of removing the stack, just remove the domain! xD | 08:48 |
celebdor | exactly | 08:48 |
celebdor | ltomasbo: mirantis claimed that on 2016 https://www.mirantis.com/blog/mirantis-training-blog-openstack-keystone-domains/ | 08:49 |
celebdor | just before the summary | 08:49 |
ltomasbo | I was on the same page... | 08:49 |
celebdor | :P | 08:50 |
ltomasbo | though it seems just for projects, tenants, roles, ... | 08:50 |
ltomasbo | so, I assume what it deletes is that, not the servers, volumes, ... | 08:51 |
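[Editor's note: for reference, the CLI flow for removing a keystone domain; both commands exist in python-openstackclient (a domain must be disabled before deletion), and per the conclusion above this removes identity resources such as projects and roles, not servers or volumes.]

    # Disable, then delete, a keystone domain.
    openstack domain set --disable mydomain
    openstack domain delete mydomain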
celebdor | ltomasbo: in that environment you shared with me | 08:51 |
celebdor | you use the upstream cni container, right? | 08:51 |
ltomasbo | yep | 08:51 |
celebdor | ltomasbo: wanna try kuryr/cni:py3-exec | 08:51 |
ltomasbo | sure | 08:51 |
ltomasbo | should I just change it? | 08:51 |
celebdor | yeah | 08:52 |
celebdor | it is crazy how much bigger the py3 image is than the normal cni one | 08:52 |
celebdor | kuryr/cni latest 12c0c005a61d 9 days ago 688 MB | 08:52 |
celebdor | kuryr/cni py3-exec 3576bbc8f415 42 seconds ago 1.02 GB | 08:52 |
ltomasbo | ufff | 08:53 |
ltomasbo | celebdor, btw, is there a related kuryr/controller:py3 that I should use? | 08:53 |
celebdor | ltomasbo: you don't need to :-) | 08:53 |
celebdor | I can build one if you want, though | 08:53 |
ltomasbo | well, if one is with old notation, and the other one with new... | 08:54 |
celebdor | ok | 08:54 |
celebdor | building | 08:54 |
ltomasbo | I already changed cni... | 08:54 |
celebdor | so many python deps | 08:55 |
celebdor | it's terrible | 08:55 |
celebdor | ltomasbo: I just pushed kuryr/controller:py3 | 08:57 |
ltomasbo | ok | 08:59 |
celebdor | now that's correct | 09:00 |
celebdor | I'm not putting ps | 09:00 |
ltomasbo | hah, no ps | 09:00 |
ltomasbo | xD | 09:00 |
celebdor | xD | 09:00 |
celebdor | that's unnecessary | 09:00 |
ltomasbo | still 1.02 GB | 09:00 |
ltomasbo | let me change kuryr-controller | 09:00 |
dmellado | folks https://review.openstack.org/#/c/602574 | 09:01 |
dmellado | expect | 09:01 |
dmellado | several more patches on this | 09:01 |
dmellado | as I'll have to backport this | 09:01 |
ltomasbo | why backport? | 09:02 |
ltomasbo | dmellado, ^^ | 09:02 |
dmellado | because fuxi was also on stable branches | 09:03 |
dmellado | so it's not enough with remove it on master | 09:03 |
ltomasbo | ahh, ok | 09:03 |
ltomasbo | I thought it was enough to remove it from now on... | 09:04 |
dmellado | seems like it isn't | 09:04 |
ltomasbo | ok! I had no idea about that... | 09:05 |
ltomasbo | celebdor, umm, I wonder where that came from... | 09:08 |
ltomasbo | celebdor, seems it times out due to the error on extracting the pod annotation | 09:11 |
danil | dmellado, irenab, dulek, ltomasbo: hello, as usually I ask you to review sriov patches please : https://review.openstack.org/#/c/512280/41 | 09:20 |
celebdor | ltomasbo: what times out? | 09:20 |
dmellado | danil: heh, sure! | 09:20 |
dmellado | under our radar, thanks for the reminder | 09:21 |
dmellado | ;) | 09:21 |
danil | sure, thanks | 09:21 |
ltomasbo | celebdor, there is an error when extracting the pod annotation on the cni | 09:21 |
ltomasbo | celebdor, so the watcher doesn't write port (active) information into the registry | 09:21 |
ltomasbo | celebdor, and the cni-server times out waiting for the port to be active | 09:22 |
celebdor | danil: I'm not sure I got the _reduce_pod_sriov_amount | 09:22 |
celebdor | ltomasbo: interesting | 09:22 |
ltomasbo | celebdor, though I think it is not related to py3... | 09:23 |
celebdor | ltomasbo: check the annotation | 09:23 |
celebdor | does it look OVOed? | 09:23 |
ltomasbo | some timing issue. This is related to the ovo... | 09:23 |
ltomasbo | celebdor, let me see... | 09:23 |
celebdor | it does | 09:23 |
ltomasbo | yep, it looks good, and it has the versioned_object.namespace | 09:24 |
celebdor | yup | 09:25 |
aperevalov | celebdor: I think every reviewer will ask a question regarding _reduce_pod_sriov_amount ) we need to do something with it ;) | 09:26 |
celebdor | :P | 09:26 |
ltomasbo | aperevalov, xD, yep, I didn't understand that either | 09:26 |
celebdor | so the logic in my mind is | 09:26 |
celebdor | you get a request | 09:26 |
celebdor | from nwpg | 09:27 |
celebdor | then you check how many available you have | 09:27 |
celebdor | choose one | 09:27 |
celebdor | and annotate | 09:27 |
celebdor | a vif object | 09:27 |
celebdor | ltomasbo: I wonder why that specific one did not work | 09:28 |
ltomasbo | celebdor, yep, some race or something | 09:28 |
ltomasbo | celebdor, etcd seems to be a bit slow sometimes... | 09:29 |
ltomasbo | perhaps a try/catch there will just help, and retry... | 09:29 |
ltomasbo | celebdor, perhaps the problem is on the kubernetes side | 09:31 |
ltomasbo | celebdor, pod got annotated, we received the event, but when trying to get the annotation it was not yet updated... | 09:31 |
aperevalov | the design is the following: in the NAD definition we can have a list of networks, for example 2. SriovDP doesn't know about NADs, it looks only into the "resources" section, where we as a user can specify in this case only 1 VF. But in the NAD we can have both subnets with direct ports. | 09:31 |
ltomasbo | celebdor, but not sure why the retries are not helping... | 09:31 |
celebdor | ltomasbo: that sounds fishy | 09:32 |
celebdor | can you make a bug report on launchpad | 09:32 |
celebdor | and collect the info there | 09:32 |
celebdor | ? | 09:32 |
ltomasbo | sure! | 09:32 |
ltomasbo | I think dulek will now... | 09:32 |
ltomasbo | I'll file the bug right away | 09:33 |
celebdor | ltomasbo: s/now/know/ ? | 09:35 |
celebdor | aperevalov: DP can only let you specify one? | 09:35 |
celebdor | how's that? | 09:35 |
ltomasbo | celebdor, dulek: https://bugs.launchpad.net/kuryr-kubernetes/+bug/1792549 | 09:42 |
openstack | Launchpad bug 1792549 in kuryr-kubernetes "CNI fails on KeyError: versioned_object.namespace" [Undecided,New] | 09:42 |
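[Editor's note: an illustrative Python sketch of where the reported KeyError surfaces when the CNI side reads the pod's kuryr annotation; extract_vif is a simplified stand-in, not the actual kuryr-kubernetes helper, and the annotation key is an assumption based on kuryr's conventions.]

    import json

    K8S_ANNOTATION_VIF = 'openstack.org/kuryr-vif'

    def extract_vif(pod):
        # Deserialize the oslo.versionedobjects primitive that the
        # controller stored in the pod annotation.
        primitive = json.loads(
            pod['metadata']['annotations'][K8S_ANNOTATION_VIF])
        # The failure mode from the bug: a stale (pre-OVO or not yet
        # updated) annotation lacks the envelope keys, so this raises
        # KeyError: 'versioned_object.namespace'.
        return (primitive['versioned_object.namespace'],
                primitive['versioned_object.name'])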
celebdor | ltomasbo: thanks | 09:42 |
celebdor | brb | 09:42 |
celebdor | bak | 09:58 |
ltomasbo | celebdor, octavia patch got +1 from Zuul... xD | 10:02 |
celebdor | ltomasbo: :P | 10:02 |
celebdor | you are evil | 10:02 |
celebdor | xD | 10:02 |
celebdor | it would be funny to somehow do a ninja merge | 10:03 |
celebdor | and wait and see how long it would take them to figure it out | 10:03 |
celebdor | xD | 10:03 |
ltomasbo | xD | 10:03 |
ltomasbo | never... | 10:03 |
aperevalov | celebdor: yes, it can let you specify one VF in this case | 10:04 |
aperevalov | CRD doesn't know about resources and vice versa in kubernetes | 10:05 |
ltomasbo | celebdor, the cni issue seems to happen when there is another action over another pod in the same node... | 10:06 |
ltomasbo | interesting... | 10:06 |
celebdor | aperevalov: so how is it handled if you need two VFs on different attachments? | 10:06 |
celebdor | ltomasbo: cni daemon race? | 10:07 |
ltomasbo | seems so | 10:07 |
celebdor | with the multiprocessing | 10:07 |
celebdor | please, add this info and why you think that is to the bug | 10:07 |
celebdor | ltomasbo: we never encountered it with python2, did we? | 10:07 |
aperevalov | celebdor: what are different attachments here, other pods? | 10:08 |
celebdor | no, no | 10:08 |
celebdor | the same pod attached to different physnets | 10:08 |
ltomasbo | celebdor, that I cannot say, I think we did | 10:08 |
ltomasbo | celebdor, but only since we changed the annotations | 10:08 |
celebdor | ltomasbo: what makes you think so? | 10:08 |
ltomasbo | celebdor, well, since we changed to ovo | 10:08 |
ltomasbo | problem is when getting the object from the annotation | 10:09 |
ltomasbo | that was not being executed before | 10:09 |
ltomasbo | before we simply processed the annotation, which, as we showed, was there... | 10:09 |
aperevalov | celebdor: in this case we will request 2 VFs in resources/requests/intel.com/sriov. The first one will be attached to the first subnet in the NAD list, and so on... | 10:11 |
celebdor | aperevalov: and from the device plugin side, what will this look like? resource request | 10:11 |
celebdor | ltomasbo: ok | 10:12 |
aperevalov | kube-scheduler reads resources section: resources: requests: cpu: "1" memory: "512Mi" intel.com/sriov: '2' limits: cpu: "1" memory: "512Mi" intel.com/sriov: '2'. | 10:13 |
celebdor | I thought you said only one can be requested | 10:15 |
celebdor | I must have misunderstood | 10:15 |
aperevalov | then it will find the appropriate DP, registered to handle intel.com/sriov. And then kubernetes will call DP->Allocate if the DP reported a sufficient amount of resources in the ListAndWatch callback beforehand. | 10:15 |
aperevalov | celebdor: no, that was an example of an incorrectly filled resource request ) so that function _reduce_pod_sriov_amount is for a sanity check | 10:17 |
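[Editor's note: aperevalov's flattened snippet above, restored as an illustrative pod spec fragment; it combines the NAD annotation discussed later in the log with the matching VF request, and all names are examples.]

    apiVersion: v1
    kind: Pod
    metadata:
      name: sriov-example
      annotations:
        k8s.v1.cni.cncf.io/networks: sriov-net1,sriov-net2
    spec:
      containers:
      - name: app
        image: busybox
        resources:
          requests:
            cpu: "1"
            memory: "512Mi"
            intel.com/sriov: '2'    # must cover the networks listed above
          limits:
            cpu: "1"
            memory: "512Mi"
            intel.com/sriov: '2'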
*** drybjed27 has joined #openstack-kuryr | 10:20 | |
drybjed27 | Allah is doing | 10:20 |
*** [rest of drybjed27's spam flood trimmed] | 10:21 | |
celebdor | drybjed27: thank you for your message | 10:23 |
celebdor | but in the future, for big pastes in this channel | 10:23 |
celebdor | please, paste it all on paste.openstack.org | 10:23 |
ltomasbo | lol | 10:23 |
celebdor | and then send us the link | 10:23 |
*** drybjed27 has quit IRC | 10:24 | |
ltomasbo | celebdor, ohh, even more weird... | 10:28 |
ltomasbo | now I see the same messages... | 10:28 |
ltomasbo | but the pod went to running... | 10:28 |
ltomasbo | so, perhaps that was not even the error... | 10:28 |
celebdor | ltomasbo: retry? | 10:29 |
ltomasbo | ahh, yep, that is the retry from the previous one... | 10:29 |
ltomasbo | ok ok | 10:29 |
ltomasbo | celebdor, aperevalov: what I don't quite get about the SRIOV patch is the following | 10:36 |
ltomasbo | celebdor, aperevalov: _get_sriov_num_vf seems to be returning the amount of requested vfs | 10:37 |
ltomasbo | not the available ones for pod creation | 10:37 |
ltomasbo | what am I missing? | 10:37 |
ltomasbo | and same with _reduce_pod_sriov_amount... | 10:39 |
aperevalov | _get_sriov_num_vf is necessary to retrieve all requested VFs for all containers in the pod specification | 10:41 |
ltomasbo | yes, that is what I thought | 10:43 |
ltomasbo | aperevalov, but the description of the function says: | 10:43 |
ltomasbo | """Returns a number of avaliable vfs for current pod creation""" | 10:43 |
ltomasbo | it should be then: returns the number of requested vfs for current pod creation | 10:44 |
ltomasbo | and then also the check at request_vif is wrong | 10:44 |
ltomasbo | if not amount, it means it is not requesting any, so just returning should be fine | 10:44 |
*** xekz has joined #openstack-kuryr | 10:45 | |
xekz | Allah is doing | 10:45 |
ltomasbo | but you should check if the requested amount is larger than the total amount (which I guess is the number of items on physnet_mapping) | 10:45 |
*** [rest of xekz's spam flood trimmed] | 10:45 | |
*** xekz has quit IRC | 10:45 | |
ltomasbo | celebdor, ^^ seems they don't know how to use pastebin... | 10:46 |
aperevalov | ltomasbo: it's possible to have multiple calls of request_vif from request_additional_vifs | 10:47 |
ltomasbo | aperevalov, still... | 10:48 |
aperevalov | ltomasbo: we have to know in SriovVIFDriver how many times request_vif was already called. | 10:48 |
ltomasbo | aperevalov, but my question/doubt is in another part | 10:48 |
ltomasbo | without considering concurrent calls, I don't get the flow either | 10:49 |
ltomasbo | the annotation on the pod means how many vfs are requested, right? | 10:49 |
aperevalov | right | 10:49 |
ltomasbo | then, get_sriov_num_vf is getting how many vfs are requested, not how many are available | 10:50 |
aperevalov | it's a current CRD issue that resources/request/intel.com/sriov is specified per container, not per pod. It doesn't matter requested or available - it's a number we can't exceed in the CNI plugin. | 10:50 |
celebdor | ltomasbo: and it pisses me off that some letters are not showing on their pastes | 10:51 |
celebdor | so I can't even read it well | 10:51 |
aperevalov | and to check it we are using reduce_... function | 10:51 |
ltomasbo | aperevalov, how does it not matter whether it's requested or available? | 10:52 |
ltomasbo | of course we cannot exceed the number of available ones | 10:52 |
dmellado | damn | 10:52 |
ltomasbo | my point is that you are not checking how many are available | 10:52 |
dmellado | spam again | 10:52 |
ltomasbo | you are just checking how many are requested, so you don't know if you have enough available or not | 10:53 |
dmellado | didn't the infra guys turn on some measure so that only registered users can join | 10:53 |
dmellado | or they'd be thrown to #openstack-unregistered | 10:53 |
ltomasbo | (though I'm probably missing something big here) | 10:53 |
dmellado | ? | 10:53 |
aperevalov | ltomasbo: SriovDP already passed the request, so the number requested is less than the number of available VFs on the host. | 10:53 |
aperevalov | less than or equal | 10:54 |
ltomasbo | perhaps that is what I'm missing: the SriovDP interactions... | 10:54 |
ltomasbo | in my mind I have the normal kuryr flow | 10:54 |
ltomasbo | you create a pod (in this case with sriov request) | 10:55 |
ltomasbo | then vif-handler calls the specific driver to get the proper ports for them | 10:55 |
ltomasbo | you are telling me that the pod annotation is modified (in between) by the SriovDP? | 10:55 |
*** andjjj232 has joined #openstack-kuryr | 10:59 | |
andjjj232 | Allah is doing | 10:59 |
aperevalov | ltomasbo: yes, only the latest intel.com/sriov per container remains. But I think kubernetes skips the other unnecessary definitions. The number in the sriov field is not modified. | 10:59 |
*** optiz0r28 has joined #openstack-kuryr | 10:59 | |
optiz0r28 | Allah is doing | 10:59 |
*** [interleaved spam floods from optiz0r28 and andjjj232 trimmed] | 10:59 | |
*** andjjj232 has quit IRC | 10:59 | |
aperevalov | ltomasbo: SriovDP is like BaseHostFilter in nova.scheduler, it just passes or rejects pod creation. So if the requested number is greater than what's available, the pod will stay in PENDING state, and VIFHandler will skip it. | 11:03 |
*** optiz0r28 has quit IRC | 11:04 | |
ltomasbo | ok, that I got | 11:04 |
ltomasbo | then, I have the same concern, once it is scheduled, it means it has enough resources | 11:04 |
ltomasbo | so, request_vifs is getting, at get_sriov_num_vf, the number of requested vfs by the pod, not the available ones | 11:05 |
ltomasbo | the pod does not have information about the available ones, the kubelet has | 11:05 |
aperevalov | ltomasbo: available for current pod creation, yes, it's not available in general. No problem: we can change the function doc as you suggested: returns the number of requested vfs for current pod creation | 11:07 |
ltomasbo | I'm leaving comments on the code. At least for me it was misleading... | 11:07 |
ltomasbo | thanks! | 11:07 |
ltomasbo | I can't get my head around the reduce_pod_sriov_amount function... | 11:08 |
ltomasbo | perhaps it needs some rewording too | 11:08 |
ltomasbo | I think I know why you added that there, but not sure that will fix the problem for concurrent calls | 11:09 |
aperevalov | imagine we have more NADs in our pod configuration than intel.com/sriov. SriovDP will pass such a pod configuration. | 11:09 |
ltomasbo | I understand, if you have different network request for that pod | 11:10 |
ltomasbo | there may be a case where there are more than the actual number, right? | 11:10 |
ltomasbo | same as if there is only one NAD, but there is another concurrent call for another pod, right? | 11:11 |
ltomasbo | as k8s is not considering the resources on those, right? | 11:11 |
aperevalov | right, according to the CNI spec, pod creation is done one by one, concurrency excluded. But we can still face the issue where the cni-daemon dies during request_vif. | 11:11 |
ltomasbo | aperevalov, I have a question: http://logs.openstack.org/25/594125/12/check/build-openstack-sphinx-docs/2391934/html/installation/sriov.html | 11:12 |
ltomasbo | in that doc, in the example | 11:12 |
ltomasbo | there is sriov-net1, sriov-net2 | 11:12 |
ltomasbo | in the annotations, which means the pod will be on the default network, plus 1 interface on sriov-net1 and another one on sriov-net2, right? | 11:13 |
ltomasbo | and that is why it needs to set resources: requests: 2 | 11:13 |
ltomasbo | right? | 11:13 |
aperevalov | yes, 3 network interfaces inside the pod containers, and yes, probably 3 networks, if sriov-net1 and sriov-net2 have different subnets. | 11:14 |
ltomasbo | ok, but then, when k8s schedules that pod onto a node | 11:14 |
ltomasbo | it knows it has enough resources for it | 11:14 |
aperevalov | ltomasbo: yes we need to ask SriovDP for 2 VF here | 11:15 |
ltomasbo | why do you need to do the reduce_pod_sriov? | 11:15 |
ltomasbo | you are supposed to have enough resources... otherwise it's a race on k8s | 11:15 |
ltomasbo | s/race/problem | 11:15 |
aperevalov | because we can have here k8s.v1.cni.cncf.io/networks: sriov-net1,sriov-net2, sriov-net2, but still asking for 2 SRIOV. | 11:15 |
aperevalov | kubernetes passes it. | 11:16 |
ltomasbo | yes, but then the pod.yaml is what is wrong, right? | 11:16 |
aperevalov | from the kubernetes + SriovDP point of view - no, it's a correct pod configuration. | 11:17 |
ltomasbo | in that case, what your code will do is silently skip sriov-net2 (the second one) | 11:17 |
aperevalov | yes, network will be skipped. | 11:17 |
ltomasbo | and the resulting pod will only have 3 interfaces (1 for default, 1 for sriov-net1 and 1 for sriov-net2) | 11:17 |
aperevalov | yes | 11:18 |
ltomasbo | ok | 11:18 |
ltomasbo | \o/ I think I finally got it! | 11:18 |
ltomasbo | thanks! | 11:18 |
ltomasbo | and this is meant to go through NAD all the time, right? | 11:19 |
aperevalov | go through NAD, do you mean through multi_vif, when it's an additional network? | 11:20 |
ltomasbo | yep | 11:20 |
aperevalov | kuryr-kubernetes might also be configured to use SriovVIFDriver as the main driver ) | 11:21 |
ltomasbo | aperevalov, and the intended behavior from the kubernetes side is to not worry about the annotations and just attach as many networks as you request? | 11:21 |
ltomasbo | ok, I guess for main driver there is no problem, as there is just 1 network, so it should match | 11:22 |
ltomasbo | I asked because another option is to extend the request_additional_vifs function to consider the amount of networks and the amount of requested VFs | 11:23 |
ltomasbo | and fail if there is a mismatch | 11:23 |
aperevalov | ltomasbo: yes, we try to create as many ports as possible (available in the request for all containers in the pod) | 11:23 |
ltomasbo | celebdor, what do you think about that? | 11:23 |
ltomasbo | celebdor, if we request (on the annotation) more VFs than in the pod spec, should we fail soon and not allocate any, or should we just allocate as many as possible and that's it? | 11:24 |
aperevalov | ltomasbo: I think it's necessary to extend the pod vif driver interface, to add a method there which will return the amount for the pod configuration. | 11:25 |
aperevalov | and multi_vif will call that method before the request_vif series. | 11:25 |
ltomasbo | aperevalov, another option is to check the annotations on your sriov driver too | 11:26 |
ltomasbo | aperevalov, if there is a need to fail when there are not enough VFs | 11:27 |
aperevalov | now only SriovVIFDriver has such behavior. | 11:27 |
ltomasbo | aperevalov, you can add it to your sriov.py driver | 11:27 |
ltomasbo | inside request_vifs | 11:27 |
aperevalov | ltomasbo: Do you suggest making a sanity check at the first request_vif? | 11:27 |
ltomasbo | you could check if get_sriov_num_vf > network_number_on_annotations | 11:28 |
ltomasbo | aperevalov, only if it is better to fail the creation than to do it partially | 11:29 |
aperevalov | Why do we need this check? It can be useful only when we want to stop pod creation if the pod configuration was a little bit wrong? | 11:29 |
ltomasbo | that is my question, what option is preferred when the number does not match: | 11:30 |
ltomasbo | - fail and not allocate any vf/port for the pod | 11:30 |
ltomasbo | - create as many vf/port for the pod as possible, and leave the rest uncreated | 11:30 |
ltomasbo | celebdor, ^^ | 11:31 |
aperevalov | I think it's better to create the ports, and report the unused networks. | 11:31 |
ltomasbo | ok | 11:31 |
ltomasbo | then I'll add a comment on your patch set about that warning | 11:31 |
aperevalov | ok | 11:31 |
ltomasbo | I don't have a strong opinion on which option is best, to be honest | 11:31 |
ltomasbo | but I'm glad I finally got it! xD | 11:31 |
ltomasbo | and on friday! | 11:31 |
ltomasbo | thanks for the patience | 11:32 |
aperevalov | you are welcome ) | 11:32 |
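[Editor's note: a Python sketch of the behavior agreed on above - create ports for as many SR-IOV networks as the VF request covers and warn about the rest; the names mirror the chat, not necessarily the final patch.]

    import logging

    LOG = logging.getLogger(__name__)

    def select_sriov_networks(requested_vfs, nad_networks):
        """Return the NAD networks the pod's intel.com/sriov request covers."""
        if requested_vfs < len(nad_networks):
            LOG.warning("Pod requests %d SR-IOV VF(s) but its annotation "
                        "lists %d networks; skipping: %s",
                        requested_vfs, len(nad_networks),
                        ", ".join(nad_networks[requested_vfs:]))
        # Allocate in annotation order, capped at the requested amount.
        return nad_networks[:requested_vfs]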
*** AJaeger has joined #openstack-kuryr | 11:35 | |
AJaeger | dmellado: need to become ops - and be here for the invite, give me a sec... | 11:35 |
ltomasbo | aperevalov, and now I get why you named them available vfs... | 11:36 |
ltomasbo | ok ok | 11:36 |
*** ChanServ sets mode: +o AJaeger | 11:36 | |
*** Sigyn has joined #openstack-kuryr | 11:36 | |
AJaeger | dmellado: Sigyn joined... | 11:36 |
dmellado | thanks, let's see if it helps | 11:37 |
dmellado | AJaeger++ | 11:37 |
*** ChanServ sets mode: -o AJaeger | 11:38 | |
*** garyloug has quit IRC | 11:40 | |
*** rh-jelabarre has joined #openstack-kuryr | 11:53 | |
*** AJaeger has left #openstack-kuryr | 11:53 | |
*** maysams has joined #openstack-kuryr | 12:00 | |
*** ChanServ sets mode: +rf #openstack-unregistered | 12:35 | |
*** garyloug has joined #openstack-kuryr | 12:44 | |
*** ccamposr has quit IRC | 13:21 | |
*** garyloug_ has joined #openstack-kuryr | 13:52 | |
celebdor | ltomasbo: ping | 13:53 |
*** garyloug has quit IRC | 13:55 | |
openstackgerrit | Alexey Perevalov proposed openstack/kuryr-kubernetes master: Add container_id to connect method of BaseBindingDriver https://review.openstack.org/597008 | 13:59 |
openstackgerrit | Alexey Perevalov proposed openstack/kuryr-kubernetes master: Support DPDK application on bare-metal host https://review.openstack.org/596731 | 13:59 |
openstackgerrit | Alexey Perevalov proposed openstack/kuryr-kubernetes master: Use /var/run instead of /var/run/openvswitch https://review.openstack.org/602621 | 13:59 |
openstackgerrit | Antoni Segura Puimedon proposed openstack/kuryr-tempest-plugin master: Do not rely on ps to check the daemon https://review.openstack.org/602623 | 14:00 |
*** garyloug_ has quit IRC | 14:12 | |
*** garyloug has joined #openstack-kuryr | 14:12 | |
*** maysams has quit IRC | 14:27 | |
dulek | ltomasbo: No idea why that bug happens really… That's a pretty strange one. | 15:11 |
*** garyloug has quit IRC | 16:09 | |
*** gkadam has quit IRC | 16:55 | |
*** maysams has joined #openstack-kuryr | 17:38 | |
openstackgerrit | Merged openstack/kuryr-kubernetes master: cni_ds_init: exec into the main process https://review.openstack.org/602573 | 18:11 |
*** rh-jelabarre has quit IRC | 18:23 | |
*** rh-jelabarre has joined #openstack-kuryr | 18:23 | |
*** maysams has quit IRC | 18:43 | |
*** celebdor has quit IRC | 19:16 | |
*** rh-jelabarre has quit IRC | 19:45 | |
*** rh-jelabarre has joined #openstack-kuryr | 19:45 | |
openstackgerrit | Michał Dulko proposed openstack/kuryr-kubernetes master: Add non-containerized Python 3.6 gate https://review.openstack.org/602150 | 21:27 |
*** maysams has joined #openstack-kuryr | 23:53 |