14:03:00 <dulek> #startmeeting Kuryr
14:03:01 <openstack> Meeting started Mon Dec  3 14:03:00 2018 UTC and is due to finish in 60 minutes.  The chair is dulek. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:03:05 <openstack> The meeting name has been set to 'kuryr'
14:03:40 <dulek> Hello there! dmellado is feeling sick today, so he won't be with us, but let's start.
14:04:23 <ltomasbo> o/
14:04:52 <maysams> Hi!
14:04:57 <dulek> celebdor, yboaron_, aperevalov, maysams?
14:05:06 <aperevalov> o/
14:06:48 <celebdor> o/
14:06:56 <yboaron_> o/
14:07:02 <dulek> I guess we can skip announcements if nobody has anything general?
14:07:09 <dulek> #topic kuryr-kubernetes
14:07:32 <celebdor> dulek: I don't
14:07:48 <dulek> Okay, so maybe I can start with a quick update.
14:07:59 <dulek> CRI-O support patches are up for reviews!
14:08:19 <dulek> Here's the list: https://review.openstack.org/#/q/status:open+project:openstack/kuryr-kubernetes+topic:bp/crio-support
14:08:47 <dulek> Or even better list: https://review.openstack.org/#/q/topic:bp/crio-support
14:09:10 <dulek> This still has no support for Fedora/CentOS and OpenShift, but I guess it's a good step towards support of different container engines.
14:09:24 <dulek> Reviews welcome!
14:10:00 <dulek> I've also fixed a minor bug raised by SonicSong, regarding kuryr-daemon and hostnames with capital letters.
14:10:00 <yboaron_> dulek, Q: what container engines do we plan/want to support in addition to docker?
14:11:01 <dulek> yboaron_: It's CRI-O (+ podman as tool used in DevStack) at the moment. I don't really have resources to test any other.
14:12:00 <yboaron_> dulek, OK, cool!
14:12:12 <dulek> Regarding kuryr-daemon bug - here's the patch: https://review.openstack.org/#/c/621188/
14:12:36 <dulek> If you ever see that kuryr-daemon doesn't notice pods, it might be this issue.
14:13:19 <dulek> I think that's it from me, this week I'll probably work on CRI-O on Fedora/CentOS and integration with OpenShift.
14:15:24 <aperevalov> I have a question regarding nested DPDK support. https://review.openstack.org/#/c/559363/ Is anybody in charge of rebasing it? Gary is not here, but if you don't have objections I can rebase it; of course I'll try to contact him.
14:15:29 <celebdor> dulek: I failed your https://review.openstack.org/#/c/620045/5 on a small thing
14:15:53 <celebdor> that you forgot quotes (which also demands changing [*] into [@])
14:16:03 <dulek> aperevalov: Seems like garylong is on #openstack-kuryr?
14:16:44 <aperevalov> dulek: oh, ok. Lets talk with him there )
14:16:46 <dulek> celebdor: Ha, that's funny, I've tested `echo ${command[*]}` and it looked good.
14:16:56 <dulek> celebdor: I'll take a look again, thanks.
14:17:16 <celebdor> dulek: that will work too, but shellcheck linting will complain about not quoting your vars
14:17:35 <aperevalov> Also I'm preparing a tempest test for SR-IOV; I found it has common parts with the npwg test (NAD creation), so I'm trying to generalize it.
14:18:11 <celebdor> NAD?
14:18:14 <dulek> celebdor: linter in my IDE definitely ignores that, but whatever. ;)
14:18:16 <celebdor> networkattachmentdefinition
14:18:18 <celebdor> of course
14:18:22 <celebdor> dulek: :-)
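[Editor's note: the quoting issue celebdor raised can be seen directly in bash. This is a minimal sketch; `command` here is a hypothetical array with an element containing a space, not the actual variable from the patch under review.]

```shell
#!/usr/bin/env bash
# Hypothetical array standing in for the variable from the patch;
# one element contains a space to show the difference.
command=(kubectl get pods "my pod")

# Unquoted ${command[*]}: the element with a space gets word-split,
# so printf receives 5 arguments. shellcheck warns about this.
printf '%s\n' ${command[*]} | wc -l    # 5

# Quoted "${command[@]}": each element stays one argument.
printf '%s\n' "${command[@]}" | wc -l  # 4
```

Both forms happen to look identical under a plain `echo`, which is why the unquoted version "looked good" in a quick test.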
14:18:31 <celebdor> aperevalov: sounds good
14:19:18 <celebdor> dulek: https://review.openstack.org/#/c/621188/1 should be backported, did you backport?
14:19:45 <aperevalov> Also I'm a little bit stuck with credentials for the tempest test to connect to devstack.
14:19:46 <dulek> celebdor: Was waiting for master first.
14:20:10 <dulek> celebdor: But if you insist - here you go: https://review.openstack.org/#/c/621588/
14:20:45 <dulek> aperevalov: Why is that? Tempest should give you a client that can connect.
14:21:18 <ltomasbo> dulek, wasn't there another bug forcing it to get it from the socket all the time instead of the env var?
14:21:34 <celebdor> aperevalov: did you enable our tempest plugin devstack in your local.conf?
14:22:12 <ltomasbo> dulek, ahh, only on kuryr-controller for leader election... nevermind...
14:22:22 <aperevalov> yes, I installed it with local.conf which I got from local.conf.sample.
14:23:15 <dulek> aperevalov: Okay, so to use `openstack` commands just do `source /opt/stack/devstack/openrc admin admin`.
14:23:51 <dulek> aperevalov: And in Tempest I don't think you need anything other than the predefined Tempest methods.
14:24:34 <dulek> aperevalov: Such methods: https://github.com/openstack/kuryr-tempest-plugin/blob/0db1f300850094127815ff9597949270a2814719/kuryr_tempest_plugin/tests/scenario/base.py#L567-L569
14:25:12 <dulek> aperevalov: And to connect to K8s API just use the clients defined here: https://github.com/openstack/kuryr-tempest-plugin/blob/ffbd59d79213fc66d91e0fb17fe76c7841d84026/kuryr_tempest_plugin/tests/base.py#L32-L36
14:25:22 <dulek> aperevalov: I'm not sure what else is needed?
14:25:22 <aperevalov> there wasn't a problem with the admin account ) Tempest was able to create new users (it does that in the dynamic credentials provider), but the newly created users don't have enough rights.
14:25:50 <dulek> aperevalov: Why do you want a new user then?
14:26:10 <dulek> aperevalov: Let's take a step back and try to understand what test you're trying to write.
14:27:02 <aperevalov> lets discuss it later after meeting ;)
14:28:22 <dulek> Okay!
14:28:27 <dulek> Anyone else have a report?
14:29:06 <maysams> I do!
14:29:36 <maysams> I'm working on adding support for match expressions in Network Policies
14:29:53 <maysams> https://review.openstack.org/#/c/620572/
14:30:09 <maysams> would really appreciate any more reviews
14:31:00 <maysams> or any additional ideas on how to approach it
14:31:56 <maysams> also, I'm aiming to add tempest tests for the readiness quota check
14:32:02 <celebdor> cool
14:32:13 <dulek> maysams: How do we plan to test that in Tempest?
14:32:21 * dulek has no immediate idea of a useful test.
14:33:22 <maysams> dulek: Fetch the quota assigned to SGs
14:33:30 <maysams> create more than allowed
14:33:41 <maysams> fetch the status of kuryr-controller
14:34:06 <dulek> maysams: Hm… I wonder if readiness probes are run periodically or only on pod start?
14:34:19 <maysams> periodically
14:34:38 <maysams> dulek: from what I've seen
14:34:38 <ltomasbo> dulek, maysams: actually, one of the frequent failures on the gates is the FIPs quota...
14:34:59 <ltomasbo> dulek, maysams: not sure if that quota checking was added though
14:35:26 <dulek> maysams: Okay, then that makes perfect sense!
14:37:45 <yboaron_> maysams, how will the user be notified that the reason for the Kuryr-controller failure was OpenStack quota?
14:38:09 <maysams> yboaron_: It will be marked as not ready
14:38:10 <maysams> and
14:38:15 <yboaron_> maysams, and how restarting kuryr-controller will solve that?
14:38:17 <maysams> the user can look into the logs
14:38:35 <yboaron_> maysams, we should expect a loop of restarts, right?
14:39:09 <maysams> yboaron_: No. We should wait for the user to cleanup resources
14:39:29 <maysams> then the controller will be marked as ready again
14:39:41 <maysams> and will be able to receive traffic
14:41:02 <dulek> yboaron_: That's the idea, readiness probe doesn't trigger restarts.
14:41:14 <dulek> yboaron_: That's why we're adding those checks there.
14:41:57 <yboaron_> maysams, dulek - OK! Thanks for the clarification - I missed this piece of information
14:42:12 <maysams> yboaron_: np! :-)
14:42:50 <dulek> Also one more clarification - failing the readiness probe doesn't stop kuryr-controller from handling requests.
14:43:21 <dulek> As there's really no way to do that through K8s mechanisms (we could just stop watcher internally, but we don't do that).
14:43:57 <dulek> But the Kuryr pod being not ready is a bit better indication than we had before.
14:44:21 <dulek> And there's no way of propagating the exact failure message to the user at the `kubectl describe` level
14:44:30 <dulek> At least when I've checked, maybe missed something.
14:44:34 <yboaron_> dulek, what was the previous solution? Trigger a pod restart?
14:45:42 <maysams> dulek: I've checked that as well
14:46:17 <yboaron_> The previous solution resulted in a controller restart loop, right?
14:46:26 <maysams> dulek: from what I saw k8s does not provide this (logging at the 'describe' level)
14:46:43 <maysams> yboaron_: yes
14:47:05 <yboaron_> maysams, 10x
14:47:14 <dulek> yboaron_: It was triggered because the pod had a high number of failures, IIRC. And that was probably kuryr-daemon restarts.
14:47:16 <dulek> Not controller.
14:47:26 <dulek> But I'm not sure without digging into the code more. :P
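[Editor's note: the readiness behaviour discussed above — the controller reports not-ready when quota is exhausted, flips back to ready after the user cleans up resources, and is never restarted by the probe — can be sketched as a toy shell function. The quota value and names are illustrative only, not the actual kuryr-controller health-check code.]

```shell
#!/usr/bin/env bash
# Toy model of the quota readiness check: report "ready" only while
# security-group usage is below the (hypothetical) Neutron quota.
SG_QUOTA=10

readiness() {
    local sg_used=$1
    if [ "$sg_used" -ge "$SG_QUOTA" ]; then
        echo "not-ready"   # probe fails: pod marked NotReady, no restart
    else
        echo "ready"       # probe passes: pod marked Ready again
    fi
}

readiness 3    # ready
readiness 12   # not-ready (until the user cleans up resources)
```

Since it is a readiness (not liveness) probe, a failing check only removes the pod from service readiness, which is why no restart loop occurs.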
14:48:56 <dulek> Okay, anything else folks?
14:52:03 <dulek> #endmeeting