09:00:41 <flwang1> #startmeeting magnum
09:00:42 <openstack> Meeting started Wed Oct 16 09:00:41 2019 UTC and is due to finish in 60 minutes.  The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:43 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:43 <strigazi> brtknr: we can build this container in the ci
09:00:45 <openstack> The meeting name has been set to 'magnum'
09:00:49 <flwang1> #topic roll call
09:00:51 <strigazi> o/
09:00:53 <flwang1> o/
09:00:57 <brtknr> \o/
09:01:34 <flwang1> i think just us?
09:01:40 <flwang1> let's go through the topics
09:01:48 <flwang1> #topic fcos driver
09:02:07 <flwang1> strigazi: the mic is in your hand now
09:02:35 * brtknr needs to work on that plugin to notify active users
09:03:31 <jakeyip> o/  sorry
09:03:33 <strigazi> yeap, I think everything is ready
09:03:54 <flwang1> strigazi: see my latest comment?
09:04:03 <flwang1> we may need to mount /dev to get cinder work
09:04:06 <strigazi> nothing else to do. I ran e2e for calico and flannel
09:04:43 <strigazi> flwang1: does cinder still work?
09:05:21 <strigazi> flwang1: anyway, I'll try with an old k8s version
09:06:00 <flwang1> strigazi: TBH, it was tested by one of our customers, i haven't confirmed it myself
09:06:13 <flwang1> he used k8s v1.15.4
09:06:18 <strigazi> ok
09:06:40 <brtknr> strigazi: hmm i still can see pod logs
09:06:59 <strigazi> brtknr: context?
09:07:17 <brtknr> for coreos containers
09:07:31 <flwang1> brtknr: can or can't?
09:08:02 <brtknr> "can't" sorry
09:08:02 <strigazi> brtknr: I don't understand. which command doesn't work?
09:08:12 <brtknr> kubectl logs coredns-7584bf494f-vhp4d -n kube-system
09:08:31 <brtknr> eventually times out
09:08:33 <brtknr> Error from server: Get https://[fd13:6667:48e0:0:f816:3eff:fe55:8372]:10250/containerLogs/kube-system/coredns-7584bf494f-vhp4d/coredns: dial tcp [fd13:6667:48e0:0:f816:3eff:fe55:8372]:10250: i/o timeout
09:08:39 <strigazi> brtknr: ipv6
09:09:18 <brtknr> do you disable it in config?
09:09:39 <strigazi> brtknr: I don't have it
09:10:10 <strigazi> ipv6 will never work with flannel anyway
09:10:18 <brtknr> devstack automatically creates it though
09:10:22 <strigazi> and dual stack is added as alpha in 1.16
09:10:56 <strigazi> I don't think we should waste time with ipv6 at this point
09:11:27 <strigazi> I have IP_VERSION=4 in my local.conf
09:11:43 <strigazi> brtknr: maybe it works with calico
09:11:58 <strigazi> but let's focus on ipv4?
09:11:59 <ttsiouts> o/
09:12:21 <brtknr> strigazi: im happy with just ipv4
09:12:27 <brtknr> i wonder if this is a bug
09:12:48 <brtknr> when a network has both ipv4 and ipv6 subnet, magnum seems to attach both to the instance
09:12:59 <brtknr> rather than only the one specified in the cluster template
09:14:30 <brtknr> strigazi: no it doesnt work with calico either
09:15:46 <strigazi> brtknr: what do you want to do? what is the use case?
09:16:14 <brtknr> i was just going by the default devstack behaviour when you dont specify IP_VERSION=4
09:16:42 <brtknr> when i specify in the cluster template that i want private-subnet and not ipv6-private-subnet, why does it use both?
09:17:10 <brtknr> doesnt seem like an expected behaviour
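For context, the subnet pinning brtknr is describing is done on the cluster template. An illustrative sketch with the standard magnum CLI (the network/subnet names are just devstack defaults, and the image/flavor names are assumptions; this needs a running cloud, so it is not a verbatim recipe from the meeting):

```shell
# Pin the cluster to one network/subnet; the expectation under discussion
# is that only this subnet's addresses get attached to the instances,
# not the ipv6-private-subnet that also lives on the same network.
openstack coe cluster template create k8s-fcos \
  --image fedora-coreos-30 \
  --coe kubernetes \
  --external-network public \
  --fixed-network private \
  --fixed-subnet private-subnet \
  --master-flavor m1.small --flavor m1.small
```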
09:18:46 <strigazi> brtknr: flwang1 I don't know what to say about ipv6
09:19:16 <flwang1> strigazi: personally, i don't think ipv6 is a high priority at this moment
09:19:34 <strigazi> I can investigate although it is a waste of time
09:19:34 <flwang1> we can take it as a TODO and revisit it in U release
09:19:55 <flwang1> and i don't think k8s supports ipv6
09:20:37 <brtknr> its not ipv6 specifically i am concerned about, but i noticed that instances get both ipv4 and ipv6 interfaces... instead of just the one asked for
09:21:19 <flwang1> then i would say that's another separate issue
09:21:23 <brtknr> i can ask in the #openstack-neutron channel, dont worry
09:21:27 <flwang1> not related to the fcos driver, right?
09:21:51 <brtknr> no, i dont think so, but its only been a problem with the fcos driver
09:22:10 <flwang1> then we can take it as a known issue and keep an eye
09:22:21 <brtknr> my instances were being given ipv6 addresses all the time
09:23:47 <brtknr> lets move to the next topic
09:24:20 <flwang1> i have raised my comments in the fcos patch
09:24:36 <strigazi> give me a moment to check
09:24:40 <flwang1> 1. i have seen timeout of the heat container agent service, which has been addressed in the latest ps
09:25:04 <flwang1> 2. i have seen pod restart
09:25:19 <flwang1> 3. the cinder support needs the /dev mount
09:25:37 <flwang1> strigazi: could you pls try the #3? i will test it as well
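To illustrate item 3: in the fcos driver the kubelet runs in a container, so the host's /dev must be bind-mounted for the in-tree cinder plugin to see attached block devices. A hedged sketch of the kind of podman invocation involved (the exact flag set, mounts, and image name here are assumptions, not the driver's literal unit file):

```shell
# Sketch: a containerized kubelet needs the host /dev so that volumes
# attached by cinder (e.g. /dev/vdb) are visible inside the container.
podman run --name kubelet --privileged --net host --pid host \
  -v /dev:/dev \
  -v /etc/kubernetes:/etc/kubernetes:ro \
  -v /var/lib/kubelet:/var/lib/kubelet:shared \
  docker.io/openstackmagnum/kubernetes-kubelet:v1.15.4 \
  kubelet --config=/etc/kubernetes/kubelet-config.yaml
```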
09:26:31 <strigazi> 2. i have seen pod restart, this probably happens due to slow environment
09:26:56 <strigazi> for 3., which version do you think works?
09:27:24 <strigazi> flwang1: if we don't add csi-cinder your clients won't be able to use cinder and new k8s versions.
09:27:59 <flwang1> strigazi: i understand that, we need to support csi in U release and i would say ASAP :(
09:28:59 <strigazi> flwang1: to unblock fcos, which k8s version do you want to work with cinder?
09:29:16 <flwang1> AFAIK, the built-in cinder support code will be totally removed in v1.17.x
09:29:26 <strigazi> ok
09:29:34 <strigazi> so it exists in 1.16?
09:29:58 <flwang1> yes
09:30:26 <flwang1> #action /me will check when built-in cinder code will be removed from k8s
09:31:29 <strigazi> anything else on fcos?
09:31:42 <strigazi> we still don't have a list of what needs to be working
09:31:52 <strigazi> apart from conformance
09:32:09 <flwang1> cinder support
09:32:54 <strigazi> it needs to be written.
09:33:08 <strigazi> maybe a functional test?
09:33:21 <strigazi> anyway, we wasted too much time with fcos
09:33:23 <brtknr> strigazi: i am rerunning e2e test now with ipv6 turned off
09:33:38 <strigazi> brtknr: I'm running them too now
09:33:41 <flwang1> strigazi: if we can have a functional test for fcos, it would be great
09:33:51 <flwang1> but i don't mind adding it later
09:34:19 <flwang1> since i know you have already put a lot of effort into this and i do really appreciate that
09:34:22 <strigazi> flwang1: I need everyone to provide requirements though
09:34:45 <strigazi> let's move on
09:34:59 <flwang1> #topic ng
09:35:07 <flwang1> ttsiouts: brtknr: ?
09:35:32 <brtknr> I tested the ng upgrades, and all looking good now
09:35:38 <brtknr> I'm happy to merge
09:35:43 <strigazi> \o/
09:36:05 <ttsiouts> flwang1: apart from the ng-10/13 series we need to take the bugs I added in the agenda
09:37:00 <ttsiouts> brtknr: thanks again on your input!
09:38:26 <ttsiouts> do you want me to tell you more about these?
09:38:31 <brtknr> ttsiouts: thanks for your hard work, i'm just playing with the toys you build :)
09:38:40 <ttsiouts> brtknr: :)
09:40:18 <strigazi> flwang1: brtknr any more comments on NGs?
09:40:25 <flwang1> cool, thank you guys for the good work, i will help review as well
09:40:26 <brtknr> i tested the api_address patch which seems to do the job
09:40:28 <strigazi> we take them for train?
09:40:41 <flwang1> strigazi: i'm ok with that
09:40:45 <brtknr> i am not sure how to test the Failed ng state
09:41:07 <brtknr> but Ive checked the login and it seems sensible
09:41:14 <brtknr> but Ive checked the logic and it seems sensible
09:41:29 <brtknr> Havent looked at the Docker volume size yet
09:41:40 <brtknr> but also seems reasonable
09:42:25 <strigazi> cool
09:42:37 <ttsiouts> brtknr: to test the failed state you have to force the default ngs to go to UPDATE_FAILED. Quota can help you here
09:43:24 <ttsiouts> brtknr: without the patch, the cluster goes to UPDATE_COMPLETE. with the patch the cluster reports UPDATE_FAILED as it should
09:43:47 <brtknr> ttsiouts: ok cool I'll try that thanks
09:44:10 <ttsiouts> brtknr: an easier way to force the UPDATE_FAILED would be to upgrade using a CT that does not have the kube_tag label
09:44:26 <openstackgerrit> Spyros Trigazis proposed openstack/magnum master: Support Fedora CoreOS 30  https://review.opendev.org/678458
09:45:35 <ttsiouts> brtknr: scratch the upgrade test. conductor will make the cluster go to UPDATE_FAILED directly
09:45:50 <ttsiouts> brtknr: quota is the way to go..
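One plausible reading of the quota recipe ttsiouts suggests: shrink the project's compute quota so a resize of the default nodegroup cannot get instances, which should push the nodegroup (and, with the patch, the cluster) to UPDATE_FAILED. A hedged sketch (cluster name, project variable, and counts are assumptions; needs a running cloud):

```shell
# Sketch: make the resize exceed the instance quota so it fails.
openstack quota set --instances 1 "$PROJECT_ID"
openstack coe cluster resize mycluster 5
# Without the patch the cluster still reports UPDATE_COMPLETE;
# with it, the failed default nodegroup surfaces as UPDATE_FAILED.
openstack coe cluster show mycluster -c status
```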
09:46:29 <brtknr> #action /me to test Failed ng state
09:46:57 <ttsiouts> brtknr: thanks again!
09:47:50 <strigazi> brtknr: flwang1 are you happy with fcos now so that we can take it with NGs?
09:48:21 <flwang1> strigazi: as long as you added the /dev mount, i'm happy with the current fcos driver status
09:48:29 <flwang1> we can fix small issues later
09:48:57 <strigazi> flwang1: the /dev mount, tabs vs spaces, and the order in the unit, all done
09:49:10 <flwang1> strigazi: fantastic
09:49:16 <flwang1> thank you, my friend
09:49:17 <strigazi> flwang1: cinder works with 1.14.7, just tested it
09:49:22 <brtknr> yep /dev is there
09:49:35 <strigazi> both dynamic provisioning and static
09:50:11 <flwang1> i'm going to +2 now ;)
09:50:50 <flwang1> next topic?
09:51:02 <strigazi> yes
09:51:04 <flwang1> #topic ignition issue in heat
09:51:27 <flwang1> i have found the root cause of why the local-data doesn't work
09:51:58 <flwang1> i think there is a bug in os-apply-config, which will overwrite the deployments when doing config merging
09:52:09 <flwang1> https://review.opendev.org/688317
09:52:18 <flwang1> here is the patch i proposed in os-apply-config
09:52:23 <strigazi> cool
09:52:45 <flwang1> but until it's accepted, we may have to push heat team to accept this https://review.opendev.org/688322
09:53:07 <strigazi> +1
09:53:08 <flwang1> i mean before the os-apply-config patch is accepted
09:53:27 <flwang1> now i'm pushing heat team to review it
09:53:53 <flwang1> i will let  you guys know when there is progress
09:54:13 <flwang1> next topic? we only have 6 mins
09:54:20 <strigazi> yes
09:54:24 <flwang1> #topic the autoscaler image
09:54:24 <strigazi> build autoscaler in the ci
09:54:31 <flwang1> i'm ok to have it in our ci
09:54:47 <strigazi> I +2 already
09:55:06 <flwang1> approved
09:55:20 <flwang1> #topic   heat-container-agent:train-stable?
09:55:48 <flwang1> strigazi: can you please tag a train-stable version for heat-container-agent?
09:56:01 <strigazi> +1
09:56:07 <strigazi> I'll do it
09:56:14 <flwang1> thank you
09:56:52 <flwang1> i will send an email to openstack community to announce that we will have fcos support in Train
09:57:22 <flwang1> that's a tremendous achievement by the team
09:57:39 <flwang1> we should be proud of it
09:58:28 <flwang1> anything else?
09:58:28 <strigazi> you can mention NGs too, which is much bigger
09:58:39 <flwang1> sure, i will
09:58:54 <flwang1> those are the 2 things on my PTL nomination email
09:59:02 <flwang1> and i'm happy to see we did it
09:59:05 <strigazi> :)
09:59:27 <flwang1> i really appreciate the great work you guys have done
09:59:42 <flwang1> THANK YOU
10:00:10 <flwang1> i have to go, after daylight saving it's now 11:00PM in NZ
10:00:22 <strigazi> sleep well
10:00:24 <strigazi> cheers
10:00:33 <brtknr> good night flwang1  :)
10:00:33 <flwang1> thank you for joining
10:00:37 <flwang1> #endmeeting