09:00:41 #startmeeting magnum
09:00:42 Meeting started Wed Oct 16 09:00:41 2019 UTC and is due to finish in 60 minutes. The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:43 brtknr: we can build this container in the ci
09:00:45 The meeting name has been set to 'magnum'
09:00:49 #topic roll call
09:00:51 o/
09:00:53 o/
09:00:57 \o/
09:01:34 i think just us?
09:01:40 let's go through the topics
09:01:48 #topic fcos driver
09:02:07 strigazi: the mic is in your hands now
09:02:35 * brtknr needs to work on that plugin to notify active users
09:03:31 o/ sorry
09:03:33 yeap, I think everything is ready
09:03:54 strigazi: did you see my latest comment?
09:04:03 we may need to mount /dev to get cinder working
09:04:06 nothing else to do. I ran e2e for calico and flannel
09:04:43 flwang1: does cinder still work?
09:05:21 flwang1: anyway, I'll try with an old k8s version
09:06:00 strigazi: TBH, it was tested by one of our consumers, i haven't confirmed it myself
09:06:13 he used k8s v1.15.4
09:06:18 ok
09:06:40 strigazi: hmm i still can see pod logs
09:06:59 brtknr: context?
09:07:17 for coreos containers
09:07:31 brtknr: can or can't?
09:08:02 "can't" sorry
09:08:02 brtknr: I don't understand. which command doesn't work?
09:08:12 kubectl logs coredns-7584bf494f-vhp4d -n kube-system
09:08:31 eventually times out
09:08:33 Error from server: Get https://[fd13:6667:48e0:0:f816:3eff:fe55:8372]:10250/containerLogs/kube-system/coredns-7584bf494f-vhp4d/coredns: dial tcp [fd13:6667:48e0:0:f816:3eff:fe55:8372]:10250: i/o timeout
09:08:39 brtknr: ipv6
09:09:18 do you disable it in config?
09:09:39 brtknr: I don't have it
09:10:10 ipv6 will never work with flannel anyway
09:10:18 devstack automatically creates it though
09:10:22 and dual stack was only added as alpha in 1.16
09:10:56 I don't think we should waste time with ipv6 at this point
09:11:27 I have IP_VERSION=4 in my local.conf
09:11:43 brtknr: maybe it works with calico
09:11:58 but let's focus on ipv4?
09:11:59 o/
09:12:21 strigazi: i'm happy with just ipv4
09:12:27 i wonder if this is a bug
09:12:48 when a network has both an ipv4 and an ipv6 subnet, magnum seems to attach both to the instance
09:12:59 rather than only the one specified in the cluster template
09:14:30 strigazi: no it doesn't work with calico either
09:15:46 brtknr: what do you want to do? what is the use case?
09:16:14 i was just going by the default devstack behaviour when you don't specify IP_VERSION=4
09:16:42 when i specify in the cluster template that i want private-subnet and not ipv6-private-subnet, why does it use both?
09:17:10 doesn't seem like expected behaviour
09:18:46 brtknr: flwang1 I don't know what to say about ipv6
09:19:16 strigazi: personally, i don't think ipv6 is a high priority at this moment
09:19:34 I can investigate, although it is a waste of time
09:19:34 we can take it as a TODO and revisit it in the U release
09:19:55 and i don't think k8s supports ipv6
09:20:37 it's not ipv6 specifically i'm concerned about, but i noticed that instances that get created get both ipv4 and ipv6 interfaces... instead of just the one asked for
09:21:19 then i would say that's another separate issue
09:21:23 i can ask in the #openstack-neutron channel, don't worry
09:21:27 not related to the fcos driver, right?
09:21:51 no, i don't think so, but it's only been a problem with the fcos driver
09:22:10 then we can take it as a known issue and keep an eye on it
09:22:21 my instances were being given ipv6 addresses all the time
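For context on the exchange above: devstack creates both private-subnet and ipv6-private-subnet by default, which is why the instances keep picking up IPv6 addresses. A minimal sketch of pinning devstack to IPv4 only, based on the IP_VERSION=4 setting quoted in the meeting (the [[local|localrc]] section header is standard devstack convention, not something stated here):

    # local.conf fragment: create only an IPv4 tenant subnet, so clusters
    # don't get an ipv6-private-subnet attached alongside private-subnet
    [[local|localrc]]
    IP_VERSION=4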
09:23:47 let's move to the next topic
09:24:20 i have raised my comments in the fcos patch
09:24:36 give me a moment to check
09:24:40 1. i have seen timeouts of the heat-container-agent service, which have been addressed in the latest ps
09:25:04 2. i have seen pod restarts
09:25:19 3. the cinder support needs the /dev mount
09:25:37 strigazi: could you pls try #3? i will test it as well
09:26:31 1. i have seen pod restarts, this probably happens due to a slow environment
09:26:40 sorry 1.
09:26:42 sorry 2.
09:26:56 for 3., which version do you think works?
09:27:24 flwang1: if we don't add csi-cinder your clients won't be able to use cinder with new k8s versions.
09:27:59 strigazi: i understand that, we need to support csi in the U release and i would say ASAP :(
09:28:59 flwang1: to unblock fcos, which k8s version do you want to work with cinder?
09:29:16 AFAIK, the built-in cinder support code will be totally removed in v1.17.x
09:29:26 ok
09:29:34 so it exists in 1.16?
09:29:58 yes
09:30:26 #action /me will check when built-in cinder code will be removed from k8s
09:31:29 anything else on fcos?
09:31:42 we still don't have a list of what needs to be working
09:31:52 apart from conformance
09:32:09 cinder support
09:32:54 it needs to be written.
09:33:08 maybe a functional test?
09:33:21 anyway, we've wasted too much time on fcos
09:33:23 strigazi: i am rerunning the e2e tests now with ipv6 turned off
09:33:38 brtknr: I'm running them too now
09:33:41 strigazi: if we can have a functional test for fcos, it would be great
09:33:51 but i don't mind adding it later
09:34:19 since i know you have already put a lot of effort into this and i really appreciate that
09:34:22 flwang1: I need everyone to provide requirements though
09:34:45 let's move on
09:34:59 #topic ng
09:35:07 ttsiouts: brtknr: ?
09:35:32 I tested the ng upgrades, and everything is looking good now
09:35:38 I'm happy to merge
09:35:43 \o/
09:36:05 flwang1: apart from the ng-10/13 series we need to take the bugs I added in the agenda
09:37:00 brtknr: thanks again for your input!
09:38:26 do you want me to tell you more about these?
09:38:31 ttsiouts: thanks for your hard work, i'm just playing with the toys you built :)
09:38:40 brtknr: :)
09:40:18 flwang1: brtknr any more comments on NGs?
09:40:25 cool, thank you guys for the good work, i will help review as well
09:40:26 i tested the api_address patch which seems to do the job
09:40:28 do we take them for train?
09:40:41 strigazi: i'm ok with that
09:40:45 i am not sure how to test the failed ng state
09:41:14 but I've checked the logic and it seems sensible
09:41:29 Haven't looked at the Docker volume size yet
09:41:40 but it also seems reasonable
09:42:25 cool
09:42:37 brtknr: to test the failed state you have to force the default ngs to go to UPDATE_FAILED. Quota can help you here
09:43:24 brtknr: without the patch, the cluster goes to UPDATE_COMPLETE. with the patch the cluster reports UPDATE_FAILED as it should
09:43:47 ttsiouts: ok cool I'll try that, thanks
09:44:10 brtknr: an easier way to force the UPDATE_FAILED would be to upgrade using a CT that does not have the kube_tag label
09:44:26 Spyros Trigazis proposed openstack/magnum master: Support Fedora CoreOS 30 https://review.opendev.org/678458
09:45:35 brtknr: scratch the upgrade test. conductor will make the cluster go to UPDATE_FAILED directly
09:45:50 brtknr: quota is the way to go..
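A sketch of the quota approach ttsiouts suggests, with hypothetical project/cluster names and node counts (the exact commands are illustrative, not quoted from the meeting): lower the compute instance quota below what a resize needs, so the default nodegroup's stack update fails and the cluster should land in UPDATE_FAILED.

    # drop the instance quota below the target node count (names are placeholders)
    openstack quota set --instances 1 demo-project
    # then resize the cluster; with the ng patch applied the cluster should
    # report UPDATE_FAILED rather than UPDATE_COMPLETE
    openstack coe cluster resize demo-cluster 5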
09:46:29 #action /me to test the failed ng state
09:46:57 brtknr: thanks again!
09:47:50 brtknr: flwang1 are you happy with fcos now so that we can take it with NGs?
09:48:21 strigazi: as long as you've added the /dev mount, i'm happy with the current fcos driver status
09:48:29 we can fix small issues later
09:48:57 flwang1: /dev, tabs vs spaces, and the order in the unit: all done
09:49:10 strigazi: fantastic
09:49:16 thank you, my friend
09:49:17 flwang1: cinder works with 1.14.7, just tested it
09:49:22 yep, /dev is there
09:49:35 both dynamic provisioning and static
09:50:11 i'm going to +2 now ;)
09:50:50 next topic?
09:51:02 yes
09:51:04 #topic ignition issue in heat
09:51:27 i have found the root cause of why the local-data doesn't work
09:51:58 i think there is a bug in os-apply-config, which overwrites the deployments when merging configs
09:52:09 https://review.opendev.org/688317
09:52:18 here is the patch i proposed to os-apply-config
09:52:23 cool
09:52:45 but until it's accepted, we may have to push the heat team to accept this https://review.opendev.org/688322
09:53:07 +1
09:53:08 i mean before the os-apply-config patch is accepted
09:53:27 now i'm pushing the heat team to review it
09:53:53 i will let you guys know when there is progress
09:54:13 next topic? we only have 6 mins
09:54:20 yes
09:54:24 #topic the autoscaler image
09:54:24 build the autoscaler in the ci
09:54:31 i'm ok to have it in our ci
09:54:45 I +1 already
09:54:47 I +2 already
09:55:06 approved
09:55:20 #topic heat-container-agent:train-stable?
09:55:48 strigazi: can you please tag a train-stable version of heat-container-agent?
09:56:01 +1
09:56:07 I'll do it
09:56:14 thank you
09:56:52 i will send an email to the openstack community to announce that we will have fcos support in Train
09:57:22 that's a tremendous achievement by the team
09:57:39 we should be proud of it
09:58:28 anything else?
09:58:28 you can mention NGs too, which is much bigger
09:58:39 sure, i will
09:58:54 those are the 2 things in my PTL nomination email
09:59:02 and i'm happy to see we did it
09:59:05 :)
09:59:27 i really appreciate the great work you guys have done
09:59:42 THANK YOU
10:00:10 i have to go, after daylight saving it's now 11:00PM in NZ
10:00:22 sleep well
10:00:24 cheers
10:00:33 good night flwang1 :)
10:00:33 thank you for joining
10:00:37 #endmeeting
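As a postscript on the cinder verification discussed above ("both dynamic provisioning and static"): a rough sketch of a dynamic-provisioning check. The resource names are placeholders; kubernetes.io/cinder is the in-tree provisioner whose removal in favour of csi-cinder is what the meeting flags for v1.17.x.

    # create a StorageClass backed by the in-tree cinder provisioner and a PVC;
    # the PVC should reach Bound once cinder creates the backing volume
    kubectl apply -f - <<EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cinder-test
    provisioner: kubernetes.io/cinder
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cinder-test-pvc
    spec:
      storageClassName: cinder-test
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
    EOF
    kubectl get pvc cinder-test-pvc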