21:30:30 <flwang> #startmeeting magnum
21:30:31 <openstack> Meeting started Tue Jun 4 21:30:30 2019 UTC and is due to finish in 60 minutes. The chair is flwang. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:30:33 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:30:35 <openstack> The meeting name has been set to 'magnum'
21:30:45 <flwang> #roll call
21:30:53 <colin-> \o
21:30:54 <flwang> #topic roll call
21:31:05 <strigazi> o/
21:31:08 <flwang> o/
21:31:13 <flwang> jakeyip: around?
21:31:44 <jakeyip> o/
21:32:29 <flwang> ok, i don't have any announcements for this meeting, let's discuss the ideas and features we're working on
21:32:51 <flwang> strigazi: do you want to start with the 'out-of-tree driver'?
21:33:20 <strigazi> flwang: Jim will be around next week, maybe do it then?
21:33:45 <flwang> strigazi: no problem
21:33:57 <flwang> strigazi: anything else from your side?
21:34:29 <strigazi> ipv6 is bad, nothing else :)
21:34:46 <strigazi> I tested the upgrade patch you pushed
21:35:08 <strigazi> Seems to work, we can merge tomorrow (for me) if you want
21:35:17 <strigazi> api is ok
21:35:32 <flwang> strigazi: there is a regression issue after i removed the master_image_id, i will post a new patchset today
21:35:59 <flwang> strigazi: did you leave any comments? or let's discuss it now?
21:36:05 <strigazi> I will run it again on a clean env tomorrow
21:36:28 <strigazi> lgtm, I don't have anything to comment
21:36:51 <flwang> strigazi: the only problem with the current approach is the image upgrade
21:37:26 <colin-> am still on sabbatical in octavia land personally, will be back to magnum soon :)
21:37:34 <flwang> with an image upgrade, heat will trigger a nova rebuild, and then there is no chance to call the k8s api to do a drain, do you have any idea for that?
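[The rolling-replace idea discussed next in the meeting, two ResourceGroups plus a SoftwareDeployment that acts on DELETE, can be sketched in Heat. The sketch below is illustrative only, not from the actual patch; the resource names, the drain script, and the wiring to a hypothetical `kube_minion` server are all assumptions.]

```yaml
resources:
  drain_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        # Cordon the node and evict its pods via the Kubernetes API
        # before Heat/Nova removes the underlying server.
        kubectl drain "$(hostname)" --ignore-daemonsets --delete-local-data

  drain_on_delete:
    type: OS::Heat::SoftwareDeployment
    properties:
      actions: [DELETE]   # run the script only when the server is being deleted
      config: {get_resource: drain_config}
      server: {get_resource: kube_minion}
```

With nodes moved between two ResourceGroups, the deployment above would fire as each old node is deleted, giving the workloads a graceful eviction instead of dying with the rebuilt instance.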
21:37:48 <flwang> colin-: all good ;)
21:38:14 <strigazi> flwang: have two RGs, and move nodes from one to the other
21:38:43 <strigazi> when the upgrade is done, one RG will have count 0 and the other count N
21:39:22 <strigazi> makes sense?
21:39:30 <flwang> strigazi: but you still don't have a chance to call 'drain' to avoid downtime for the applications running on the cluster
21:40:47 <strigazi> with what I described it is possible
21:41:04 <strigazi> we can have a SD that acts on delete
21:41:34 <flwang> if we had 2 RGs, 3 nodes in RG 1 and 0 in RG 2, we create a new node in RG 2 and then remove RG 1, when can we call kubectl drain?
21:41:48 <flwang> strigazi: ah, i can see your point now
21:42:04 <flwang> leverage the ON_DELETE
21:42:35 <flwang> yep, it could work
21:42:45 <flwang> we can have it in the next stage
21:42:50 <strigazi> yes
21:43:18 <flwang> strigazi: are you going to leave comments on my patch so that i can address them today?
21:43:20 <strigazi> let's try to merge later today (for you) or tomorrow, as you want
21:43:37 <strigazi> i think it is ok
21:43:38 <flwang> if we can get it done by this week, that would be great
21:44:08 <flwang> so that we can have enough time for testing in this cycle
21:44:16 <strigazi> +1
21:45:55 <flwang> cool
21:46:38 <flwang> strigazi: can we have a discussion about this one https://review.opendev.org/#/c/621734/ ?
21:47:07 <flwang> boot from volume for k8s nodes
21:47:37 <flwang> and it also supports setting the volume type, which is useful for cloud providers who have high performance storage
21:48:11 <flwang> strigazi: i'd like to understand why you think we have to support both
21:48:32 <strigazi> both what?
21:48:58 <strigazi> docker-volume-size can be replaced by bfv
21:49:20 <strigazi> but supporting both is trivial
21:49:39 <strigazi> personally, I don't like --docker-v-s
21:50:21 <flwang> strigazi: both = boot from volume and boot from image
21:53:42 <strigazi> so three options:
21:53:55 <strigazi> 1. boot from image (and use local ssds)
21:54:08 <strigazi> or whatever the cloud has
21:54:22 <strigazi> 2. boot from image plus volume
21:54:26 <strigazi> 3. bfv
21:54:43 <flwang> 1 is the current one we have
21:54:50 <flwang> 3 is the one i'm proposing
21:55:04 <strigazi> yes
21:55:10 <flwang> 2 needs more work because heat resource properties don't support conditions
21:55:29 <strigazi> it does, I have tested it
21:55:47 <flwang> ok, how did you do that?
21:55:52 <strigazi> one sec
21:57:43 <strigazi> eg http://paste.openstack.org/show/752512/
21:58:55 <flwang> strigazi: that one works, i know
21:59:22 <strigazi> what doesn't? I can leave a comment in gerrit
21:59:34 <strigazi> let's discuss it there
22:00:18 <flwang> the problem i still can't fix is to make heat/nova accept both image and block_device_mapping_v2
22:00:53 <strigazi> I'll check it
22:01:04 <flwang> i will post my latest code
22:01:23 <flwang> i know i'm very close, but i just haven't fully got it done
22:01:35 <flwang> maybe you can shed some light on it for me
22:02:27 <strigazi> sure, is it up to date in gerrit with your changes?
22:03:40 <flwang> strigazi: i will upload a new one now, one sec
22:04:10 <strigazi> ok
22:06:40 <openstackgerrit> Feilong Wang proposed openstack/magnum master: [fedora atomic k8s] Add boot from volume support https://review.opendev.org/621734
22:07:18 <flwang> strigazi: https://review.opendev.org/#/c/621734/9/magnum/drivers/k8s_fedora_atomic_v1/templates/kubemaster.yaml@737
22:07:22 <flwang> that's the tricky part
22:08:13 <strigazi> "" doesn't work?
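[A hedged sketch of the Heat condition pattern under discussion, so that OS::Nova::Server receives either `image` or `block_device_mapping_v2` but never both bootable sources at once. This is not the actual patch; the parameter names, the condition, and the exact mapping entries are assumptions based on the pattern strigazi's paste demonstrates.]

```yaml
parameters:
  server_image:
    type: string
  boot_volume_size:
    type: number
    default: 0   # 0 means boot from image

conditions:
  volume_based:
    not: {equals: [{get_param: boot_volume_size}, 0]}

resources:
  kube_node:
    type: OS::Nova::Server
    properties:
      # empty string when booting from volume, so nova sees only one boot source
      image:
        if: [volume_based, '', {get_param: server_image}]
      block_device_mapping_v2:
        if:
          - volume_based
          - - boot_index: 0
              image: {get_param: server_image}
              volume_size: {get_param: boot_volume_size}
              delete_on_termination: true
          - []
```

The volume type mentioned earlier in the discussion would be an additional parameter on the mapping entry, subject to what the deployed Heat version supports.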
22:08:25 <flwang> maybe the condition i'm defining is still not correct, because heat/nova is still complaining that it's not allowed to pass in multiple bootable sources
22:08:42 <flwang> no, it doesn't work
22:08:59 <flwang> as i said above, i think i'm very close to getting it to work :)
22:09:27 <flwang> but it doesn't, and it's raising an error like 'multiple bootable sources are passed in'
22:10:22 <strigazi> ok, I can test tomorrow. We should be able to fix it
22:10:41 <flwang> strigazi: thanks
22:10:52 <strigazi> anything else? we can continue tomorrow from the office
22:11:08 <flwang> as for the upgrade patch
22:11:16 <flwang> should I wait for your comments?
22:11:36 <flwang> i will do some final clean up today
22:12:59 <strigazi> push; if you have anything else, I can't think of anything off the top of my head now
22:13:27 <flwang> strigazi: cool, then i will catch up with you tonight (your tomorrow), thank you very much
22:13:36 <flwang> jakeyip: anything else you want to bring up?
22:14:25 <strigazi> ping in gerrit if you need anything
22:14:33 * strigazi signing off
22:15:37 <openstackgerrit> Merged openstack/magnum stable/stein: Blacklist bandit 1.6.0 and cap Sphinx on Python2 https://review.opendev.org/660243
22:15:38 <jakeyip> something minor - some people have been asking for supported (?) software
22:16:36 <jakeyip> a software matrix maybe? e.g. which versions of magnum, k8s, os, agents.
22:18:02 <flwang> strigazi: thank you
22:18:05 <jakeyip> there's a change which someone says doesn't work with fa27/28. someone else wants to work on ubuntu
22:18:34 <flwang> jakeyip: i discussed that with strigazi years ago
22:19:14 <flwang> we can start building one i think, but we need to identify a maintainer for each driver
22:19:30 <flwang> and we need to make sure it's actively maintained
22:20:24 <jakeyip> ok
22:21:22 <flwang> i think strigazi and i can maintain the fedora atomic driver
22:21:43 <flwang> we can add it to the agenda of the out-of-tree driver discussion
22:21:49 <flwang> i think it's highly related
22:22:02 <jakeyip> ok
22:22:36 <jakeyip> does that include testing / supporting different occm and k8s versions?
22:24:57 <flwang> jakeyip: i think so, the matrix needs to cover the versions of k8s and occm
22:25:12 <flwang> because those are key parts of the cluster
22:25:29 <jakeyip> to put it simply, from an operator POV I would like to know what others are using already so I don't have to waste my time testing each version.
22:25:44 <jakeyip> that would be helpful to me
22:27:45 <flwang> jakeyip: yep, that's a good point and should be one of the purposes
22:28:03 <jakeyip> from a magnum devs POV it might also help focus on which versions of OS to support
22:28:08 <openstackgerrit> Feilong Wang proposed openstack/magnum stable/stein: [k8s_fedora_atomic] Make calico devices unmanaged in NetworkManager config for master node https://review.opendev.org/662997
22:28:39 <jakeyip> thanks for taking this suggestion into account
22:28:48 <flwang> jakeyip: thank you!
22:29:12 <flwang> jakeyip: i may put more time on this after the rolling upgrade patch is done
22:31:06 <jakeyip> sure. I want to help out with reviews on new features, but i'm still troubleshooting/debugging devstack
22:33:01 <jakeyip> just a quick question - do heat/magnum services start automatically for you with your local.conf?
22:39:05 <flwang> jakeyip: yes
22:39:12 <flwang> what's your current issue?
22:39:57 <jakeyip> the services don't start.
22:40:24 <jakeyip> can you pass me the commit your devstack is on? maybe something different in master
22:46:02 <jakeyip> hm, nvm, might be something else
22:46:16 <jakeyip> nothing else from me, cheers
23:00:03 <flwang> #endmeeting
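[A note on the devstack question above: for heat and magnum services to be registered and started by devstack, their plugins need to be enabled in local.conf. The lines below are a minimal illustrative fragment, not jakeyip's actual configuration; whether this was the cause of the missing services is not established in the log.]

```ini
[[local|localrc]]
enable_plugin heat https://opendev.org/openstack/heat
enable_plugin magnum https://opendev.org/openstack/magnum
```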