10:00:03 #startmeeting containers
10:00:04 Meeting started Tue May 8 10:00:03 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00:07 The meeting name has been set to 'containers'
10:00:13 #topic Roll Call
10:00:31 o/
10:00:34 o/
10:00:37 o/
10:01:22 hi
10:01:45 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-05-08_1000_UTC
10:02:07 #topic Blueprints/Bugs/Ideas
10:02:39 Already started with flwang1 in the channel: multi-master clusters without an API LB
10:03:16 The idea is:
10:03:52 let users create HA clusters without paying for/using a load balancer
10:04:18 swarm clusters don't need an LB for the masters to join the cluster
10:04:22 and in kubernetes
10:04:46 you can point flannel to many etcd servers
10:05:04 and kubelet/proxy to many kubernetes api servers
10:06:03 strigazi: i got your point, but does that mean the user should be fully aware of what he is doing?
10:06:31 as i asked above, if the server he was using is dead, how can he know what to do next?
10:07:19 advanced users will know how to point to a different master.
10:07:22 but
10:07:42 for convenience we can integrate this with magnum's monitoring
10:08:11 strigazi: the monitoring for masters is on my list, btw
10:08:38 if we all agree the user understands what he/she is doing, then that's ok
10:09:48 for example, gke is giving clusters that are not accessible from the outside
10:10:02 these clusters are cheaper since they don't have public ips
10:10:33 strigazi: that's right
10:10:49 the same could apply for LBs
10:10:51 if the cluster is just for dev/test, ci/cd, then that's true
10:12:15 we can start doing this as soon as we have master monitoring?
10:12:55 sounds like a plan
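[Editor's note, not part of the log: a minimal sketch of the LB-less multi-master idea discussed above. All addresses, file paths, and names here are invented for illustration. flanneld accepts a comma-separated list of etcd endpoints, while kubelet/kube-proxy talk to whichever API server their kubeconfig points at; with no load balancer, failover means repointing that URL at another master, either by hand or via the Magnum monitoring mentioned in the discussion.]

```yaml
# Hypothetical kubeconfig for kubelet/kube-proxy on a cluster node.
# With no load balancer, the server URL targets one master directly;
# failover means switching it to another master's address.
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://10.0.0.5:6443            # master-0; e.g. 10.0.0.6 for master-1
    certificate-authority: /etc/kubernetes/certs/ca.crt
contexts:
- name: default
  context:
    cluster: local
    user: node
current-context: default
users:
- name: node
  user:
    client-certificate: /etc/kubernetes/certs/node.crt
    client-key: /etc/kubernetes/certs/node.key
# flanneld, by contrast, accepts several etcd servers natively, e.g. in its
# sysconfig (shown here as a comment, not YAML):
#   FLANNEL_ETCD_ENDPOINTS="http://10.0.0.5:2379,http://10.0.0.6:2379,http://10.0.0.7:2379"
```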
10:13:51 ok, next
10:13:57 test eventlet 0.23
10:14:26 swift already found a breaking change with the new eventlet version:
10:14:32 #link https://bugs.launchpad.net/swift/+bug/1769749
10:14:33 Launchpad bug 1769749 in OpenStack Object Storage (swift) "eventlet 0.23.0 breaks swift's unicode handling" [Undecided,Confirmed]
10:14:51 0.23 is the version that has the fix for the kubernetes client
10:15:57 does that mean we still can't use that version?
10:16:01 I'm having a look into this; it is absolutely necessary for implementing the monitoring we want while using oslo.service
10:16:37 I think we can't use it if it is not accepted in o/requirements
10:16:47 ricolin: isn't it like this?
10:17:30 yes
10:18:03 if it's in global requirements
10:19:49 we need to find one more fix for eventlet I guess...
10:20:29 strigazi: back to the original plan?
10:20:59 I can have a quick look today and see if it can be fixed for swift
10:21:10 cool
10:21:45 If not, back to the original plan I guess
10:22:10 :(
10:22:49 Next, for upgrades: I'm having some issues with the rebuild of the master server, hence the no-LB multi-master proposal
10:24:26 I'll push an api-ref patch today for upgrades too
10:25:44 For reference, this is the example request: http://paste.openstack.org/show/720555/
10:26:30 flwang1: http://paste.openstack.org/show/720557/
10:26:43 strigazi: cool
10:26:47 will review it
10:26:58 that is all from me
10:28:27 i'm still working on/testing the k8s-keystone-auth
10:28:42 hopefully i can submit another patch set today or tomorrow
10:28:54 no progress on the monitoring stuff
10:30:24 that's all from me
10:30:59 thanks
10:31:22 slunkad: do you have something to discuss?
10:31:24 I am testing ricolin's patch today and will post a review soon, and then will get started on the magnum patch for trusts
10:31:34 slunkad: cool
10:31:56 slunkad, cool
10:32:14 ricolin: since you are here, I have a question when slunkad is done
10:32:24 I am done
10:32:32 :)
10:33:33 ricolin: is it possible to make an OS::Nova::Server resource depend on a SoftwareDeployment resource?
10:34:13 And for that software deployment to be applied to the same OS::Nova::Server
10:34:25 not if the SoftwareDeployment contains that nova server info
10:34:33 why do you need this?
10:35:11 I want to drain a Nova server first and then delete it or rebuild it
10:35:52 I could run the drain command on the master node though...
10:36:00 So this is possible, right:
10:36:42 apply a SoftwareDeployment to NovaA and then replace/rebuild NovaB
10:36:46 ricolin: ^^
10:37:47 I see
10:39:54 I'll try it
10:40:02 strigazi, wondering how you can rebuild before the SD is created?
10:40:22 Is all this in the same stack creation process?
10:40:33 yes
10:40:55 and how do you rebuild it?
10:41:14 Passing a new glance image
10:41:55 with a stack update?
10:41:59 yes
10:42:52 you can update a new SD to replace the old one
10:43:20 so after the server is updated, the new SD will do the software deploy again
10:44:21 I want to run the first SD before the server update
10:44:32 Feilong Wang proposed openstack/magnum master: Support Keystone AuthN and AuthZ https://review.openstack.org/561783
10:44:50 flwang1: ++
10:45:33 strigazi, you can create an SD for a server and when it's complete, use the second SD to update again
10:45:51 strigazi, have to check if SD is rebuild-aware
10:46:18 flwang1: maybe we need to go for an SD here:
10:46:25 * ricolin kind of forgot about that part
10:46:39 flwang1: http://paste.openstack.org/show/720559/
10:46:55 ricolin: thanks, I'll check
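[Editor's note, not part of the log: a rough HOT sketch of the drain-before-rebuild idea discussed above, with all resource, parameter, and image names invented for illustration. A SoftwareDeployment runs a drain script on server_a (which has kubectl access), and server_b declares depends_on on that deployment, so a rebuild of server_b triggered by an image change waits for the drain. As ricolin points out, the dependency can only target a deployment applied to a different server, and whether the deployment re-triggers on the relevant stack update ("rebuild-aware") still needs checking.]

```yaml
heat_template_version: 2016-10-14

description: >
  Illustrative sketch only, not Magnum's actual templates: server_a runs a
  drain script through a SoftwareDeployment, and server_b depends on that
  deployment, so rebuilding server_b with a new image waits for the drain.

parameters:
  node_b_name:
    type: string
    default: cluster-master-1
  image_b:
    type: string
    description: changing this on a stack update triggers a rebuild of server_b

resources:
  drain_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      inputs:
        - name: node_to_drain
      config: |
        #!/bin/bash
        # runs on server_a, which has kubectl access, and drains the node
        # that is about to be rebuilt
        kubectl drain "$node_to_drain" --ignore-daemonsets

  drain_node_b:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: drain_config}
      server: {get_resource: server_a}
      actions: [UPDATE]
      input_values:
        node_to_drain: {get_param: node_b_name}

  server_a:
    type: OS::Nova::Server
    properties:
      image: fedora-atomic-27
      flavor: m1.small
      user_data_format: SOFTWARE_CONFIG    # needed for SoftwareDeployment

  server_b:
    type: OS::Nova::Server
    # server_b is only touched after drain_node_b has completed
    depends_on: drain_node_b
    properties:
      name: {get_param: node_b_name}
      image: {get_param: image_b}
      flavor: m1.small
      image_update_policy: REBUILD
```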
10:48:49 flwang1: did you see the paste?
10:48:51 clicking
10:49:37 yes
10:49:40 i saw that
10:49:55 we are very close to the limit, do you mean that?
10:50:02 yes
10:50:19 And this is already after this patch:
10:50:27 https://review.openstack.org/#/c/566533/
10:52:04 yep, it's on my list
10:52:35 i will review it today or tomorrow
10:52:35 it was | 63756 | kube-calico-b3b6jkuxsinl-master-0 |
10:52:46 what's our target?
10:52:55 and it failed on my devstack
10:53:03 flwang1: what do you mean?
10:54:47 i mean, is the target 10000 or 0?
10:55:06 or whatever number is feasible ;)
10:55:34 0 we can't do; if we cut it in half it should be ok
10:55:59 similar to my patch: https://review.openstack.org/#/c/561858/
10:56:49 Time is running out
10:56:49 great, it would be cool if we could set a goal
10:57:19 I'll rebase my patch and see which number makes sense
10:58:14 strigazi: nice, i will review it
10:58:22 i think that's all from us
10:59:03 Let's wrap, we can continue in the channel
10:59:22 Thanks for joining folks!
10:59:27 #endmeeting