16:00:23 #startmeeting containers
16:00:24 Meeting started Tue Aug 22 16:00:23 2017 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:27 The meeting name has been set to 'containers'
16:00:56 #topic Roll Call
16:00:58 Adrian Otto
16:01:08 Ricardo Rocha
16:01:09 Spyros Trigazis
16:01:29 Sayali Lunkad
16:02:55 Thanks for joining the meeting adrian_otto rochaporto slunkad
16:03:05 :-)
16:03:17 #topic Announcements
16:03:52 is it the right time to welcome strigazi as our new PTL?
16:04:01 I think it is! :)
16:04:14 Some time ago we decided to go for a bi-weekly meeting but it didn't work out very well :)
16:04:15 Welcome strigazi !! You will be an awesome PTL!
16:04:21 and thanks to Adrian! :)
16:04:35 Thank you adrian_otto ;)
16:05:05 Let's try to have our meeting every week again at this time
16:05:17 +1
16:05:27 strigazi: +1, it's easier for new people like me to catch up!
16:05:43 Congrats strigazi
16:05:54 strigazi: and congrats :)
16:06:15 The agenda will be at the usual place again https://wiki.openstack.org/wiki/Meetings/Containers
16:06:21 congrats! Can I get a free +2?
16:06:22 mvpnitesh slunkad thanks
16:06:49 mvelten: xD
16:07:26 This week we have to do the Pike release. Most likely I'll do it on Thursday
16:07:57 The most important feature we're trying to merge is mvelten's run-kube-in-containers
16:08:10 #link https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/run-kube-as-container
16:08:33 yey this is happening
16:08:47 This is a slightly new approach we're working on with the fedora/centos-atomic team
16:09:07 After that we can still take in fixes for Pike
16:09:14 I will review these this morning and vote on them
16:09:28 as we do for stable branches
16:09:33 thanks adrian_otto
16:10:20 you may notice that the kubernetes tests are non-voting due to the constant timeouts, but the logs have great detail in logs/cluster-nodes
16:11:05 One more announcement
16:11:47 Since we won't have a room at the PTG we can make the next meeting or the next two meetings our planning meetings.
16:12:11 for Queens?
16:12:16 So that new contributors can pick up new blueprints or bugs
16:12:20 yes, for Queens
16:12:21 strigazi: are you going to be attending the PTG?
16:12:31 slunkad: no, I won't
16:12:53 hey I have a question about one of the patches in the above set
16:12:54 https://review.openstack.org/#/c/488443/8/magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-master.sh
16:13:22 why did we decide to bind to a localhost address explicitly on line 15 of the proposed patch?
16:13:26 strigazi: +1 for the planning meetings
16:13:55 it was done like that before, but not here
16:13:56 2s
16:14:15 https://review.openstack.org/#/c/488443/8/magnum/drivers/common/templates/kubernetes/fragments/enable-kube-proxy-master.sh
16:14:22 you can see the master here
16:14:24 adrian_otto: the code comes from the previous definition of kube-proxy.
16:14:51 So, these are the announcements I have.
16:15:06 I'll send an email to the ML to notify everyone
16:15:21 ok, I will withhold further discussion until strigazi calls on me
16:15:27 so if you have ideas we can discuss them in the next meeting
16:16:03 Of course, for mvpnitesh and slunkad who are here we can have a chat today too
16:16:39 I need to leave, adrian_otto please comment on the reviews, I'll address them
16:17:07 It's really not a problem with this contribution. I just think we are configuring k8s suboptimally
16:17:27 since there are no action items in the agenda,
16:17:30 strigazi: sure, I'm mostly working on integrating the opensuse driver atm, but I also want to get involved in the main project itself, so it would help to get a sort of status of the project
16:17:49 Can I work on this: https://blueprints.launchpad.net/magnum/+spec/ask-confirm-when-delete-bay
16:19:13 slunkad: sounds good, we can add the suse driver to the main tree. If we have a regular contributor it would be easier
16:19:43 adrian_otto: what do you think about https://blueprints.launchpad.net/magnum/+spec/ask-confirm-when-delete-bay ?
16:20:10 slunkad: what is your timezone?
16:20:16 strigazi: yes I am working for suse so I will be the regular contributor there!
16:20:22 strigazi: CEST
16:21:28 slunkad: mine too, so if you need anything shoot in #openstack-containers, is there something specific that you want to discuss today?
16:22:07 on ask-to-confirm, this is client-side only, right? we should do it with a -y option so we don't have to confirm manually (we have several scripts doing things like this)
16:22:33 strigazi: nothing specific really, just want to know where the focus of the project is for this cycle, but if we have the planning meetings then I guess it's a topic we can discuss there
16:22:52 rochaporto mvpnitesh: or --yes, like heat does
16:23:15 sounds good
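
For readers following the ask-confirm-when-delete-bay discussion above, here is a minimal sketch of a client-side confirmation prompt with a -y/--yes bypass, in the spirit of heat's --yes flag. It is illustrative only: the command name, argument names, and the final API call are assumptions, not the actual magnumclient code.

    import argparse
    import sys

    # Hypothetical CLI sketch: prompt before a destructive delete, with a
    # --yes flag to skip the prompt so scripts can run non-interactively.
    parser = argparse.ArgumentParser(prog='magnum cluster-delete')
    parser.add_argument('cluster', help='name or UUID of the cluster')
    parser.add_argument('-y', '--yes', action='store_true',
                        help='skip the confirmation prompt')
    args = parser.parse_args()

    if not args.yes:
        answer = input("Are you sure you want to delete cluster "
                       "'%s'? [y/N] " % args.cluster)
        if answer.lower() not in ('y', 'yes'):
            print('Aborting cluster delete.')
            sys.exit(1)

    # ... the real client would call the Magnum API here ...
    print('Delete requested for cluster %s' % args.cluster)

With this shape, an interactive `magnum cluster-delete mycluster` would prompt for confirmation, while the scripted `magnum cluster-delete --yes mycluster` would proceed straight to the delete.
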
16:23:27 slunkad: I guess that you are interested in suse + kubernetes
16:23:51 strigazi: that and the overall project too
16:24:18 strigazi: like heat does
16:24:20 what we, and me specifically, are working on is lifecycle operations, and more specifically upgrades.
16:25:04 We rely on heat for all drivers right now, and to do upgrades we will use the heat agent
16:25:37 In the same context, for all drivers we try to use stock images
16:26:07 for the fedora-based ones we do, it would be great to do the same for opensuse
16:26:13 strigazi: I see, is there a bp for the upgrade stuff?
16:26:54 slunkad: https://review.openstack.org/#/c/433728/ It is outdated but the direction is the same.
16:27:07 I'll need to update it.
16:28:03 And since we are a containers project we try to deliver all content in the images in containers
16:28:10 slunkad: is there something similar to fedora/centos atomic in suse?
16:28:41 I think opensuse uses a stock image too, nothing special to magnum
16:28:55 rochaporto: slunkad that was my next question xD
16:29:59 strigazi: not atm but there is some effort being put into it recently
16:30:38 slunkad: ok
16:33:36 slunkad: there are two more important features that we want to add
16:33:50 One is cluster federation
16:34:04 #link https://review.openstack.org/#/c/489609/
16:34:41 and the second one that many users ask about is an option or a kubernetes driver that supports kuryr
16:35:11 https://blueprints.launchpad.net/magnum/?searchtext=kuryr
16:35:44 strigazi: what about the cinder-volumes stuff? I think that is in Pike, right?
16:36:14 for federation clenimar has submitted the db management patch, the api part will also be there this week
16:36:26 #link https://review.openstack.org/#/c/494546/
16:36:59 slunkad: in kubernetes 1.7.3+ the cloud provider is working fine, and for docker swarm and mesos we have rexray
16:37:43 strigazi: ok
16:37:48 slunkad: so for the fedora-based drivers the integration with cinder is there
16:38:04 strigazi: yes I need to add it for the suse driver too
16:38:26 as soon as we branch we can move suse in-tree
16:38:44 strigazi: ok sounds great!
16:39:18 slunkad: I'll train myself to feel comfortable testing the opensuse driver too ;)
16:40:11 slunkad: I think the best approach would be to align the opensuse driver with the fedora ones, run the same set of tests and expect the same results
16:40:30 the goal is the same for both drivers anyway
16:41:28 strigazi: yes, atm we have it on sles so I was thinking about maybe having CI with opensuse, but for the time being I think I will also put more effort into testing it with opensuse
16:41:35 strigazi: yes for sure
16:43:00 I'm trying to have a 3rd-party CI on CentOS CI for the fedora-based ones; if we had one for suse it would be amazing
16:44:38 do we clean up the blueprints before the next meeting? it will probably make things easier
16:45:01 strigazi: ok I can check what is possible from our side!
16:45:10 guys, I wanted to work on this bug: https://bugs.launchpad.net/magnum/+bug/1704329 . I would like to solve it this way: i) add a function like check_discovery_url() in https://github.com/openstack/magnum/blob/master/magnum/common/urlfetch.py which checks whether the magnum host can access "https://discovery.etcd.io" ii) call that check_discovery_url() function in "def post()", i.e. at https://github.com/ope
16:45:10 Launchpad bug 1704329 in Magnum "Clluster creation failed because of failure in getting etcd discovery url " [High,New] - Assigned to M V P Nitesh (m-nitesh)
16:45:55 rochaporto: yes, I'll go through them for sure before the next meeting.
16:46:02 mvpnitesh go for it
16:46:06 strigazi: thanks!
16:46:26 strigazi: Sure
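
A minimal sketch of the check mvpnitesh proposes above, assuming the reachability-test approach he describes. The function name comes from his proposal; the error type and the exact wiring into the API's post() handler are illustrative, and a real patch would reuse magnum's existing urlfetch helpers and exception classes.

    import requests

    def check_discovery_url(url='https://discovery.etcd.io', timeout=5):
        """Verify that this host can reach the public etcd discovery service.

        Intended to be called early in the cluster-create path (e.g. from
        post()) so an unreachable discovery service fails fast with a clear
        message instead of an opaque cluster-creation failure later on.
        """
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()  # treat HTTP errors as unreachable too
        except requests.RequestException as e:
            # Illustrative error type; magnum would raise one of its own
            # exception classes here.
            raise RuntimeError(
                'Cannot reach the etcd discovery service at %s: %s'
                % (url, e))
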
16:47:57 strigazi: what about this: https://blueprints.launchpad.net/magnum/+spec/ask-confirm-when-delete-bay , I wanted to go the heat way
16:48:03 mvpnitesh: slunkad rochaporto is there anything else?
16:48:34 #action strigazi to clean up the blueprint list
16:48:36 strigazi: nope, thank you for all the information!
16:48:49 strigazi: what about this bug: https://bugs.launchpad.net/oslo.config/+bug/1517839 ? It is there in magnum, just wanted to know whether I can work on it or it is not required for the magnum project.
16:48:51 Launchpad bug 1517839 in tacker "Make CONF.set_override with parameter enforce_type=True by default" [Undecided,In progress] - Assigned to Ji.Wei (jiwei)
16:49:25 #action Make Queens planning in the next two meetings
16:49:46 mvpnitesh: I'll have a look
16:50:25 #action strigazi to send an email to the ML for Queens planning
16:50:39 strigazi: ok, what about this: https://blueprints.launchpad.net/magnum/+spec/cluste-netinfo
16:51:33 mvpnitesh: you can start working on ask-confirm-when-delete-bay by adding the --yes parameter
16:52:28 mvpnitesh: I'm not sure about bug #1517839
16:52:29 bug 1517839 in tacker "Make CONF.set_override with parameter enforce_type=True by default" [Undecided,In progress] https://launchpad.net/bugs/1517839 - Assigned to Ji.Wei (jiwei)
16:52:45 mvpnitesh: I'll get back to you tomorrow about it
16:53:07 mvpnitesh: for ask-confirm-when-delete-bay you can start working, I'll approve the bp
16:53:49 strigazi: for that I should add the --yes parameter and it is mandatory to delete the cluster,
16:53:51 I have to leave but it was nice talking to you all! See you next week
16:53:51 right?
16:54:03 slunkad: see you
16:54:21 mvpnitesh: yes, you can have a look at the heat code
16:54:48 slunkad: see you
16:55:21 strigazi: In heat it asks for confirmation and if we give 'y' it will delete the stack, I have to implement it the same way, right?
16:55:31 leaving here too, thanks all and talk next week
16:55:34 mvpnitesh: yes
16:55:43 strigazi: ok
16:55:49 rochaporto: goodbye
16:56:23 I think we can wrap up, see you next week or in #openstack-containers
16:56:30 sure
16:56:41 #endmeeting