16:00:23 <strigazi> #startmeeting containers
16:00:24 <openstack> Meeting started Tue Aug 22 16:00:23 2017 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:25 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:27 <openstack> The meeting name has been set to 'containers'
16:00:56 <strigazi> #topic Roll Call
16:00:58 <adrian_otto> Adrian Otto
16:01:08 <rochaporto> Ricardo Rocha
16:01:09 <strigazi> Spyros Trigazis
16:01:29 <slunkad> Sayali Lunkad
16:02:55 <strigazi> Thanks for joining the meeting adrian_otto rochaporto slunkad
16:03:05 <adrian_otto> :-)
16:03:17 <strigazi> #topic Announcements
16:03:52 <adrian_otto> is it the right time to welcome strigazi as our new PTL?
16:04:01 <rochaporto> i think it is! :)
16:04:14 <strigazi> Some time ago we decided to go for bi-weekly meetings, but it didn't work out very well :)
16:04:15 <adrian_otto> Welcome strigazi !! You will be an awesome PTL!
16:04:21 <rochaporto> and thanks Adrian! :)
16:04:35 <strigazi> Thank you adrian_otto ;)
16:05:05 <strigazi> Let's try to have our meeting every week again at this time
16:05:17 <adrian_otto> +1
16:05:27 <slunkad> strigazi: +1, it's easier for new people like me to catch up!
16:05:43 <mvpnitesh> Congrats Strigazi
16:05:54 <slunkad> strigazi: and congrats :)
16:06:15 <strigazi> The agenda will be at the usual place again https://wiki.openstack.org/wiki/Meetings/Containers
16:06:21 <mvelten> congrats ! Can I get free +2 ?
16:06:22 <strigazi> mvpnitesh slunkad thanks
16:06:49 <strigazi> mvelten: xD
16:07:26 <strigazi> This week we have to do the Pike release. Most likely I'll do it on Thursday
16:07:57 <strigazi> The most important feature that we are trying to merge is mvelten's run-kube-in-containers
16:08:10 <strigazi> #link https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/run-kube-as-container
16:08:33 <mvelten> yey this is happening
16:08:47 <strigazi> This is a slightly new approach that we are working on with the fedora/centos-atomic team
16:09:07 <strigazi> After that we can still take in fixes for pike
16:09:14 <adrian_otto> I will review these this morning and vote on them
16:09:28 <strigazi> as we do for stable branches
16:09:33 <strigazi> thanks adrian_otto
16:10:20 <strigazi> you may notice that the kubernetes tests are non-voting due to the constant timeouts, but the logs have great detail in logs/cluster-nodes
16:11:05 <strigazi> One more announcement
16:11:47 <strigazi> Since we won't have a room at the PTG, we can make the next meeting or the next two meetings our planning meetings.
16:12:11 <rochaporto> for queens?
16:12:16 <strigazi> So that new contributors can take new blueprints or bugs
16:12:20 <strigazi> yes for queens
16:12:21 <slunkad> strigazi: are you going to be attending the PTG?
16:12:31 <strigazi> slunkad: no, I won't
16:12:53 <adrian_otto> hey I have a question about one of the patches in the above set
16:12:54 <adrian_otto> https://review.openstack.org/#/c/488443/8/magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-master.sh
16:13:22 <adrian_otto> why did we decide to bind to a localhost address explicitly on line 15 of the proposed patch?
16:13:26 <slunkad> strigazi: +1 for the planning meetings
16:13:55 <mvelten> it was done before like that but not here
16:13:56 <mvelten> 2s
16:14:15 <mvelten> https://review.openstack.org/#/c/488443/8/magnum/drivers/common/templates/kubernetes/fragments/enable-kube-proxy-master.sh
16:14:22 <mvelten> you can see the master here
16:14:24 <strigazi> adrian_otto the code is coming from the previous definition of kube-proxy.
16:14:51 <strigazi> So, these are the announcements I have.
16:15:06 <strigazi> I'll send an email to the ML to notify everyone
16:15:21 <adrian_otto> ok, I will withhold further discussion until strigazi calls on me
16:15:27 <strigazi> so if you have ideas we can discuss them in the next meeting
16:16:03 <strigazi> Of course, for mvpnitesh and slunkad who are here we can have a chat today too
16:16:39 <mvelten> I need to leave, adrian_otto please comment on the review and I'll address them
16:17:07 <adrian_otto> It's really not a problem with this contribution. I just think we are configuring k8s suboptimally
16:17:27 <strigazi> since there are no action items in the agenda,
16:17:30 <slunkad> strigazi: sure, I'm mostly working on integrating the opensuse driver atm, but I also want to get involved in the main project itself, so it would help to get a sort of status of the project
16:17:49 <mvpnitesh> Can I work on this https://blueprints.launchpad.net/magnum/+spec/ask-confirm-when-delete-bay
16:19:13 <strigazi> slunkad: sounds good, we can add the suse driver to the main tree. If we have a regular contributor it would be easier
16:19:43 <strigazi> adrian_otto: what do you think about https://blueprints.launchpad.net/magnum/+spec/ask-confirm-when-delete-bay ?
16:20:10 <strigazi> slunkad: what is your timezone?
16:20:16 <slunkad> strigazi: yes I am working for suse so I will be the regular contributor there!
16:20:22 <slunkad> strigazi: CEST
16:21:28 <strigazi> slunkad: mine too, so if you need anything, shoot in #openstack-containers. Is there something specific that you want to discuss today?
16:22:07 <rochaporto> on ask to confirm, this is client side only right? we should do it with a -y option so we don't have to confirm manually (we have several scripts doing things like this)
16:22:33 <slunkad> strigazi: nothing specific really, just want to know where the focus of the project is for this cycle but if we have the planning meetings then I guess its a topic we can discuss there
16:22:52 <strigazi> rochaporto mvpnitesh: or --yes, like heat does
16:23:15 <rochaporto> sounds good
16:23:27 <strigazi> slunkad: I guess that you are interested in suse + kubernetes
16:23:51 <slunkad> strigazi: that and the overall project too
16:24:18 <mvpnitesh> strigazi: like heat does
16:24:20 <strigazi> what we, and me specifically, are working on is lifecycle operations, and more specifically upgrades.
16:25:04 <strigazi> We rely on heat for all drivers right now, and to do upgrades we will use the heat agent
16:25:37 <strigazi> In the same context, for all drivers we try to use stock images
16:26:07 <strigazi> for fedora based ones we do, it would be great to do the same for opensuse
16:26:13 <slunkad> strigazi: I see, is there a bp for the upgrade stuff?
16:26:54 <strigazi> slunkad: https://review.openstack.org/#/c/433728/ It is outdated but the direction is the same.
16:27:07 <strigazi> I'll need to update it.
16:28:03 <strigazi> And since we are a containers project, we try to deliver all the content in the images as containers
16:28:10 <rochaporto> slunkad: is there something similar to fedora/centos atomic in suse?
16:28:41 <strigazi> I think opensuse uses a stock image too, nothing special to magnum
16:28:55 <strigazi> rochaporto: slunkad that was my next question xD
16:29:59 <slunkad> strigazi: not atm but there is some effort being put into it recently
16:30:38 <strigazi> slunkad: ok
16:33:36 <strigazi> slunkad: there are two more important features that we want to add
16:33:50 <strigazi> One is cluster federation
16:34:04 <strigazi> #link https://review.openstack.org/#/c/489609/
16:34:41 <strigazi> and the second one that many users ask about is an option or a kubernetes driver that supports kuryr
16:35:11 <strigazi> https://blueprints.launchpad.net/magnum/?searchtext=kuryr
16:35:44 <slunkad> strigazi: what about the cinder-volumes stuff? I think that is in Pike, right?
16:36:14 <rochaporto> for federation clenimar has submitted the db management patch, the api part will also be there this week
16:36:26 <rochaporto> #link https://review.openstack.org/#/c/494546/
16:36:59 <strigazi> slunkad: in kubernetes 1.7.3+ the cloud provider is working fine, and for docker swarm and mesos we have rexray
16:37:43 <slunkad> strigazi: ok
16:37:48 <strigazi> slunkad: so for the fedora based drivers the integration with cinder is there
16:38:04 <slunkad> strigazi: yes I need to add it for the suse driver too
16:38:26 <strigazi> as soon as we branch we can move suse in-tree
16:38:44 <slunkad> strigazi: ok sounds great!
16:39:18 <strigazi> slunkad: I'll train myself to feel comfortable testing the opensuse driver too ;)
16:40:11 <strigazi> slunkad: I think the best approach would be to align the opensuse driver with the fedora ones and run the same set of tests and expect the same results
16:40:30 <strigazi> the goal is the same for both drivers anyway
16:41:28 <slunkad> strigazi: yes, atm we have it on sles, so I was thinking about maybe having ci with opensuse, but for the time being I think I will also put more effort into testing it with opensuse
16:41:35 <slunkad> strigazi: yes for sure
16:43:00 <strigazi> I'm trying to have a 3rd party ci on centos ci for the fedora based ones; if we have one for suse it would be amazing
16:44:38 <rochaporto> do we clean up the blueprints before the next meeting? it will probably make things easier
16:45:01 <slunkad> strigazi: ok, I can check what is possible from our side!
16:45:10 <mvpnitesh> guys, I wanted to work on this bug https://bugs.launchpad.net/magnum/+bug/1704329 and would like to give a solution in this way: i) add a function like check_discovery_url() in https://github.com/openstack/magnum/blob/master/magnum/common/urlfetch.py which checks whether the magnum host can access "https://discovery.etcd.io" ii) call that check_discovery_url() function in "def post()", i.e. at https://github.com/ope
16:45:10 <openstack> Launchpad bug 1704329 in Magnum "Clluster creation failed because of failure in getting etcd discovery url " [High,New] - Assigned to M V P Nitesh (m-nitesh)
16:45:55 <strigazi> rochaporto: yes, I'll go through them for sure before the next meeting.
16:46:02 <strigazi> mvpnitesh go for it
16:46:06 <rochaporto> strigazi: thanks!
16:46:26 <mvpnitesh> strigazi: Sure
16:47:57 <mvpnitesh> strigazi: what about this https://blueprints.launchpad.net/magnum/+spec/ask-confirm-when-delete-bay , I wanted to go the heat way
16:48:03 <strigazi> mvpnitesh: slunkad rochaporto is there anything else?
16:48:34 <strigazi> #action strigazi to cleanup the blueprint list
16:48:36 <slunkad> strigazi: nope, thank you for all the information!
16:48:49 <mvpnitesh> strigazi: what about this bug https://bugs.launchpad.net/oslo.config/+bug/1517839, is it present in magnum? Just wanted to know whether I can work on this or whether it is not required for the magnum project.
16:48:51 <openstack> Launchpad bug 1517839 in tacker "Make CONF.set_override with parameter enforce_type=True by default" [Undecided,In progress] - Assigned to Ji.Wei (jiwei)
16:49:25 <strigazi> #action Make Queens planning in the next two meetings
16:49:46 <strigazi> mvpnitesh I'll have a look
16:50:25 <strigazi> #action strigazi to send an email to the ML for Queens planning
16:50:39 <mvpnitesh> strigazi: ok, what about this https://blueprints.launchpad.net/magnum/+spec/cluste-netinfo
16:51:33 <strigazi> mvpnitesh you can start working on ask-confirm-when-delete-bay by adding the --yes parameter
16:52:28 <strigazi> mvpnitesh: I'm not sure about bug #1517839
16:52:29 <openstack> bug 1517839 in tacker "Make CONF.set_override with parameter enforce_type=True by default" [Undecided,In progress] https://launchpad.net/bugs/1517839 - Assigned to Ji.Wei (jiwei)
16:52:45 <strigazi> mvpnitesh: I'll get back to you tomorrow about it
16:53:07 <strigazi> mvpnitesh: for ask-confirm-when-delete-bay you can start working; I'll approve the bp
16:53:49 <mvpnitesh> strigazi: for that I should add the --yes parameter and it is mandatory to delete the cluster
16:53:51 <slunkad> I have to leave but it was nice talking to you all! See you next week
16:53:51 <mvpnitesh> right?
16:54:03 <strigazi> slunkad: see you
16:54:21 <strigazi> mvpnitesh yes, you can have a look in the heat code
16:54:48 <mvpnitesh> slunkad: see you
16:55:21 <mvpnitesh> strigazi: In heat it asks for confirmation and if we give 'y' it will delete the stack; I've to implement it in the same way, right?
16:55:31 <rochaporto> leaving here too, thanks all and talk next week
16:55:34 <strigazi> mvpnitesh: yes
16:55:43 <mvpnitesh> strigazi: ok
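A rough illustration of the confirmation flow agreed above, modeled on heat's stack-delete prompt; the helper name, flag wiring, and prompt text are assumptions for the sketch, not the actual python-magnumclient patch:

    import sys

    def confirm_delete(cluster_names, assume_yes=False):
        """Ask the user to confirm deletion unless --yes was passed."""
        if assume_yes:
            # Scripts pass --yes to skip the interactive prompt.
            return True
        if not sys.stdin.isatty():
            # No terminal to prompt on, so refuse rather than hang.
            return False
        answer = input("Are you sure you want to delete %s? (y/n) "
                       % ", ".join(cluster_names))
        return answer.strip().lower() in ("y", "yes")

    # Usage in the delete command handler (illustrative):
    # if not confirm_delete(args.clusters, args.yes):
    #     print("Aborting cluster deletion.")
    #     return

Refusing when stdin is not a tty is one possible design choice for the script case rochaporto mentioned; passing --yes remains the explicit way to skip the prompt.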
16:55:49 <strigazi> rochaporto: goodbye
16:56:23 <strigazi> I think we can wrap up. See you next week or in #openstack-containers
16:56:30 <mvpnitesh> sure
16:56:41 <strigazi> #endmeeting