16:00:03 <hongbin> #startmeeting containers
16:00:08 <openstack> Meeting started Tue Jul 19 16:00:03 2016 UTC and is due to finish in 60 minutes.  The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:11 <openstack> The meeting name has been set to 'containers'
16:00:11 <hongbin> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-07-19_1600_UTC Today's agenda
16:00:19 <hongbin> #topic Roll Call
16:00:27 <muralia> Murali Allada
16:00:31 <vipulnayyar> o/
16:00:31 <adrian_otto> Adrian Otto
16:00:33 <mkrai> Madhuri Kumari
16:00:35 <strigazi> Spyros Trigazis
16:00:59 <dane_leblanc> o/
16:01:21 <Drago> o/
16:01:27 <hongbin> Thanks for joining the meeting muralia vipulnayyar adrian_otto mkrai strigazi dane_leblanc Drago
16:01:28 <rochaporto> Ricardo Rocha
16:01:36 <hongbin> #topic Announcements
16:01:47 <hongbin> I have no announcement
16:01:56 <adrian_otto> welcome back hongbin
16:02:01 <hongbin> Any announcement from others?
16:02:14 <hongbin> adrian_otto: Thanks for chairing the last meeting while I was in China
16:02:27 <adrian_otto> you are welcome
16:02:45 <hongbin> #topic Review Action Items
16:02:46 <tango__> o/
16:02:49 <hongbin> None
16:02:59 <hongbin> #topic Essential Blueprints Review
16:03:08 <hongbin> 1. Support baremetal container clusters (strigazi)
16:03:18 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support
16:03:34 <strigazi> I'm trying to set up the gate job. I'm very close
16:03:46 <strigazi> debugging locally
16:03:56 <strigazi> and madhuri works on swarm
16:04:09 <mkrai> I am writing the templates
16:04:21 <mkrai> Will post them this week
16:04:48 <strigazi> that's all from me
16:05:06 <hongbin> mkrai: Thanks. What do you think about creating a dedicated BP for Ironic-swarm?
16:05:25 <mkrai> Yes that will be good to track
16:05:45 <strigazi> one more thing
16:05:47 <hongbin> Yes, then the current BP can track the Ironic for k8s work
16:06:08 <strigazi> we build the image using dib
16:06:24 <strigazi> there are no elements for fedora 23, only 24
16:06:52 <strigazi> I'll setup a job to build fedora 24 for ironic k8s and swarm
16:07:12 <adrian_otto> strigazi: which docker version is in 24?
16:07:18 <strigazi> 10
16:07:22 <strigazi> 1.10
16:07:40 <strigazi> at least this was the last time I checked
16:07:51 <mkrai> strigazi, Please let us know when the image is uploaded, so we can test swarm too
16:08:08 <strigazi> hopefully tomorrow
16:08:19 <mkrai> cool
16:08:26 <strigazi> adrian_otto: about the atomic image
16:08:34 <strigazi> could we upgrade to 24?
16:08:50 <strigazi> what tests should we run?
16:09:00 <tango__> DIB supports 24?
16:09:05 <strigazi> I'll test everything locally for starters
16:09:10 <strigazi> tango_: yes
16:09:52 <tango__> It would be good to move to 24 then
16:10:07 <strigazi> https://review.openstack.org/#/c/339681/
16:10:16 <hongbin> strigazi: I think the ironic bay could use 24, while the atomic bay can stay at 23
16:10:40 <hongbin> Unless there is a requirement that all bays should use 24
16:10:45 <adrian_otto> hongbin: why stay with an older version?
16:10:55 <strigazi> There is not, but since it's out
16:11:05 <adrian_otto> if a newer one works, and does not exhibit any stability issues
16:11:09 <hongbin> adrian_otto: Just because atomic doesn't seem to have 24
16:11:11 <tango__> I can give it a shot to build Atomic with 24
16:11:26 <adrian_otto> do we know when 24 is scheduled to land?
16:11:39 <strigazi> landed a few days ago
16:11:51 <tango__> if everything works, I will upload Atomic 24 to the Fedora site, then users can choose
16:11:52 <hongbin> oh, that is good
16:12:46 <rochaporto> seems to be there: https://getfedora.org/en/cloud/download/atomic.html
16:13:01 <hongbin> It looks like everyone wants to upgrade to 24?
16:13:38 <hongbin> Any opposing point of view?
16:13:45 <tango__> This seems to make sense
16:13:57 <muralia> +1
16:14:07 <hongbin> #agreed upgrade atomic bay to fedora 24
16:14:30 <hongbin> OK. any other comment in this topic?
16:14:31 <tango__> Thanks rochaporto for pointer
16:15:06 <hongbin> 2. Magnum User Guide for Cloud Operator (tango)
16:15:13 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/user-guide
16:15:26 <tango__> The section for bay and Mesos are under review.
16:15:37 <tango__> Got some good feedback, thanks everyone for the reviews
16:16:07 <tango__> Next I will consolidate the TLS doc into the user guide, updating as needed
16:16:37 <tango__> I guess we will need a section for DCOS when the patches start coming
16:16:49 <tango__> that's all for now
16:17:17 <hongbin> Thanks tango__
16:17:21 <hongbin> 3. COE Bay Drivers (jamie_h, muralia)
16:17:28 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/bay-drivers
16:17:51 <muralia> I've been investigating how to use stevedore and been doing some coding around that. i should have a patch soon.
16:18:01 <muralia> for the drivers.
16:18:32 <muralia> one thing to think about is the contract the drivers need to follow.
16:19:16 <muralia> in the driver spec, we talked about the drivers being responsible for making heat calls and actually being responsible for all bay CRUD methods.
16:19:21 <muralia> thats not the case right now.
16:19:43 <muralia> so thats some work that needs to be done.
16:20:08 <muralia> im working on that as well.
16:20:43 <muralia> thats it for me.
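The driver contract muralia describes — each bay driver owning its Heat calls and all bay CRUD — can be illustrated with a simplified stand-in for the entry-point lookup that stevedore's DriverManager performs. All names below are hypothetical, not Magnum's actual code:

```python
# Simplified stand-in for stevedore entry-point lookup: a registry keyed
# on (server_type, os_distro, coe), in the spirit of the bay-drivers
# spec. The driver class and its method are hypothetical.

class SwarmBayDriver:
    """A driver owns the full bay lifecycle, including the Heat calls."""

    def create_bay(self, context, bay_name):
        # A real driver would call python-heatclient to create the
        # stack; here we just report what would happen.
        return "heat stack for %s" % bay_name


DRIVERS = {
    ("vm", "fedora-atomic", "swarm"): SwarmBayDriver,
}


def get_driver(server_type, os_distro, coe):
    """Look up the bay driver for a (server_type, os_distro, coe) tuple."""
    try:
        return DRIVERS[(server_type, os_distro, coe)]()
    except KeyError:
        raise LookupError(
            "no bay driver for %s/%s/%s" % (server_type, os_distro, coe))


print(get_driver("vm", "fedora-atomic", "swarm").create_bay(None, "mybay"))
```

In the real implementation the registry would be populated from setup.cfg entry points, so out-of-tree drivers can register without modifying Magnum.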
16:21:06 <hongbin> Thanks muralia . Others, any question for muralia ?
16:21:40 <hongbin> 4. Create a magnum installation guide (strigazi)
16:21:46 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/magnum-installation-guide
16:22:07 <strigazi> I've pushed instructions for debian and ubuntu
16:22:43 <tango__> Nice
16:23:18 <strigazi> and made a few updates to reflect the current state of the project, x509keypair support and trust_domain_name instead of id
16:23:58 <strigazi> https://review.openstack.org/#/c/343283/
16:24:29 <strigazi> The instructions are pretty much the same, the difference is that it's tested
16:24:36 <strigazi> that's all
16:25:00 <hongbin> strigazi: A question. Besides ubuntu, any other OS you plan to add?
16:25:45 <strigazi> All projects support suse, fedora and ubuntu. Many of them support debian as well
16:26:05 <hongbin> I guess we already have suse and rdo?
16:26:14 <strigazi> I don't know if it makes sense to add anything else.
16:26:17 <strigazi> yes we do
16:26:29 <strigazi> http://docs.openstack.org/project-install-guide/draft/index.html
16:26:40 <strigazi> here is the list for all projects
16:26:45 <hongbin> OK. I worry about the number of OS distros we need to support
16:26:57 <hongbin> Keeping it consistent with other projects is good
16:27:26 <strigazi> most of the instructions are in the common directory
16:27:29 <hongbin> strigazi: Then I guess we can close the BP after the ubuntu patch is landed?
16:28:28 <strigazi> yes. I can always add the debconf flavor for debian later
16:28:44 <hongbin> strigazi: ack
16:29:15 <strigazi> we'll have to check again when the official packages are out
16:29:23 <strigazi> at the end of the cycle
16:30:10 <strigazi> That's all
16:30:26 <hongbin> Thanks strigazi
16:30:43 <hongbin> Any comment?
16:30:59 <hongbin> #topic Kuryr Integration Update (tango)
16:31:11 <hongbin> tango__: ^^
16:31:18 <tango__> I attended the Kuryr meeting yesterday
16:31:31 <tango__> Their code refactoring is almost complete, just one more patch
16:31:47 <tango__> So they should be stable again.
16:32:14 <tango__> They have some initial prototype with CNI for K8s
16:32:23 <tango__> patches for that coming soon
16:33:09 <tango__> I double checked with them on our integration with Swarm, and they confirmed it should work
16:33:32 <tango__> (the way we are implementing)
16:34:15 <tango__> I am still working on getting Open vSwitch in a container.  I tried using some of the existing ones from Docker Hub but no luck
16:34:45 <tango__> I am now creating our own.
16:35:57 <rhallisey> tango__, did you try kolla's?
16:36:25 <tango__> I did, but it requires some Kolla config
16:37:25 <tango__> I just realized we don't have a BP for Swarm Kuryr integration yet, so I will create one now, and add the details we have so far
16:37:29 <rhallisey> tango__, do you need some custom config?
16:38:37 <tango__> rhallisey:  I don't think so, just basic ML2 Openvswitch that talks to the OpenStack Neutron
16:39:15 <tango__> It should look like a normal compute node configuration
16:40:01 <rhallisey> tango__, ok cool.  What you can do is run the container and mount in the config file you need
16:40:12 <rhallisey> or you can just use the config file kolla generates
16:40:53 <rhallisey> what you likely hit was the container was looking for your config file in a specific location
16:41:48 <rhallisey> tango__, don't mean to derail your meeting, but after you're done come ask in #openstack-kolla and I can help you
16:41:56 <rhallisey> or anyone in the community
16:42:36 <hongbin> Thanks tango__ rhallisey
16:42:46 <tango___> sorry, lost connection
16:43:06 <tango___> That's all I have
16:43:20 <hongbin> Any other comment?
16:43:42 <hongbin> #topic Other blueprints/Bugs/Reviews/Ideas
16:43:48 <hongbin> 1. Provide Discovery Options for Bay Types (Vipul Nayyar)
16:43:52 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/bay-type-discovery-options
16:43:57 <hongbin> vipulnayyar: ^^
16:44:00 <vipulnayyar> So, I sent the BP to the ML to get ideas and suggestions. Nothing much as of now.
16:44:14 <muralia> +1 for this work.
16:44:50 <hongbin> vipulnayyar: I guess you could summarize what you sent in ML so everyone is on the same page?
16:44:50 <vipulnayyar> As hongbin suggested to me, working on static clustering should be a good idea to start... maybe similar to arguments accepted by etcd
16:45:49 <vipulnayyar> So, it's about building a discovery service inside magnum, if I understand correctly
16:46:25 <vipulnayyar> Etcd does this by static bootstrapping and dynamic discovery, I believe
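For reference, etcd's static bootstrapping replaces the public discovery URL with an --initial-cluster argument listing every member up front. A hedged sketch of building that argument once the master IPs are known — the member names and peer port here are illustrative, not anything Magnum generates today:

```python
# Sketch of building etcd's static-bootstrap flags from known master
# addresses; member names and the peer port are illustrative.

def etcd_initial_cluster(master_ips, peer_port=2380):
    """Return etcd flags for a statically bootstrapped cluster."""
    members = ",".join(
        "master-%d=http://%s:%d" % (i, ip, peer_port)
        for i, ip in enumerate(master_ips)
    )
    return "--initial-cluster %s --initial-cluster-state new" % members


print(etcd_initial_cluster(["10.0.0.5", "10.0.0.6"]))
```

The catch, as discussed below, is getting those addresses onto each node at deploy time.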
16:47:13 <muralia> yes. this is to replace the public cluster discovery service which we currently use for all bays.
16:47:39 <muralia> one comment, maybe not do this for swarm yet. we'll just update to 1.12 when its released.
16:48:11 <hongbin> sure
16:48:31 <vipulnayyar> Thanks, then I'll start with the static method following the etcd model, if it's alright
16:48:57 <muralia> +1
16:49:19 <hongbin> vipulnayyar: If you go with the static bootstrapping approach, you might need to use SoftwareDeployment to statically write down all the IP addresses of master nodes
16:49:43 <hongbin> vipulnayyar: I did this in mesos bay to bootstrap the zookeeper
16:49:44 <vipulnayyar> Thanks hongbin , will take a look at that :-)
16:49:59 <hongbin> vipulnayyar: There is a drawback though
16:50:11 <hongbin> vipulnayyar: The speed of softwaredeployment is very slow
16:51:01 <hongbin> vipulnayyar: I am not exactly sure why, but that is the drawback
16:51:11 <Drago> I believe it has to do with the frequency with which os-collect-config polls Heat
16:51:25 <hongbin> Possibly yes
16:51:47 <vipulnayyar> I'll do a more thorough read up on this and send it out once again over ML with more concrete plans
16:52:11 <hongbin> Thanks vipulnayyar
16:52:21 <Drago> I can take a look into that software deployment problem because I am looking at converting the templates over too.
16:52:45 <vipulnayyar> Will sync up with you, Drago
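A minimal sketch of the SoftwareDeployment approach hongbin describes — pushing the master addresses onto a node, delivered through os-collect-config polling Heat (which is where the slowness Drago mentions comes in). Resource and attribute names are hypothetical, not Magnum's actual templates:

```yaml
# Hedged sketch, not Magnum's real templates: a SoftwareConfig takes the
# master IPs as an input and a SoftwareDeployment delivers it to a node.
write_masters_config:
  type: OS::Heat::SoftwareConfig
  properties:
    group: script
    inputs:
      - name: MASTER_ADDRESSES
    config: |
      #!/bin/bash
      echo "$MASTER_ADDRESSES" > /etc/sysconfig/master-addresses

write_masters_deployment:
  type: OS::Heat::SoftwareDeployment
  properties:
    config: {get_resource: write_masters_config}
    server: {get_resource: a_cluster_node}
    input_values:
      MASTER_ADDRESSES:
        list_join: [',', {get_attr: [master_group, master_ip]}]
```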
16:52:51 <adrian_otto> strigazi and tango___, we resolved the auth problem I raised for discussion last week. The fix is merged: https://review.openstack.org/342466
16:53:37 <hongbin> #topic Open Discussion
16:53:39 <tango___> Thanks Adrian, looks like a tricky bug to squash
16:53:55 <jvgrant_> i have a topic on the magnum client and versioning
16:54:07 <hongbin> jvgrant_: go ahead
16:54:23 <jvgrant_> what should be the magnum's client behavior with versioning?
16:54:34 <jvgrant_> should it always be the latest micro-version?
16:54:44 <jvgrant_> should we add an option to specify version?
16:55:00 <jvgrant_> should it support all "supported" micro-versions or just the latest?
16:55:17 <vijendar> jvgrant_ +1, default should be the latest version, with an option to specify a specific version
16:55:48 <hongbin> If that is the behavior of other openstack clients, works for me
16:55:58 <rochaporto> there is already a --magnum-api-version?
16:56:32 <jvgrant_> as we haven't versioned before, there isn't anything yet
16:56:57 <jvgrant_> we are going to be now, and wanted to make sure we added the correct behavior to client as well
16:57:06 <rochaporto> i meant at least the option appears in the current help
16:57:32 <jvgrant_> yeah, that is what we would use
16:57:44 <vijendar> I think current behavior does not respect microversion
16:57:48 <jvgrant_> right now by default it will be the base version not latest
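For context on the versioning question: OpenStack services take a microversion via the OpenStack-API-Version request header (Magnum's service type is container-infra). A hedged sketch of how a client might build that header, defaulting to the base version as jvgrant_ describes — the version numbers and helper are illustrative, not python-magnumclient's actual code:

```python
# Illustrative only: OpenStack microversions travel in an
# "OpenStack-API-Version: <service> <version>" header. The base version
# below is an assumption, not Magnum's published default.

BASE_API_VERSION = "1.1"


def api_version_header(requested=None):
    """Build the microversion header for a Magnum API request.

    With no explicit version, pin to the base version rather than
    silently negotiating the latest.
    """
    version = requested or BASE_API_VERSION
    return {"OpenStack-API-Version": "container-infra %s" % version}


print(api_version_header())        # default: base version
print(api_version_header("1.3"))   # user-specified microversion
```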
16:58:27 <rochaporto> i have one for cinder support. we're trying to add it, but stuck by https://github.com/rackspace/gophercloud/pull/603
16:59:10 <rochaporto> without it we need to put user credentials in the nodes, which is not ok
16:59:41 <hongbin> Sorry, time is up
16:59:42 <tango___> We have similar problem elsewhere also, for Kuryr
16:59:53 <hongbin> Overflow on #openstack-containers
17:00:10 <hongbin> FYI, we have a etherpad to select the midcycle topics: https://etherpad.openstack.org/p/magnum-newton-midcycle-topics . Please feel free to comment
17:00:17 <hongbin> All. Thanks for joining the meeting
17:00:21 <hongbin> #endmeeting