13:01:54 <newt_> #startmeeting openstack-salt
13:01:55 <openstack> Meeting started Tue Aug  2 13:01:54 2016 UTC and is due to finish in 60 minutes.  The chair is newt_. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:56 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:59 <openstack> The meeting name has been set to 'openstack_salt'
13:01:59 <newt_> hello
13:02:07 <newt_> #topic roll call
13:02:15 <newt_> o/
13:03:13 <lelouch_> o/
13:03:44 <newt_> anyone else 'round? :)
13:03:52 <newt_> ok moving on
13:03:59 <newt_> #topic Introduction
13:04:04 <newt_> This meeting is for the openstack-salt team
13:04:21 <newt_> if you're interested in contributing to the discussion, please join #openstack-salt
13:04:26 <newt_> #link http://eavesdrop.openstack.org/#OpenStack_Salt_Team_Meeting
13:04:34 <newt_> #link https://wiki.openstack.org/wiki/Meetings/openstack-salt
13:04:40 <newt_> #topic Review past action items
13:05:14 * newt_ newt to add the salt-formula-kubernetes
13:05:35 <newt_> this one is being processed by openstack governance team
13:06:00 <newt_> and is being accepted so far, I think it will be integrated within a week
13:06:18 * newt_ tux to update the documentation on bootstrap for vagrant and docker
13:06:25 <newt_> is Tux around?
13:07:21 * newt_ apuimedo to contact jpavlik about the kuryr integration
13:07:30 <newt_> is apuimedo around? :o
13:08:37 <newt_> I don't think Jakub got any message, but I'll have to ask him
13:09:01 <newt_> and the last item from past issues is ACTION: create deployments for docker and kubernetes and synchronise with Tux on docs
13:09:44 <newt_> This one has not moved forward much
13:10:07 <newt_> we have added the metadata for the deployment but it is not tested yet
13:12:04 <newt_> #topic Today's Agenda
13:13:02 <newt_> well, today's items follow on from past weeks' issues: getting the Kubernetes-based/docker deployments up to date at the doc and deployment level
13:13:13 <apuimedo> newt_: I am around
13:13:17 <apuimedo> I wrote to him on IRC but I don't think he saw it
13:14:12 <newt_> apuimedo: hello, can you write him an email? jakub.pavlik@tcpcloud.eu
13:14:47 <apuimedo> sure
13:14:50 <newt_> as there aren't many folks around, we can start the discussion now
13:14:57 <newt_> about the kuryr needs
13:15:32 <newt_> to set up the formula basis, and get to know the architecture and services/roles
13:16:01 <apuimedo> well, the libnetwork integration already has a container
13:16:21 <apuimedo> we are now finalizing support for keystone v3
13:16:27 <apuimedo> since kolla requires it
13:16:42 <apuimedo> then it needs to know where Neutron is
13:16:48 <apuimedo> and have admin role credentials
13:16:53 <apuimedo> (the info is passed as env vars)
13:17:32 <apuimedo> https://hub.docker.com/r/kuryr/libnetwork/
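
    A minimal sketch of how a Salt formula might run that container with the
    environment variables apuimedo describes, using Salt's dockerng state.
    All variable names and values below are illustrative assumptions; the
    actual parameter list is in the paste linked a few lines further down.

        kuryr_libnetwork_container:
          dockerng.running:
            - name: kuryr-libnetwork
            - image: kuryr/libnetwork:latest
            - environment:
              # hypothetical names; see the paste below for the real parameters
              - SERVICE_USER: admin                     # admin-role credentials
              - SERVICE_PASSWORD: secret
              - IDENTITY_URL: http://10.0.0.10:5000/v3  # Keystone v3 endpoint
              - NEUTRON_URL: http://10.0.0.11:9696      # where Neutron is
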
13:17:45 <newt_> do you have an architecture pic?
13:18:04 <apuimedo> I do
13:18:19 <newt_> can you please share?
13:18:45 <apuimedo> http://paste.openstack.org/show/545625/
13:18:52 <apuimedo> these are the parameters the container takes
13:18:58 <apuimedo> and now I'll pass the diagram
13:19:02 <newt_> and the formula thing - we use it to cover both container and VM deployments and to define the metadata and tests
13:20:01 <apuimedo> https://gitlab.com/celebdor/design/blob/master/diagrams/fosdem/libnetwork_diagram.svg
13:20:13 <apuimedo> https://gitlab.com/celebdor/design/blob/master/diagrams/fosdem/libnetwork_rd_diagram.svg
13:20:28 <newt_> but it can be run just as a container from source - where the metadata layer is thinner
13:21:02 <apuimedo> newt_: what do you mean as running a container from source?
13:21:12 <apuimedo> Do you mean from a container registry?
13:23:03 <newt_> if the formula is defined - it serves as a deploy script for both the container and the VM service
13:23:23 <newt_> and has metadata definition - to which endpoints it connects
13:24:02 <apuimedo> ah, cool
13:24:28 <newt_> and there is a validation layer for it as well
13:24:43 <newt_> and you can make definitions for the various backends you have
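
    To make the metadata layer concrete: a hedged sketch of what a service
    metadata definition for such a formula could look like, modeled on the
    pattern other openstack-salt formulas use (metadata/service/<role>/<deployment>.yml).
    Every name and parameter here is an assumption, not the actual formula.

        applications:
        - kuryr
        parameters:
          kuryr:
            server:
              enabled: true
              identity:
                host: ${_param:keystone_service_host}   # identity endpoint
              network:
                host: ${_param:neutron_service_host}    # neutron endpoint
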
13:24:44 <apuimedo> newt_: so it would be creating an openstack/salt-formula-kuryr
13:24:52 <newt_> yes
13:24:55 <apuimedo> ok
13:25:08 <apuimedo> how long does it typically take to do that?
13:25:35 <newt_> where is your Dockerfile? :)
13:26:09 <apuimedo> https://github.com/openstack/kuryr-libnetwork/tree/master/contrib/docker/libnetwork
13:26:10 <newt_> usually not long [hours]
13:26:15 <apuimedo> but I need to update it
13:26:22 <apuimedo> we were splitting the repository
13:26:41 <apuimedo> openstack/kuryr got split into openstack/kuryr and openstack/kuryr-libnetwork
13:26:43 <newt_> yes this looks pretty straightforward
13:27:00 <newt_> is it just 1 service if I read the architecture right?
13:27:29 <apuimedo> yes
13:27:36 <newt_> brb
13:27:37 <apuimedo> it's a single service with different endpoints
13:27:48 <apuimedo> it answers both ipam and remote driver endpoints
13:27:58 <apuimedo> the kubernetes integration is a bit more involved
13:28:32 <apuimedo> but we're still polishing the prototype to put it in openstack/kuryr-kubernetes
13:32:15 <newt_> yes, the formula should be easy to implement as seen in the Dockerfile; then we can proceed to set up the deployment params in new stacks that will represent the various deployment scenarios, starting with the current one
13:32:39 <newt_> you can create the formula repo in your namespace on github and give me access
13:33:12 <apuimedo> newt_: alright
13:33:20 <newt_> I can setup the skeleton and get the basic services up and running
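
    The skeleton mentioned here would presumably follow the layout shared by
    the other salt-formula-* repositories; a rough sketch, with all file
    names being assumptions:

        salt-formula-kuryr/
            kuryr/
                init.sls        # includes the enabled roles
                server.sls      # installs and configures the kuryr service
                map.jinja       # per-OS defaults
            metadata/service/server/single.yml
            tests/
            README.rst
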
13:33:41 <apuimedo> I am flying to Prague on Thursday, so I'll probably have time Monday or Tuesday night next week
13:34:03 <newt_> #action apuimedo create the kuryr formula repo
13:34:10 <apuimedo> +1
13:34:12 <apuimedo> :-)
13:34:58 <newt_> what are the other requirements on openstack side?
13:35:33 <newt_> I'd sum it up in a blueprint which we can tune
13:35:36 <apuimedo> none
13:35:46 <newt_> just the neutron backend
13:35:47 <apuimedo> kuryr-kubernetes will need lbaas
13:36:03 <newt_> it supports midonet now, do I remember right?
13:36:05 <apuimedo> but kuryr-libnetwork has enough with just plain Neutron
13:36:24 <apuimedo> it supports Midonet, ovs, plumgrid, ovn and dragonflow
13:36:33 <apuimedo> kuryr-kubernetes only midonet for now
13:37:46 <newt_> kuryr-kube bridges Kubernetes and OpenStack deployments, right?
13:38:02 <newt_> and kuryr-libn does what?
13:38:13 <newt_> sorry for the ignorance :)
13:38:48 <apuimedo> right
13:38:57 <apuimedo> kuryr-libnetwork is useful for swarm deployments
13:39:05 <apuimedo> it gets them neutron networking
13:39:25 <apuimedo> (even without swarm, single docker engine setups also get neutron networking)
13:41:04 <newt_> yes, I understand better now :)
13:41:18 <apuimedo> ;-)
13:43:23 <newt_> and are kuryr-libnetwork and kuryr-kubernetes two different containers now, or one with different parametrisation?
13:44:56 <apuimedo> they will always be different containers
13:45:16 <newt_> and do they run together, or either/or?
13:45:18 <apuimedo> kuryr-kube currently exists as a MidoNet prototype at midonet/raven
13:46:44 <newt_> #link https://blueprints.launchpad.net/openstack-salt/+spec/salt-formula-kuryr
13:47:05 <newt_> here I'd outline the params and the setup needed for testing
13:47:34 <newt_> params = complete metadata - the identity and network endpoints for libnetwork
13:47:44 <apuimedo> good
13:47:53 <newt_> and if the service has any internal params that are worth tuning
13:49:09 <apuimedo> there's some configuration options like the scope of the networks
13:49:19 <apuimedo> I'll add it to the blueprint
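
    As an illustration of the pillar-level params under discussion: a hedged
    example of what a deployment might supply, including a tunable like the
    network scope apuimedo mentions above. Every key and value here is an
    assumption for the blueprint to refine.

        kuryr:
          server:
            enabled: true
            # hypothetical tunable mirroring the network scope option above
            capability_scope: global
            identity:
              user: admin
              password: secret
              host: 10.0.0.10
            network:
              host: 10.0.0.11
              port: 9696
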
13:49:58 <newt_> yes, let me know when the repo is ready and I will fill it up with base formula
13:52:55 <newt_> the kuryr item is accepted; does anyone have any other issue?
13:53:19 <apuimedo> thanks newt_
13:58:11 <newt_> #topic Open Discussion, Bug and Review triage (submit modules to triage here)
13:58:35 <newt_> well, anything else from anyone? :)
13:59:13 <newt_> If not, thank you apuimedo, and we'll see each other over kuryr :)
13:59:21 <apuimedo> very well
13:59:23 <apuimedo> thanks
13:59:40 <newt_> #endmeeting