13:00:07 <yanyanhu> #startmeeting senlin
13:00:08 <openstack> Meeting started Tue Nov 29 13:00:07 2016 UTC and is due to finish in 60 minutes.  The chair is yanyanhu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:12 <openstack> The meeting name has been set to 'senlin'
13:00:22 <yanyanhu> hi, guys
13:00:32 <XueFengLiu> hi yanyan
13:00:37 <yanyanhu> hi, XueFengLiu
13:00:44 <lvdongbing> hi, all
13:00:48 <yanyanhu> hello
13:00:52 <Qiming> hi
13:00:56 <yanyanhu> hi, Qiming
13:01:13 <yanyanhu> lets wait a while for other attendees
13:01:20 <yanyanhu> here is the agenda, https://wiki.openstack.org/wiki/Meetings/SenlinAgenda#Agenda_.282016-11-29_1300_UTC.29
13:01:27 <yanyanhu> please feel free to add items
13:02:26 <yanyanhu> ok, lets move on
13:02:39 <yanyanhu> #topic ocata workitem
13:02:45 <yanyanhu> https://etherpad.openstack.org/p/senlin-ocata-workitems
13:02:51 <yanyanhu> here is the list
13:03:03 <yanyanhu> - Improve tempest API test
13:03:21 <yanyanhu> I think we can start working on it now that the versioned request support is almost done
13:04:39 <yanyanhu> the basic idea is adding verification of exception messages
13:04:52 <Qiming> we have removed the performance benchmarking work completely?
13:04:56 <yanyanhu> to ensure the request handling logic is correct
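[editor's note] A minimal sketch of what verifying an exception message in such a negative test could look like. The validation function and message text below are hypothetical stand-ins, not senlin's actual versioned request classes:

```python
import unittest


def parse_cluster_create(body):
    """Stand-in for senlin's versioned request validation (hypothetical)."""
    if 'name' not in body:
        raise ValueError("'name' is a required property")
    return body


class ClusterCreateNegativeTest(unittest.TestCase):
    def test_missing_name_is_rejected(self):
        with self.assertRaises(ValueError) as cm:
            parse_cluster_create({'min_size': 0})
        # the proposed improvement: check the message, not just the type
        self.assertIn('required property', str(cm.exception))
```

The point is the `assertIn` on `cm.exception`: asserting on the message (not only the exception class) confirms the request-handling logic took the expected validation path.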
13:05:11 <yanyanhu> Qiming, I plan to move it back to todo list
13:05:26 <yanyanhu> for I don't have time to work on it recently...
13:05:36 <yanyanhu> if anyone want to pick it up, that will be great
13:05:50 <Qiming> I see
13:06:11 <yanyanhu> the foundation is there, it just needs effort to support more profiles and scenarios
13:06:59 <yanyanhu> ok, next one
13:07:04 <yanyanhu> HA support
13:07:17 <yanyanhu> hi, lixinhui
13:07:34 <yanyanhu> we just started the topic about HA in seconds :)
13:07:55 <lixinhui> okay
13:07:57 <lixinhui> :)
13:08:30 <yanyanhu> any new progress?
13:08:33 <lixinhui> in the past week, I submitted the ocativia BP and patch
13:08:43 <Qiming> noticed that and your patch
13:08:46 <Qiming> good work
13:08:52 <yanyanhu> great
13:09:28 <lixinhui> Michael has some question about the background
13:09:44 <lixinhui> and I gave some answer
13:10:00 <yanyanhu> about using lbaas hm for HA purpose?
13:10:35 <lixinhui> if my answers cannot resolve his questions, we may need to discuss it at the ocativa weekly meeting
13:10:55 <lixinhui> yes, I mentioned the use case
13:11:05 <Qiming> emm
13:11:07 <lixinhui> https://review.openstack.org/402296
13:11:21 <Qiming> hope it won't become next vpnaas
13:11:25 <Qiming> which is dying
13:11:34 <lixinhui> yes, it is dying
13:12:04 <yanyanhu> people do have strong requirement for native lb support
13:12:05 <lixinhui> anyway, it's not a big change. hope he can accept it
13:12:19 <yanyanhu> although imho, lbaas is not ready for production requirements
13:12:33 <Qiming> agreed
13:13:22 <yanyanhu> so maybe we can give them a simple introduction about our use case
13:13:35 <Qiming> they may or may not care
13:13:41 <yanyanhu> Qiming, yes
13:13:50 <yanyanhu> so we just try our best
13:13:56 <Qiming> it is not about our use case
13:14:00 <Qiming> it is their BUG
13:14:01 <lixinhui> okay
13:14:06 <yanyanhu> actually this is a contribution for lbaas I feel
13:14:10 <Qiming> a serious bug, they need to take care of it
13:14:12 <yanyanhu> no harm for them :)
13:14:39 <yanyanhu> Qiming, you mean the incorrect health status of lb members?
13:14:43 <Qiming> if lbaas is maintaining health status for nodes, they should do it right
13:15:00 <yanyanhu> I mean they omit events for status changes
13:15:20 <Qiming> we tried to help fix it, and they rejected the patch, and they "WON'T" fix the bug
13:15:43 <yanyanhu> I feel even more strongly that our work on event/notification is important :)
13:15:45 <Qiming> now xinhui is trying another workaround, let them send out notifications when node health change
13:15:58 <lixinhui> I think it is not reasonable to submit patch into lbaas any more
13:16:00 <yanyanhu> Qiming, yes, have the same feeling for that patch
13:16:09 <yanyanhu> anyway, it's their decision.
13:16:24 <lixinhui> so we are trying to use ocativia to emit events
13:16:38 <lixinhui> then any listener can handle in their own way
13:16:51 <Qiming> maybe need a ptl to ptl talk with them
13:16:52 <yanyanhu> lixinhui, yes, that's pretty reasonable
13:16:54 <lixinhui> no need to care about neutron-lbaas anymore
13:17:19 <lixinhui> yes, agree Qiming :)
13:17:29 <yanyanhu> so neutron-lbaas and ocaticia are two individual projects?
13:17:36 <lixinhui> yes
13:17:41 <yanyanhu> ocativia
13:17:44 <yanyanhu> I see
13:17:47 <lixinhui> neutron-lbaas is dying
13:17:57 <yanyanhu> I thought they are the same project
13:18:08 <Qiming> octavia
13:18:20 <lixinhui> haha
13:18:31 <yanyanhu> :)
13:18:38 <XueFengLiu> :)
13:19:07 <haiwei_> there is lbaasv2?
13:19:08 <lixinhui> I will keep tracking the patch
13:19:18 <lixinhui> that is my part
13:19:21 <yanyanhu> so maybe we need to have further discussion on this issue
13:19:22 <lixinhui> of update
13:19:37 <lixinhui> Yes, Haiwei_
13:19:38 <yanyanhu> to see how to move on
13:19:45 <lixinhui> lbaasv1 has been dropped
13:20:01 <yanyanhu> yes, lbaasv2 is now the default enabled api
13:20:13 <Qiming> neutron ptl is armax
13:20:36 <lixinhui> Michael Johnson is the octavia PTL
13:20:37 <yanyanhu> octavia is part of neutron?
13:20:42 <Qiming> yes
13:20:55 <yanyanhu> or an individual project?
13:20:56 <lixinhui> yes
13:20:57 <Qiming> octavia has its own core team: https://review.openstack.org/#/admin/groups/370,members
13:21:12 <lixinhui> they have the same weekly meeting with neutron
13:21:17 <haiwei_> lbaas is dying, lbaasv2 is the substitute?
13:21:25 <Qiming> http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n2205
13:21:30 <Qiming> yes, I think so, haiwei_
13:22:05 <yanyanhu> wait a minute, lbaasv1 is dying or lbaas project is dying?
13:22:28 <elynn> Oh, I thought lbv2 is dying too
13:22:35 <yanyanhu> I see
13:22:36 <haiwei_> thank goodness, they are not both dying
13:22:46 <yanyanhu> ...
13:22:47 <lixinhui> lbaasv1 has been dropped
13:22:52 <yanyanhu> yes, it is
13:23:14 <haiwei_> v1 is dead, v2 is dying?
13:23:31 <yanyanhu> I don't know, I'm listening :)
13:23:56 <lixinhui> I did not hear anything about v2 is dying
13:24:27 <yanyanhu> sigh, if so, lbaas project is ok
13:24:30 <yanyanhu> just its v1 api is deprecated which is expected
13:24:31 <haiwei_> it seems advanced features like lbaas and vpn are done by vendors themselves
13:24:36 <lixinhui> ocativa will replace neutron-lbaas
13:24:39 <lixinhui> project
13:24:53 <lixinhui> octavia
13:25:15 <lixinhui> they are on the path of transforming
13:25:24 <yanyanhu> replace? you mean an enhancement or a complete replacement?
13:25:38 <yanyanhu> so no more haproxy based lb
13:25:42 <Qiming> deprecation
13:25:44 <yanyanhu> only vm based one
13:26:01 <lixinhui> that is another reason why we should not submit patches to the dying project
13:26:04 <yanyanhu> and neutron-lbaas will be renamed to octavia?
13:26:38 <lixinhui> by default, octavia provides an haproxy lb
13:26:54 <Qiming> haproxy installed into a VM
13:26:54 <lixinhui> and it provides drivers for different vendors to plug in
13:27:09 <yanyanhu> yes, this is what lbaas is doing now
13:27:19 <lixinhui> so it is not necessary to also have neutron-lbaas...
13:27:58 <yanyanhu> so octavia will replace lbaas in the big tent?
13:28:22 <lixinhui> no one ever mentions the big tent, obviously
13:28:26 <yanyanhu> anyway, proposing patches to octavia is reasonable
13:28:34 <elynn> http://lists.openstack.org/pipermail/openstack-dev/2016-November/106911.html
13:28:51 <XueFengLiu> Octavia is currently the reference backend for Neutron LBaaS. In the near future, Octavia is likely to become the standard OpenStack LBaaS API endpoint.
13:29:00 <XueFengLiu> https://github.com/openstack/octavia
13:29:23 <lixinhui> Yes
13:29:34 <yanyanhu> I see.
13:30:00 <Qiming> should we move on? we have been talking about this extraterrestrial project for 30 minutes
13:30:02 <elynn> So it will be easy for customers who use the lbv1 API to transfer to octavia, right?
13:30:15 <yanyanhu> so we just focus on communicating with the octavia team
13:30:15 <Qiming> 20 minutes, sorry
13:30:36 <yanyanhu> sorry, lets discuss this issue offline
13:30:39 <yanyanhu> lets move on now
13:30:49 <Qiming> ping the cores, send emails to mailinglist, we will find out
13:31:00 <yanyanhu> no document update I think
13:31:13 <Qiming> no
13:31:14 <yanyanhu> versioned request support
13:31:23 <yanyanhu> almost done
13:31:32 <yanyanhu> I will finish receiver part this week
13:31:45 <yanyanhu> I think XueFengLiu and lvdongbing's work has been done
13:32:05 <yanyanhu> then we can announce that the first step is finished
13:32:08 <lvdongbing> yes
13:32:19 <XueFengLiu> Yes, only action create/delete remain to be done.
13:32:42 <yanyanhu> XueFengLiu, thanks
13:32:46 <Qiming> remove line 20?
13:32:57 <yanyanhu> Qiming, yes
13:33:09 <yanyanhu> ok, next one, container profile
13:33:12 <yanyanhu> hi, haiwei_
13:33:29 * Qiming really enjoys pressing the DEL key
13:33:42 <yanyanhu> :)
13:34:10 <haiwei_> hi, yanyanhu, I think we need to discuss the image management of containers
13:34:33 <yanyanhu> haiwei_, yes, that is an important issue
13:34:43 <yanyanhu> currently, we can get image from glance I think?
13:34:52 <yanyanhu> although it is not layered
13:34:59 <haiwei_> I think the next step may be the image jobs, what do you think Qiming
13:35:18 <Qiming> what do you mean by image jobs?
13:35:24 <haiwei_> I am thinking about using Zun, yanyanhu
13:35:48 <haiwei_> how to give the container an image
13:35:55 <Qiming> glance?
13:36:15 <Qiming> when you do glance image-create, you have this:
13:36:15 <yanyanhu> yes, maybe this is the first choice, at least for now?
13:36:16 <Qiming> --container-format <CONTAINER_FORMAT>
13:36:16 <Qiming>     Format of the container. Valid values: None, ami, ari, aki, bare, ovf, ova, docker
13:36:18 <haiwei_> currently we only support downloading images from the docker hub
13:36:46 <haiwei_> not the vm's image
13:37:06 <Qiming> 'container-format docker'
13:37:10 <Qiming> what does that mean?
13:37:42 <haiwei_> this is glance image-create?
13:37:47 <Qiming> yes
13:37:52 <XueFengLiu> It is
13:38:09 <haiwei_> never saw this before
13:38:29 <yanyanhu> may need more investigation here before making decision :)
13:38:32 <haiwei_> we usually use bare for container-format
13:38:32 <Qiming> okay
13:38:45 <haiwei_> ok, will check it
13:38:51 <yanyanhu> haiwei_, great, thanks a lot
13:38:51 <XueFengLiu> this means we can upload a docker image to glance
13:38:56 <Qiming> current docker profile relies on docker to parse and download the image
13:39:03 <XueFengLiu> use glance image-create, I think
13:39:06 <yanyanhu> Qiming, yes, it is now
13:39:16 <Qiming> em ... primary use case was for nova-docker I think
13:39:23 <lvdongbing> that's the way nova-docker used to work
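[editor's note] For context, the nova-docker era workflow for getting a docker image into glance looked roughly like the following. This is a deployment sketch from memory, not verified against current glance, and the busybox image name is illustrative:

```shell
# save a local docker image and stream it into glance with the
# docker container format (nova-docker era workflow, illustrative)
docker pull busybox
docker save busybox | glance image-create \
    --container-format docker \
    --disk-format raw \
    --name busybox
```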
13:39:39 <yanyanhu> so maybe an important issue is figuring out how to consume container image stored in glance
13:39:40 <Qiming> not sure how useful it is if we let docker(d) check the image
13:40:02 <yanyanhu> zun has the same problem I think
13:40:31 <Qiming> the assumption is that the controller node has access to either the public hub or a local registry
13:40:55 <Qiming> it sounds more of the configuration/deployment topic than a senlin programming job ?
13:41:20 <yanyanhu> Qiming, yes it is
13:41:52 <haiwei_> if the image is not uploaded to glance, will glance download it from the public hub and then serve it to other projects?
13:42:04 <yanyanhu> maybe this can be a property of container profile?
13:42:31 <yanyanhu> haiwei_, not sure about how it works
13:42:34 <Qiming> if wanted, user (of a public cloud) can create a vm for storing his private docker images, then start a container cluster referencing images stored there?
13:42:56 <Qiming> docker determines the registry using tag, ....
13:42:59 <yanyanhu> Qiming, you mean local registry for each container cluster?
13:43:06 <haiwei_> why use a vm to store the images?
13:43:12 <Qiming> just want to say that is possible
13:43:17 <Qiming> for public cloud
13:43:18 <haiwei_> ok
13:43:44 <yanyanhu> maybe we can collect different options and then have a discussion to see which way is better
13:43:48 <Qiming> for private cloud, you can build it somewhere so long the controller node can access its 5000 port
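[editor's note] A sketch of the private-registry option discussed above. The hostname and image names are assumptions, and this is deployment configuration rather than senlin code:

```shell
# run a registry reachable on port 5000, then tag and push an image
# so container profiles can reference it from there (illustrative)
docker run -d -p 5000:5000 --name registry registry:2
docker tag busybox controller:5000/busybox
docker push controller:5000/busybox
# a container profile would then reference controller:5000/busybox
```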
13:44:24 <yanyanhu> haiwei_, maybe we can collect all possible options in an etherpad
13:44:38 <haiwei_> ok, yanyanhu, will create one
13:44:41 <yanyanhu> and we can leave comments there
13:44:49 <yanyanhu> haiwei_, great, thanks :)
13:44:59 <yanyanhu> ok, only 15 minutes left
13:45:04 <yanyanhu> lets move to next topic
13:45:09 <yanyanhu> event/notification
13:45:15 <yanyanhu> hi, Qiming, your turn
13:46:19 <Qiming> okay, almost done
13:46:25 <yanyanhu> great
13:46:34 <Qiming> common interface abstracted
13:46:39 <yanyanhu> https://blueprints.launchpad.net/senlin/+spec/generic-event
13:46:48 <Qiming> no reviews yet
13:46:57 <Qiming> may have to approve it myself
13:47:13 <Qiming> next thing would be about the configuration part
13:47:29 <Qiming> making 'database' and 'message' two plugins
13:47:40 <Qiming> and load them via stevedore
13:48:01 <Qiming> add senlin.conf section/options to control when/how to log ...
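[editor's note] Such stevedore plugin loading is typically wired up through entry points in setup.cfg, roughly as below. The namespace and class paths are illustrative guesses, not the actual patch:

```ini
[entry_points]
senlin.event =
    database = senlin.engine.events.database:DBEvent
    message = senlin.engine.events.message:MessageEvent
```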
13:48:05 <yanyanhu> sounds great
13:48:39 <yanyanhu> about the message way, so the messages will be published to a public queue?
13:48:58 <Qiming> yes
13:49:01 <yanyanhu> which is accessible for all openstack services
13:49:06 <yanyanhu> I see
13:49:13 <Qiming> it is a control plane thing
13:49:20 <Qiming> irrelevant to zaqar
13:49:25 <yanyanhu> this is also what we expect octavia to do :)
13:49:31 <yanyanhu> Qiming, I see
13:49:45 <Qiming> zaqar is USER-FACING message queue
13:50:04 <yanyanhu> yes
13:50:23 <yanyanhu> sorry I didn't get time to review the code. Will check it tomorrow
13:50:36 <Qiming> if you have checked the code:
13:50:44 <yanyanhu> Qiming, please feel free to self-approve to avoid a dependency issue
13:50:45 <Qiming>     def _emit(self, context, event_type, publisher_id, payload):
13:50:45 <Qiming>         notifier = messaging.get_notifier(publisher_id)
13:50:45 <Qiming>         notify = getattr(notifier, self.priority)
13:50:45 <Qiming>         notify(context, event_type, payload)
13:50:48 <Qiming> that is all
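[editor's note] The dispatch pattern in that snippet can be illustrated with a self-contained sketch. `FakeNotifier` is a stand-in for what `messaging.get_notifier()` returns in oslo.messaging; the class names are illustrative, not senlin's actual code:

```python
class FakeNotifier:
    """Stand-in for the object returned by messaging.get_notifier()
    (hypothetical; real notifiers come from oslo.messaging)."""

    def __init__(self, publisher_id):
        self.publisher_id = publisher_id
        self.sent = []

    def info(self, context, event_type, payload):
        self.sent.append(('info', event_type, payload))

    def error(self, context, event_type, payload):
        self.sent.append(('error', event_type, payload))


class MessageEvent:
    """Mirrors the _emit() pattern above: resolve the notification
    method from the configured priority via getattr, then call it."""

    def __init__(self, priority='info', notifier_factory=FakeNotifier):
        self.priority = priority
        self.notifier_factory = notifier_factory

    def emit(self, context, event_type, publisher_id, payload):
        notifier = self.notifier_factory(publisher_id)
        # look up .info / .error etc. dynamically by priority name
        notify = getattr(notifier, self.priority)
        notify(context, event_type, payload)
        return notifier
```

The `getattr` lookup is what lets one code path serve every notification priority without a chain of if/else branches.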
13:51:02 <yanyanhu> looks simple
13:51:27 <yanyanhu> get_notifier is provided by oslo?
13:51:33 <Qiming> yes
13:51:35 <yanyanhu> I mean the backend logic
13:51:37 <yanyanhu> I see
13:51:59 <Qiming> we have a thin wrapper over it
13:52:04 <Qiming> also pretty simple
13:52:19 <yanyanhu> oh, is there any limit on publishing messages to this public queue in the control plane?
13:52:30 <Qiming> don't think so
13:52:47 <Qiming> it is the same message service for rpc
13:53:01 <yanyanhu> I mean if a service publishes too many messages, could it break the queue?
13:53:30 <Qiming> no experience
13:53:48 <yanyanhu> me neither...
13:54:00 <Qiming> if you have ever checked events collected by ceilometer, you will realize that we are very cautious on this
13:54:16 <yanyanhu> yep
13:54:24 <Qiming> if you compare this to the number of RPC calls received by keystone
13:54:34 <yanyanhu> I know ceilometer gets lots of message from other services
13:54:34 <Qiming> by neutron
13:54:55 <Qiming> things we are sending are trivial
13:55:21 <yanyanhu> yes, compared with others
13:55:55 <Qiming> also compare to what nova is sending out: http://git.openstack.org/cgit/openstack/nova/tree/nova/notifications/objects/instance.py#n246
13:56:14 <yanyanhu> @@
13:56:42 <yanyanhu> a message for each status transition of each instance
13:57:11 <Qiming> so I'm not that worried about it
13:57:14 <yanyanhu> ok, seems we don't need to worry about this issue
13:57:16 <yanyanhu> yep
13:57:28 <Qiming> in addition, we are making the behavior configurable
13:58:00 <yanyanhu> I see
13:58:34 <yanyanhu> ok, these are all workitems on the list
13:58:38 <yanyanhu> the time is almost up
13:58:48 <yanyanhu> any more update?
13:58:57 <yanyanhu> if not, will end the meeting.
13:59:03 <yanyanhu> hi, lixinhui, still there?
13:59:24 <yanyanhu> if it's ok, want to talk with you about the patch for octavia
13:59:47 <yanyanhu> we can discuss how to work on it together
13:59:53 <yanyanhu> to push it forward
14:00:05 <yanyanhu> ok, time is over. Thank you guys for joining
14:00:09 <yanyanhu> have a good night
14:00:12 <yanyanhu> #endmeeting