13:11:55 <joehuang> #startmeeting tricircle
13:11:56 <openstack> Meeting started Wed Jul 15 13:11:55 2015 UTC and is due to finish in 60 minutes.  The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:11:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:12:00 <openstack> The meeting name has been set to 'tricircle'
13:12:33 <joehuang> #topic rollcall
13:12:43 <joehuang> #info joehuang
13:12:48 <gampel> #info gampel
13:12:49 <zhiyuan> #info zhiyuan
13:13:36 <joehuang> #topic design discussion
13:13:59 <irenab> #info irenab
13:14:07 <joehuang> #info the cascade service should be able to run with multiple instances
13:14:25 <gampel> yes i agree
13:14:40 <gampel> One option that we've already discussed to avoid contention is that each service is responsible for different bottom sites.
13:14:40 <joehuang> therefore, I think it should work like Nova-scheduler
13:14:57 <joehuang> this is one option
13:15:22 <joehuang> another option is that the cascade service works like nova-scheduler
13:15:37 <joehuang> multiple Nova-scheduler will work in parallel
13:15:56 <joehuang> but only one will be selected for a boot vm request
13:16:10 <gampel> but then you will have only one cascade service
13:16:25 <irenab> joehuang: this should cover both load balancing and high availability for the cascade service, right?
13:16:34 <gampel> responsible  for the scheduling
13:16:50 <joehuang> No, multiple nova-scheduler can work together
13:17:13 <joehuang> but I did not look at the code how it works
13:17:21 <gampel> But it is not only nova, let's say you add a port to a router
13:18:09 <gampel> If I understand  correctly you mean to add it in the Nova and Neutron core plugin
13:18:45 <joehuang> the cascade service sends requests to multiple bottom OpenStacks if needed
13:18:52 <gampel> I am not sure i understand where you want to do the scheduling, before the MQ or after
13:19:29 <joehuang> not schedule, but like one queue, multiple cascade services consume the same queue
13:19:48 <joehuang> whenever one cascade service picks up the message from the queue
13:20:03 <joehuang> the message will be removed from the queue
13:20:09 <zhiyuan> i think joe's idea is that each cascade service is aware of all the bottom openstack
13:20:19 <joehuang> then only one cascade service will process the message
13:20:49 <joehuang> to zhiyuan, currently in the design doc, we are thinking in this way
13:20:50 <zhiyuan> every cascade service can service the request independently, am I correct?
13:21:07 <joehuang> all bottom information could be seen from the database
13:21:18 <joehuang> yes
13:22:33 <joehuang> zhiyuan, you mentioned that if the message is sent without using fanout, then it works this way
13:23:35 <joehuang> to irenab: it's not load balancing, it's about how we use the message bus
13:24:03 <gampel> So you want to use a queue; in the POC were there any sequences of API requests that were related?
13:24:07 <zhiyuan> yes, services consume the same queue with the same topic if fanout is not used
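(The non-fanout semantics described here — several cascade service workers sharing one queue, each message delivered to exactly one of them — can be sketched with a stdlib simulation. This is an illustration only, not tricircle code; oslo.messaging gives the same behavior when multiple RPC servers listen on the same topic without fanout.)

```python
import queue
import threading

# Minimal sketch: three "cascade service" workers consume one shared
# queue; each message is removed by, and processed by, exactly one worker.
task_queue = queue.Queue()
processed = []            # (worker_id, message) pairs
lock = threading.Lock()

def cascade_worker(worker_id):
    """Consume messages until a None shutdown marker arrives."""
    while True:
        msg = task_queue.get()   # picking up removes it from the queue
        if msg is None:
            task_queue.task_done()
            return
        with lock:
            processed.append((worker_id, msg))
        task_queue.task_done()

workers = [threading.Thread(target=cascade_worker, args=(i,))
           for i in range(3)]
for w in workers:
    w.start()

for i in range(10):
    task_queue.put("boot-vm-%d" % i)
for _ in workers:
    task_queue.put(None)         # one shutdown marker per worker
for w in workers:
    w.join()
```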
13:24:33 <irenab> joehuang: thank you for clarification
13:25:29 <joehuang> So I just give an example, that the way for nova-scheduler using the message bus is what we can leverage
13:25:54 <gampel> So all the services will be authenticated to all the bottom OpenStacks
13:26:29 <joehuang> using the token carried in the top API request
13:26:48 <gampel> You mean the token will be in the DB ?
13:27:04 <gampel> or just proxy
13:27:28 <joehuang> same token to the bottom OpenStack instance, it's carried in the context, not in DB
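(A minimal sketch of "the token is carried in the context, not in the DB": the auth token from the top-level API request rides in the request context and is reused on the bottom OpenStack call. Class and token values are illustrative assumptions, mirroring oslo.context usage rather than actual tricircle code.)

```python
class RequestContext(object):
    """Carries per-request auth data, similar to oslo.context."""
    def __init__(self, auth_token, tenant_id):
        self.auth_token = auth_token
        self.tenant_id = tenant_id

def bottom_request_headers(ctx):
    """Build headers for a bottom OpenStack API call from the top context."""
    return {
        "X-Auth-Token": ctx.auth_token,   # same token, passed through
        "Content-Type": "application/json",
    }

ctx = RequestContext(auth_token="gAAAAAB-example", tenant_id="demo")
headers = bottom_request_headers(ctx)
```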
13:28:01 <gampel> Did it work in the POC using the same token on all the bottom OpenStacks?
13:28:12 <joehuang> not only proxy, the cascade service will do long-running tasks
13:28:45 <gampel> which long-running task?
13:29:03 <zhiyuan> POC uses one single Keystone, so I think using the same token is fine
13:29:40 <joehuang> it works in the PoC. For background periodic tasks, a configured user is used to access the bottom OpenStack
13:30:14 <joehuang> Federated Keystone was not tried in the PoC
13:30:26 <gampel> which background periodic task ?
13:30:50 <joehuang> periodically poll the VM/volume/port status
13:31:11 <joehuang> so that the status in the top layer will be refreshed
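(One iteration of the background periodic task being described — a configured service user polls each bottom OpenStack and refreshes the top-layer status — could look roughly like the following. The client class is a stub and all names are hypothetical; real code would use novaclient/cinderclient with the configured credentials.)

```python
class StubBottomClient(object):
    """Stand-in for a per-site client authenticated as a configured user."""
    def __init__(self, statuses):
        self._statuses = statuses

    def get_vm_status(self, vm_id):
        return self._statuses[vm_id]

# Top-layer view that the periodic task keeps fresh.
top_status = {"vm-1": "BUILD", "vm-2": "BUILD"}

def poll_bottom_sites(bottom_clients, vm_to_site):
    """One periodic-task iteration: refresh top-layer status per VM."""
    for vm_id, site in vm_to_site.items():
        top_status[vm_id] = bottom_clients[site].get_vm_status(vm_id)

clients = {
    "site-a": StubBottomClient({"vm-1": "ACTIVE"}),
    "site-b": StubBottomClient({"vm-2": "ERROR"}),
}
poll_bottom_sites(clients, {"vm-1": "site-a", "vm-2": "site-b"})
```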
13:31:36 <joehuang> but if we have interception at the API layer, we can change this way
13:31:37 <gampel> At the moment as a start point we said that we are going to pass run time info to the bottom layer
13:32:56 <joehuang> the status could be queried only when an API request comes in, returning the stale status first and refreshing the status immediately for the next query
13:33:40 <joehuang> to gampel, what do you mean by "pass run time info to the bottom layer"?
13:34:34 <joehuang> through the delayed refresh for the next query, we can reduce the need for a configured user to sync the status from the bottom OpenStack
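(The "return stale status first, refresh for next time" idea can be sketched as below. The refresh is synchronous here for clarity; a real service would do it asynchronously after answering. All names are illustrative.)

```python
class StatusCache(object):
    """Answer from cache immediately, then refresh for the next query."""
    def __init__(self, fetch_from_bottom):
        self._fetch = fetch_from_bottom
        self._cache = {}

    def query(self, vm_id):
        stale = self._cache.get(vm_id, "UNKNOWN")  # answer immediately
        self._cache[vm_id] = self._fetch(vm_id)    # refresh for next time
        return stale

cache = StatusCache(fetch_from_bottom=lambda vm_id: "ACTIVE")
first = cache.query("vm-1")    # stale: nothing cached yet
second = cache.query("vm-1")   # sees the refreshed value
```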
13:34:54 <gampel> I mean if you get a port status from the API you pass it to the cascading service that will query the bottom OpenStack
13:35:34 <gampel> But if it is a configuration request query, get it from the DB
13:35:59 <joehuang> Ok, but if there are a lot of ports, the latency for a two-hop query will be longer
13:37:16 <gampel> You assume the query requests will arrive together, is that what happened in the POC?
13:37:46 <joehuang> and if multiple bottom OpenStacks are involved, it's complex to query them all at the same time
13:38:30 <joehuang> a query can be issued with different filter, and the ports meet the filter will be returned
13:38:55 <joehuang> the filter can be varied a lot
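(The filtered, multi-site list query being discussed — fan the filter out to every involved bottom OpenStack in parallel and merge the matching ports — can be sketched as follows. Bottom clients are stubbed and all names are hypothetical, not tricircle code.)

```python
from concurrent.futures import ThreadPoolExecutor

def make_bottom_lister(ports):
    """Stub for one bottom OpenStack's filtered port-list API."""
    def list_ports(filters):
        return [p for p in ports
                if all(p.get(k) == v for k, v in filters.items())]
    return list_ports

bottoms = {
    "site-a": make_bottom_lister([{"id": "p1", "status": "ACTIVE"},
                                  {"id": "p2", "status": "DOWN"}]),
    "site-b": make_bottom_lister([{"id": "p3", "status": "ACTIVE"}]),
}

def list_ports_all_sites(filters):
    """Query every bottom site in parallel, then merge the results."""
    with ThreadPoolExecutor(max_workers=len(bottoms)) as pool:
        results = pool.map(lambda lister: lister(filters), bottoms.values())
    return [port for site_ports in results for port in site_ports]

active = list_ports_all_sites({"status": "ACTIVE"})
```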
13:39:30 <gampel> Ok I suggest to look at it now as an optimization that we can add later after we have the basic stuff working
13:40:42 <joehuang> ok
13:41:09 <gampel> I want to give an update regarding the status of the experimental work
13:41:50 <joehuang> I think the current design is a very good beginning to start
13:41:57 <gampel> we have the infrastructure for the foundation service
13:41:58 <joehuang> so how about nova
13:42:14 <gampel> the neutron core plugin on top passes the request over the message queue to the service endpoint
13:42:47 <joehuang> I mentioned in the mail that we need to inherit from the API class, just like cells does
13:42:59 <gampel> nova is almost done and will be checked in on Sunday
13:43:16 <gampel> yes we will send it soon
13:43:23 <joehuang> ooo, that's great
13:43:31 <zhiyuan> cool
13:43:48 <joehuang> I think once nova and neutron work, we can move it to the master branch
13:43:58 <gampel> if you like we have the infrastructure now to work together on it
13:44:16 <joehuang> so that we can also add code to it
13:44:35 <gampel> next step is to create the SQLAlchemy DB module for the cascading service
13:45:18 <gampel> devstack is ready and working as well for neutron request
13:45:20 <joehuang> zhiyuan, how about studying how to make multiple cascade services work in parallel: multi-worker, multi-instance
13:45:41 <joehuang> I think it's necessary
13:46:25 <gampel> we will update the patches and merge them all into the experimental branch on Sunday
13:46:26 <joehuang> gampel, could you upload code to git for review
13:46:39 <gampel> so you could join the work next week
13:47:02 <joehuang> that's super cool, then we can work together on the new stuff
13:47:02 <zhiyuan> no problem, i can try based on saggi's patch
13:47:43 <gampel> there are now 3 patches by saggi covering neutron, devstack, and the cascading service including the endpoint
13:48:10 <gampel> probably Sunday we will have Nova in as well
13:48:23 <zhiyuan> and I will set up the new CI job after all the new code is submitted
13:48:44 <joehuang> perfect, zhiyuan
13:48:54 <gampel> can we add tests to the CI to do pep8 at least?
13:49:52 <joehuang> let's try it
13:49:55 <zhiyuan> sure
13:49:59 <gampel> thx
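(In OpenStack projects a pep8 CI job is typically wired through tox; a minimal `tox.ini` fragment along the usual flake8-based lines — an assumption, not taken from the tricircle repo — would be:)

```ini
[testenv:pep8]
# Style-only environment; the CI job runs "tox -e pep8"
deps = flake8
commands = flake8 tricircle
```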
13:50:14 <joehuang> how about the topic in Tokyo summit
13:51:03 <gampel> i submitted today with irena, me, and joe as speakers
13:51:22 <joehuang> CJK project will have a poc of cascading/tricircle,
13:52:13 <gampel> can you elaborate about what is CJK ?
13:52:45 <joehuang> CJK submitted with one topic: inter-cloud in China, Japan, Korea by Takashi and me and guys from Korea
13:53:00 <joehuang> CJK is China, Japan, Korea
13:53:22 <joehuang> nice. info sync-ed
13:53:37 <gampel> very nice please share the POC result when you have it
13:54:20 <joehuang> and for OPNFV day, the multisite project is a formal project in OPNFV, so we'll apply for one session in OPNFV day at the Tokyo summit
13:54:43 <joehuang> Sure, we will start soon
13:55:06 <joehuang> We have made very good progress
13:55:20 <joehuang> let's have a meeting next week
13:55:26 <gampel> Ok thx
13:55:38 <joehuang> thank you all
13:56:05 <joehuang> see you next time
13:56:08 <irenab> thank you
13:56:10 <joehuang> bye
13:56:17 <gampel> thank you bye
13:56:28 <joehuang> #endmeeting