13:11:55 #startmeeting tricircle
13:11:56 Meeting started Wed Jul 15 13:11:55 2015 UTC and is due to finish in 60 minutes. The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:11:58 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:12:00 The meeting name has been set to 'tricircle'
13:12:33 #topic rollcall
13:12:43 #info joehuang
13:12:48 #info gampel
13:12:49 #info zhiyuan
13:13:36 #topic design discussion
13:13:59 #info irenab
13:14:07 #info the cascade service should be able to run with multiple instances
13:14:25 yes, I agree
13:14:40 One option that we've already discussed to avoid contention is that each service is responsible for different bottom sites.
13:14:40 therefore, I think it should work like nova-scheduler
13:14:57 this is one option
13:15:22 another option is that the cascade service works like nova-scheduler
13:15:37 multiple nova-scheduler instances will work in parallel
13:15:56 but only one will be selected for a boot-VM request
13:16:10 but then you will have only one cascade service
13:16:25 joehuang: this should cover both load balancing and high availability for the cascade service, right?
13:16:34 responsible for the scheduling
13:16:50 No, multiple nova-schedulers can work together
13:17:13 but I did not look at the code to see how it works
13:17:21 But it is not only nova; let's say you add a port to a router
13:18:09 If I understand correctly, you mean to add it in the Nova and Neutron core plugins
13:18:45 the cascade service will send requests to multiple bottom OpenStacks if needed
13:18:52 I am not sure I understand where you want to do the scheduling, before the MQ or after
13:19:29 not scheduling, but like one queue: multiple cascade services consume the same queue
13:19:48 whenever one cascade service picks up a message from the queue
13:20:03 the message will be removed from the queue
13:20:09 i think joe's idea is that each cascade service is aware of all the bottom OpenStacks
13:20:19 then only one cascade service will process the message
13:20:49 to zhiyuan: currently in the design doc, we are thinking in this way
13:20:50 every cascade service can serve a request independently, am I correct?
13:21:07 all bottom information can be seen from the database
13:21:18 yes
13:22:33 zhiyuan, you mentioned that if the message is sent without fanout, then it works this way
13:23:35 to irenab: it's not load balancing, it's how we use the message bus
13:24:03 So you want to use a queue. In the POC, were there any sequences of API requests that were related?
13:24:07 yes, services consume the same queue with the same topic if fanout is not used
13:24:33 joehuang: thank you for the clarification
13:25:29 I just gave an example: the way nova-scheduler uses the message bus is something we can leverage
13:25:54 So all the services will be authenticated to all the bottom OpenStacks
13:26:29 using the token carried in the top API request
13:26:48 You mean the token will be in the DB?
13:27:04 or just proxy
13:27:28 same token to the bottom OpenStack instance; it's carried in the context, not in the DB
13:28:01 Did it work in the POC using the same token on all the bottom OpenStacks?
13:28:12 not only proxy, the cascade service will do long-running tasks
13:28:45 which long-running tasks?
13:29:03 the POC uses one single Keystone, so I think using the same token is fine
13:29:40 it works in the PoC. For background periodic tasks, a configured user is used to access the bottom OpenStack
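[Editor's note: a minimal sketch of the shared-topic consumption pattern discussed above, assuming oslo.messaging's RPC server API. The topic name 'cascade-service', the server name 'worker-1', and the CascadeEndpoint class are illustrative assumptions, not taken from the project's code.]

    # Sketch only: several cascade service instances consume one RPC topic
    # without fanout, so the broker delivers each message to exactly one
    # instance (the nova-scheduler-like behaviour mentioned at 13:15:37).
    import sys

    from oslo_config import cfg
    import oslo_messaging as messaging


    class CascadeEndpoint(object):
        """Handles requests dispatched from the top-layer API."""

        def create_server(self, ctxt, **kwargs):
            # ctxt carries the token from the top API request, so this
            # instance can authenticate to the bottom OpenStack(s) (13:26:29).
            pass


    def main():
        cfg.CONF(sys.argv[1:], project='cascade')
        transport = messaging.get_transport(cfg.CONF)
        # fanout defaults to False: every instance declaring this topic
        # shares one queue, and each message is handed to only one of them.
        target = messaging.Target(topic='cascade-service', server='worker-1')
        server = messaging.get_rpc_server(transport, target,
                                          [CascadeEndpoint()])
        server.start()
        server.wait()


    if __name__ == '__main__':
        main()

[Starting a second process with a different server name then gives active/active consumption: a cast to the 'cascade-service' topic is processed by exactly one instance.]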
13:30:14 federated Keystone was not tried in the PoC
13:30:26 which background periodic task?
13:30:50 a periodic task to poll the VM/volume/port status
13:31:11 so that the status in the top layer will be refreshed
13:31:36 but if we have interception at the API layer, we can change this approach
13:31:37 At the moment, as a starting point, we said that we are going to pass run-time info to the bottom layer
13:32:56 the status could be queried only when an API request comes in: return the stale status first, and refresh the status immediately for the next query
13:33:40 to gampel: what do you mean by "pass run-time info to the bottom layer"?
13:34:34 through the delayed refresh for the next query, we can reduce the use of a configured user to sync the status from the bottom OpenStack
13:34:54 I mean if you get a port status request from the API, you pass it to the cascading service, which will query the bottom OpenStack
13:35:34 But if it is a configuration request, the query gets it from the DB
13:35:59 Ok, but if there are a lot of ports, the latency of a two-hop query will be longer
13:37:16 You assume the query requests will arrive together; is that what happened in the POC?
13:37:46 and if multiple bottom OpenStacks are involved, it's complex to query them at the same time
13:38:30 a query can be issued with different filters, and the ports matching the filter will be returned
13:38:55 the filters can vary a lot
13:39:30 Ok, I suggest we treat it now as an optimization we can add later, after we have the basic stuff working
13:40:42 ok
13:41:09 I want to give an update on the status of the experimental work
13:41:50 I think the current design is a very good starting point
13:41:57 we have the infrastructure for the foundation service
13:41:58 so how about nova
13:42:14 the neutron core plugin on top passes the request via the message queue up to the service endpoint
13:42:47 I mentioned in the mail that we need to inherit from the API class, just like cells does
13:42:59 nova is almost done and will be checked in on Sunday
13:43:16 yes, we will send it soon
13:43:23 ooo, that's great
13:43:31 cool
13:43:48 I think once nova and neutron work, we can move it to the master branch
13:43:58 if you like, we have the infrastructure now to work on it together
13:44:16 so that we can also add code to it
13:44:35 the next step is to create the SQLAlchemy DB module for the cascading service
13:45:18 devstack is ready and working as well for neutron requests
13:45:20 zhiyuan, how about studying how to make multiple cascade services work in parallel: multi-worker, multi-instance
13:45:41 I think it's necessary
13:46:25 we will update the patches and merge them all into the experimental branch on Sunday
13:46:26 gampel, could you upload the code to git for review
13:46:39 so you could join the work next week
13:47:02 that's super cool, then we can work together on the new stuff
13:47:02 no problem, i can try based on saggi's patch
13:47:43 there are now 3 patches by saggi covering neutron, devstack and the cascading service, including the endpoint
13:48:10 probably Sunday we will have Nova in as well
13:48:23 and I will set up the new CI job after all the new code is submitted
13:48:44 perfect, zhiyuan
13:48:54 can we add a test to the CI to do at least pep8
13:49:52 let's try it
13:49:55 sure
13:49:59 thx
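[Editor's note: a minimal sketch, under stated assumptions, of the delayed-refresh idea from 13:32:56/13:34:34: return the cached, possibly stale status immediately, then refresh it in the background so the next query sees fresh data. PortStatusCache and fetch_from_bottom are hypothetical names; fetch_from_bottom would wrap a bottom-OpenStack API call such as a neutron port query.]

    # Sketch only: serve the stale value now, refresh for the next query,
    # instead of blocking each API call on a two-hop bottom-site query.
    import threading


    class PortStatusCache(object):
        def __init__(self, fetch_from_bottom):
            self._fetch = fetch_from_bottom  # the two-hop query lives here
            self._cache = {}                 # port_id -> last known status
            self._lock = threading.Lock()

        def get_status(self, port_id):
            with self._lock:
                stale = self._cache.get(port_id, 'UNKNOWN')
            # Refresh without blocking the API response.
            t = threading.Thread(target=self._refresh, args=(port_id,))
            t.daemon = True
            t.start()
            return stale

        def _refresh(self, port_id):
            status = self._fetch(port_id)
            with self._lock:
                self._cache[port_id] = status

[As agreed at 13:39:30, this is only an optimization to revisit once the basics work; batching refreshes per bottom site would also ease the many-ports, many-filters concern raised at 13:35:59.]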
13:50:14 how about the topic for the Tokyo summit
13:51:03 i submitted it today, with irena, me and joe as speakers
13:51:22 the CJK project will have a PoC of cascading/tricircle
13:52:13 can you elaborate on what CJK is?
13:52:45 CJK submitted one topic, inter-cloud in China, Japan and Korea, by Takashi, me and guys from Korea
13:53:00 CJK is China, Japan, Korea
13:53:22 nice. info synced
13:53:37 very nice, please share the POC results when you have them
13:54:20 and on OPNFV day: the multisite project is a formal project in OPNFV, so we'll apply for one session on OPNFV day at the Tokyo summit
13:54:43 Sure, we will start soon
13:55:06 We have made very good progress
13:55:20 let's have a meeting next week
13:55:26 Ok, thx
13:55:38 thank you all
13:56:05 see you next time
13:56:08 thank you
13:56:10 bye
13:56:17 thank you, bye
13:56:28 #endmeeting