13:03:58 #startmeeting tricircle
13:03:59 Meeting started Wed Jul 29 13:03:58 2015 UTC and is due to finish in 60 minutes. The chair is zhipeng. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:04:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:04:02 The meeting name has been set to 'tricircle'
13:04:11 #topic Roll Call
13:04:20 card punching time :P
13:04:25 #info zhipeng
13:04:26 #info gampel
13:04:31 #info joehuang
13:05:00 is saggi around?
13:05:13 saggi will not join us today
13:05:25 okay
13:05:38 #topic Design BP
13:05:58 let's try to use the IRC commands more
13:06:09 start with #info to share your thoughts :P
13:06:16 #info zhiyuan
13:06:38 #info I just updated the DB design to reduce the duplication with Keystone
13:06:49 how about your ideas?
13:07:38 ok, looks good, I will go over it after the meeting
13:07:47 #info the project id should be in the mapping table
13:07:57 I missed it
13:08:35 and I am afraid that the mapping table will grow very large
13:08:45 can you add a section describing how we work with one Keystone
13:09:06 hence, shall we split it into several tables or not?
13:09:26 yes, since most of the resources need a mapping entry
13:09:34 you mean by type of resource?
13:10:00 ok, I'll do that. Today I am in the lab to prepare the CJK VPN connection, not updating the doc
13:10:35 #action update the doc for how to work with Keystone, joehuang
13:10:54 joe: do you mean create a table per resource?
13:11:08 looks good
13:11:52 for example, cinder object mapping in one table, nova resource mapping in another table
13:12:11 or just use the project id as the sharding key
13:12:34 we can build sharding capability into the DAL layer
13:13:10 so that if the table grows large we have a way to split it into several smaller tables
13:13:17 is this a good idea?
13:14:05 to gampel, your question?
13:14:05 so we have a table per bottom site
13:15:04 per bottom site may not be such a good idea, since sometimes you may have to query multiple tables
13:15:42 could we make the sharding policy configurable?
13:16:03 so what do you mean by "sharding capability"? grouping bottom sites into one table?
13:16:36 per bottom site, or per project, or per AZ
13:18:28 let me see how to do sharding in case the table becomes large
13:19:15 how about the task flow
13:19:32 before that, one thing I would like to discuss
13:19:34 #topic task flow in cascade service
13:19:40 do we need to maintain a session reference in a context? as I understand it, the information in the context can be used for policy checks, but we can finish the policy check before database access
13:19:41 please
13:20:48 I thought of the context as a way to understand who is querying the DB: admin / tenant user, etc.
13:20:50 the Nova/Cinder/Neutron API should have done the policy check already, shouldn't it?
13:21:07 yes, but we have the cascading service API as well
13:21:26 they query/update the DAL
13:22:04 the cascade service API is mainly for admin; do you mean we will open the cascade service API to tenants?
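The configurable-sharding idea raised above (split the mapping table per bottom site, per project, or per AZ, hidden inside the DAL) could be sketched roughly as below. This is an illustrative sketch only, not agreed Tricircle code; `ResourceMappingDAL`, the policy names, and the column names are all assumptions.

```python
# Hypothetical sketch: route a mapping-table access to one of several
# physical tables based on a configurable sharding policy.
# Every name here is illustrative, not real Tricircle code.

class ResourceMappingDAL:
    # The three policies match the options raised in the meeting:
    # per bottom site, per project, or per availability zone.
    POLICIES = ("site", "project", "az")

    def __init__(self, policy="project", shard_count=4):
        if policy not in self.POLICIES:
            raise ValueError("unknown sharding policy: %s" % policy)
        self.policy = policy
        self.shard_count = shard_count

    def _shard_key(self, mapping):
        # Pick the column used to spread rows across tables.
        return {"site": mapping["site_id"],
                "project": mapping["project_id"],
                "az": mapping["az"]}[self.policy]

    def table_for(self, mapping):
        # Stable within one process: hash -> one of N physical tables.
        # The DAL hides the split from callers, so the policy can
        # change without touching the cascade service API.
        shard = hash(self._shard_key(mapping)) % self.shard_count
        return "resource_mapping_%d" % shard
```

Note the concern voiced in the meeting still applies: a query by any key other than the sharding key has to fan out across all shards, which is why per-bottom-site tables "may have to query multiple tables".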
13:23:28 for the moment it seems that it is, but we might need a roles-and-responsibilities API
13:23:35 by concept, tenants shouldn't be aware of the cascade service
13:24:03 in my opinion, the context will be passed to a function like query_mapping; query_mapping will use the information in the context to finish the policy check, then issue a database access
13:24:55 the session is created before the database access
13:25:01 I think this will be a good infrastructure to allow different rules in the cascading API
13:25:36 but not included in the context
13:25:39 one context is from the API, maybe with different admin rules, and another context is from the running service
13:27:18 zhiyuan, is it difficult to include the session in the context?
13:27:28 sorry, I meant roles
13:28:50 the session lifecycle is shorter than that of other information like project, roles, etc.
13:30:30 if the session is over, shall we remove it from the context?
13:30:32 it's easy, I just feel it's a bit strange. context and session are separated in nova, but together in neutron ...
13:30:59 what do they do in nova?
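The pattern zhiyuan describes above (policy check driven by the context, with a short-lived session created only for the database access and never stored on the context) could look roughly like the following. A minimal sketch under stated assumptions: `Context`, `check_policy`, `get_session`, and `query_mapping` are all assumed names, and the toy admin-only rule stands in for a real rule-driven policy engine such as oslo.policy.

```python
# Illustrative sketch only: the context carries identity/roles for
# policy checks; the session is created at DB-access time with a
# lifetime of just that one access, rather than living on the context.
from contextlib import contextmanager

class Context(object):
    def __init__(self, project_id, roles):
        self.project_id = project_id
        self.roles = roles  # used for policy checks, not for DB access

def check_policy(ctx, action):
    # Toy rule for illustration: only admins may query mappings.
    if "admin" not in ctx.roles:
        raise RuntimeError("policy check failed for %s" % action)

@contextmanager
def get_session():
    # Stand-in for a SQLAlchemy session factory; the session exists
    # only inside this context manager.
    session = {"open": True}
    try:
        yield session
    finally:
        session["open"] = False

def query_mapping(ctx, top_id):
    check_policy(ctx, "query_mapping")   # policy first, from the context
    with get_session() as session:       # session scoped to this access
        return {"top_id": top_id, "session_open": session["open"]}
```

This keeps the core database part independent of the context, which is the separation zhiyuan asks for below.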
13:31:21 both the session and the context are passed to the database handler
13:32:09 but if you pass the context anyway, it makes sense to have the session as a member
13:33:46 zhiyuan: if you feel this work is not needed we could discuss it offline
13:34:33 ok, I just want the core database part not to depend on the context
13:34:57 zhiyuan/gampel, one question: if the task flow accesses the DB many times, will the session change between the different DB accesses?
13:35:04 yes, I agree
13:35:18 #topic task flow in cascade service
13:36:25 I think the task compiler and dependency builder will access the DB with one session
13:37:19 hi okrieg
13:37:24 #info it'll be difficult to process the intermediate results produced during the task flow if we use Mistral
13:38:18 #info therefore I would prefer to use task flow as it is used in Cinder, and not have a dependency on Mistral
13:38:36 Mistral supports passing results to the next task flow
13:38:57 you mean the API combination?
13:39:12 but some results should be processed by the cascade service
13:39:23 for example, during a VM boot task flow
13:39:37 you have to record the object uuid mapping
13:39:50 you mean the port status and vm status
13:40:38 the cascade service should be aware of newly created objects and persist the mapping in the DB
13:41:10 I will check whether Mistral supports providing a response
13:41:39 anyway, we could have the task executor as a pluggable module
13:41:54 that's a good idea
13:42:45 #info task flow exec should be a pluggable module
13:43:03 #action gampel: check what Mistral supports and which task flow we want in the reference implementation
13:44:09 hi, okrieg back
13:45:26 #topic cascade dal
13:45:33 I heard that saggi is sick, sorry to hear that
13:45:58 I will let him know :)
13:47:09 bring our regards to him :)
13:47:15 I think we can have a runnable framework for the first BP, but not do too much
13:47:49 yes, we need to start working on the dependency builder and the task compiler
13:47:55 more patches to the framework, but it is still working
13:49:56 agree
13:50:47 zhiyuan, anything you want to discuss about the dal?
13:52:29 the DAL part will grow as more features are included
13:53:04 #info "port status" and other site-localized information collection
13:53:07 currently we have a basis to start, that's enough
13:53:52 zhipeng: it was discussed in the previous section :)
13:53:53 we analyzed how to do the port-status caching mechanism (we discussed this last week)
13:54:12 can we add thoughts about "port status" to the doc?
13:54:40 we think the poll approach will not scale, we need something else
13:54:58 interesting
13:55:26 we think we need to introduce a "bottom" service (a "cascaded service" or something like it) that will handle bottom data collection (and other tasks) locally
13:55:42 like l2gw, port-status monitoring (and pushing it up to the "top")
13:56:19 the local service should be aware of status changes
13:56:32 it will let us synchronize different network implementations (vlan<->vxlan<->ovn<->stt ...)
13:57:39 we're still working on it, but we think it should implement the L2GW API, and maybe other APIs as well (it should probably be a pluggable system with extensions)
13:57:57 gampel, I think this is what joehuang mentioned before, the bottom service
13:58:13 if so, you have to touch the bottom OpenStack. Will all vendors of the distributions allow a new service to be inserted?
13:58:20 okay, maybe I missed it...
13:59:30 the "local cascaded service" needs to be designed as pluggable driver(s) that each distribution can implement
14:00:12 need more discussion
14:00:12 the global cascade service plugs into the top, whereas the local ones plug into the bottom
14:00:15 as for making a change to the bottom instance - it will not. it's just another service you deploy to enable participation in the "Cascading Cloud" (not different from L2GW)
14:00:44 we can discuss it on the m-l
14:00:54 @zhipeng - yes, I think you're right
14:01:06 @joehuang - yes, that's probably a good idea :)
14:01:31 please start a discussion on the m-l
14:01:42 #action discuss the pluggable cascade service module on the bottom
14:02:23 okay, let's wrap up the meeting, we got a lot of AIs :)
14:02:36 I agree
14:02:36 thanks for participating, guys
14:02:48 thank you guys
14:02:49 yes, thanks a lot
14:02:59 bye
14:03:05 bye
14:03:07 bye~~
14:03:14 #endmeeting
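The pluggable task executor recorded in the #info above ("task flow exec should be a pluggable module") could be shaped as a small driver interface, so that a TaskFlow-style in-process executor (as used in Cinder) and a Mistral-backed one are interchangeable. A sketch under stated assumptions: none of these classes exist in Tricircle, and a real Mistral executor would depend on the result-passing support gampel agreed to check.

```python
# Hypothetical sketch of a pluggable task-flow executor interface.
import abc

class TaskFlowExecutorBase(abc.ABC):
    @abc.abstractmethod
    def run(self, tasks):
        """Run tasks in order and return their results, so the cascade
        service can persist intermediate output (e.g. uuid mappings)."""

class LocalTaskFlowExecutor(TaskFlowExecutorBase):
    # In-process execution in the spirit of Cinder's use of TaskFlow:
    # each task's result is visible to the service immediately.
    def run(self, tasks):
        results = {}
        for name, func in tasks:
            results[name] = func(results)  # each task sees prior results
        return results

def load_executor(name="local"):
    # Config-driven choice of executor; a Mistral-backed implementation
    # could be registered here once its result handling is verified.
    executors = {"local": LocalTaskFlowExecutor}
    return executors[name]()
```

For example, in a VM-boot flow the service can record the bottom VM's uuid mapping as soon as the boot task returns, which is exactly the intermediate-result handling that was the concern with Mistral.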
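The "local cascaded service" discussed at the end of the meeting is explicitly meant to be pluggable driver(s) that each distribution can implement, pushing status up to the top instead of being polled. One possible shape, pending the mailing-list discussion, is a driver base class with a push callback; all names here are assumptions for illustration.

```python
# Sketch of a pluggable driver interface for the proposed local
# ("bottom") cascaded service: each distribution implements its own
# collection (port status, l2gw, ...) and pushes changes to the top.
import abc

class BottomServiceDriver(abc.ABC):
    def __init__(self, push_callback):
        # Called with (resource_id, status) whenever a local change is
        # observed: push-to-top instead of top-side polling, which the
        # meeting concluded will not scale.
        self.push = push_callback

    @abc.abstractmethod
    def start(self):
        """Begin watching local resources and pushing status changes."""

class FakePortStatusDriver(BottomServiceDriver):
    # Toy driver: "detects" a single status change and pushes it up.
    def start(self):
        self.push("port-1", "ACTIVE")
```

Because the driver is just another deployed service (like an L2GW agent), no change to the bottom OpenStack code itself is required, which addresses the distribution-vendor concern raised above.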