13:03:58 <zhipeng> #startmeeting tricircle
13:03:59 <openstack> Meeting started Wed Jul 29 13:03:58 2015 UTC and is due to finish in 60 minutes. The chair is zhipeng. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:04:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:04:02 <openstack> The meeting name has been set to 'tricircle'
13:04:11 <zhipeng> #topic Roll Call
13:04:20 <zhipeng> card punching time :P
13:04:25 <zhipeng> #info zhipeng
13:04:26 <gampel> #info gampel
13:04:31 <joehuang> #info joehuang
13:05:00 <zhipeng> is saggi around?
13:05:13 <gampel> saggi will not join us today
13:05:25 <zhipeng> okay
13:05:38 <zhipeng> #topic Design BP
13:05:58 <zhipeng> let's try to use IRC commands more
13:06:09 <zhipeng> start with #info to share your thoughts :P
13:06:16 <zhipeng> #info zhiyuan
13:06:38 <joehuang> #info I just updated the DB design, to reduce the duplication with keystone
13:06:49 <joehuang> how about your ideas?
13:07:38 <gampel> ok, looks good, I will go over it after the meeting
13:07:47 <joehuang> #info the project id should be in the mapping table
13:07:57 <joehuang> I missed it
13:08:35 <joehuang> and I am afraid that the mapping table will grow very large
13:08:45 <gampel> can you add a section describing how we work with one keystone
13:09:06 <joehuang> hence, shall we split it into several tables or not?
13:09:26 <zhiyuan> yes, since most of the resources need a mapping entry
13:09:34 <gampel> you mean by type of resource?
13:10:00 <joehuang> ok, I'll do that. Today I am in the lab to prepare the CJK VPN connection, not updating the doc
13:10:35 <joehuang> #action joehuang to update the doc on how to work with Keystone
13:10:54 <gampel> joe: do you mean create a table per resource?
13:11:08 <joehuang> looks good
13:11:52 <joehuang> for example, cinder object mapping in one table, nova resource mapping in another table
13:12:11 <joehuang> or just use the project id as the sharding key
13:12:34 <joehuang> we can build sharding capability into the DAL layer
13:13:10 <joehuang> so that if the table grows large we have a way to split it into several small tables
13:13:17 <gampel> this is a good idea?
13:14:05 <joehuang> to gampel: your question?
13:14:05 <gampel> so we have a table per bottom site
13:15:04 <joehuang> per bottom site may not be such a good idea, since sometimes you may have to query multiple tables
13:15:42 <joehuang> may we make the sharding policy configurable?
13:16:03 <gampel> so what do you mean by "sharding capability"? a group of bottom sites in one table?
13:16:36 <joehuang> per bottom site, or per project, or per AZ
13:18:28 <joehuang> let me see how to do sharding in case the table becomes large
13:19:15 <joehuang> how about the task flow
13:19:32 <zhiyuan> before that, one thing I would like to discuss
13:19:34 <joehuang> #topic task flow in cascade service
13:19:40 <zhiyuan> do we need to maintain a session reference in a context? as I understand, the information in a context can be used for policy checks, but we can finish the policy check before database access
13:19:41 <joehuang> please
13:20:48 <gampel> I thought of the context as a way to understand who is querying the DB: admin, tenant user, etc.
13:20:50 <joehuang> the Nova/Cinder/Neutron API should have done the policy check, shouldn't it?
13:21:07 <gampel> yes, but we have the cascading service API as well
13:21:26 <gampel> they query/update the DAL
13:22:04 <joehuang> the cascade service API is mainly for admins; do you mean we will open the cascade service API to tenants?
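[The configurable sharding policy discussed above (per project, per AZ, or per bottom site) could be sketched roughly as follows. This is a hypothetical illustration, not the actual tricircle schema; `mapping_table_for`, the `*_id` keys, and the table naming are all made up for the example.]

```python
# Pick a mapping-table shard from a configurable sharding policy, so the
# DAL can split the mapping table when it grows large. Illustrative only.

SHARD_POLICIES = ('project', 'az', 'site')


def mapping_table_for(resource, policy='project', num_shards=4):
    """Return the shard table name holding this resource's mapping entry.

    `resource` is a dict carrying the candidate sharding keys
    (project_id, az_id, site_id); `policy` decides which one is used.
    """
    if policy not in SHARD_POLICIES:
        raise ValueError('unknown sharding policy: %s' % policy)
    key = resource['%s_id' % policy]
    # Stable hash so the same key always lands in the same shard.
    shard = sum(key.encode()) % num_shards
    return 'resource_mapping_%s_%d' % (policy, shard)


resource = {'project_id': 'p1', 'az_id': 'az1', 'site_id': 's1'}
table = mapping_table_for(resource, policy='project')
```

The point of keeping the policy a single config value is joehuang's concern above: per bottom site forces cross-table queries for some lookups, so the deployment should be able to choose the key that matches its query pattern.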
13:23:28 <gampel> for the moment it seems that it is, but we might need a roles and responsibilities API
13:23:35 <zhipeng> conceptually, tenants shouldn't be aware of the cascade service
13:24:03 <zhiyuan> in my opinion, the context will be passed to a function like query_mapping; query_mapping will use the information in the context to finish the policy check, then issue a database access
13:24:55 <zhiyuan> the session is created before the database access
13:25:01 <gampel> I think that this will be a good infrastructure to allow different rules in the cascading API
13:25:36 <zhiyuan> but not included in the context
13:25:39 <gampel> one context is from the API, maybe with different admin rules, and another context is from the running service
13:27:18 <joehuang> zhiyuan, is it difficult to include the session in the context?
13:27:28 <gampel> sorry, I meant roles
13:28:50 <joehuang> the session lifecycle is shorter than that of other information like project, roles, etc.
13:30:30 <joehuang> if the session is over, shall we then remove it from the context?
13:30:32 <zhiyuan> it's easy, it just feels a bit strange to me. context and session are separated in nova, but together in neutron ...
13:30:59 <gampel> what do they do in nova?
13:31:21 <zhiyuan> both the session and the context are passed to the database handler
13:32:09 <gampel> but if you pass the context anyway, it makes sense to have the session as a member
13:33:46 <gampel> zhiyuan: if you feel this work is not needed we could discuss this offline
13:34:33 <zhiyuan> ok, I just want the core database part not to depend on the context
13:34:57 <joehuang> zhiyuan/gampel, one question: if the task flow accesses the DB many times, will the session change between different DB accesses?
13:35:04 <gampel> yes, I agree
13:35:18 <zhipeng> #topic task flow in cascade service
13:36:25 <gampel> I think that the task compiler and dependency builder will access the db with one session
13:37:19 <zhipeng> hi okrieg
13:37:24 <joehuang> #info it'll be difficult to process the intermediate results produced during the task flow if we use mistral
13:38:18 <joehuang> #info therefore I would prefer to use a task flow like the one used in Cinder, but not have a dependency on Mistral
13:38:36 <gampel> mistral supports passing results to the next taskflow
13:38:57 <gampel> you mean the API combination?
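[The context/session separation zhiyuan argues for above could look roughly like this minimal sketch: the context carries only identity and role data for the policy check, while the session is created inside the DAL call and is never stored on the context. `query_mapping`, the `Context` fields, and the in-memory "table" are hypothetical stand-ins, not actual tricircle code.]

```python
# Context carries who is calling (for the policy check); the DB session
# is created per access inside the DAL, matching the session's shorter
# lifecycle that joehuang points out above.


class Context:
    def __init__(self, user_id, project_id, is_admin=False):
        self.user_id = user_id
        self.project_id = project_id
        self.is_admin = is_admin  # used only for the policy check


# Stand-in for the real mapping table.
MAPPINGS = [
    {'project_id': 'p1', 'top_id': 't1', 'bottom_id': 'b1'},
    {'project_id': 'p2', 'top_id': 't2', 'bottom_id': 'b2'},
]


def query_mapping(ctx, project_id):
    # 1. Policy check, driven by the context.
    if not ctx.is_admin and ctx.project_id != project_id:
        raise PermissionError('cannot read another project\'s mappings')
    # 2. Session created here, scoped to this access, never kept on ctx
    #    (stand-in for a real sqlalchemy session/query).
    session = iter(MAPPINGS)
    return [m for m in session if m['project_id'] == project_id]
```

This keeps the core database part independent of the context, which is zhiyuan's stated goal, while still letting API-originated and service-originated contexts carry different roles as gampel suggests.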
13:39:12 <joehuang> but some results should be processed by the cascade service
13:39:23 <joehuang> for example, during a VM boot task flow
13:39:37 <joehuang> you have to record the object uuid mapping
13:39:50 <gampel> you mean the port status and vm status
13:40:38 <joehuang> the cascade service should be aware of newly created objects and persist the mapping in the DB
13:41:10 <gampel> I will check if mistral supports providing responses
13:41:39 <gampel> anyway, we could have the task exec as a pluggable module
13:41:54 <joehuang> that's a good idea
13:42:45 <joehuang> #info task flow exec should be a pluggable module
13:43:03 <gampel> #action gampel to check what mistral supports and which taskflow we want in the reference implementation
13:44:09 <joehuang> hi, okrieg is back
13:45:26 <zhipeng> #topic cascade dal
13:45:33 <joehuang> I heard that Saggi is sick, sorry to hear that
13:45:58 <gampel> I will let him know :)
13:47:09 <zhipeng> bring our regards to him :)
13:47:15 <joehuang> I think we can have a runnable framework for the first BP, but not do too much
13:47:49 <gampel> yes, we need to start working on the dependency builder and the task compiler
13:47:55 <joehuang> more patches to the framework, but still working
13:49:56 <joehuang> agree
13:50:47 <zhipeng> zhiyuan, anything you want to discuss about the dal?
13:52:29 <joehuang> the DAL part will grow as more features are included
13:53:04 <gampel> #info "port status" and other site-localized information collection
13:53:07 <joehuang> currently we have a basis to start, that's enough
13:53:52 <zhiyuan> zhipeng: it was discussed in the previous section :)
13:53:53 <gampel> we analyzed how to do the port-status caching mechanism (we discussed this last week)..
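[The "pluggable task exec" idea agreed in the task-flow discussion above could be sketched as follows: the cascade service talks to an abstract executor, so a Cinder-style in-process taskflow or a Mistral backend can be swapped in, while the callback that persists uuid mappings stays in the cascade service, as joehuang requires. All class and method names here are hypothetical.]

```python
# Pluggable task-flow executor: the backend runs tasks, the cascade
# service supplies a callback that records intermediate results such as
# top/bottom uuid mappings created during a VM boot flow.

import abc


class TaskFlowExecutor(abc.ABC):
    @abc.abstractmethod
    def run(self, tasks, on_result):
        """Run tasks in order, feeding each task's result to on_result."""


class InProcessExecutor(TaskFlowExecutor):
    """Cinder-style in-process backend; a Mistral backend would be
    another subclass behind the same interface."""

    def run(self, tasks, on_result):
        for task in tasks:
            on_result(task())  # e.g. persist the uuid mapping


# Usage: a boot-VM flow where each task returns the bottom uuid of the
# object it created, and the cascade service persists the mapping.
mappings = {}


def record_mapping(result):
    mappings[result['top_id']] = result['bottom_id']


flow = [
    lambda: {'top_id': 'port-t1', 'bottom_id': 'port-b1'},
    lambda: {'top_id': 'vm-t1', 'bottom_id': 'vm-b1'},
]
InProcessExecutor().run(flow, record_mapping)
```

Because only the executor class differs between backends, gampel's action item (check what Mistral supports) can be answered without touching the mapping-persistence logic.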
13:54:12 <joehuang> can we add thoughts about "port status" to the doc?
13:54:40 <gampel> we think the poll approach will not scale, we need something else
13:54:58 <joehuang> interesting
13:55:26 <gampel> we think we need to introduce a "bottom" service (like a "cascaded service" or something) that will handle bottom data collection (and other tasks) locally
13:55:42 <gampel> like l2gw, port-status monitoring (and pushing it up to the "top"),
13:56:19 <joehuang> the local service should be aware of status changes
13:56:32 <gampel> it will let us synchronize different network implementations (vlan<->vxlan<->ovn<->stt ...)
13:57:39 <gampel> we're still working on it, but we think that it should implement the L2GW API, and maybe other APIs as well (it should probably be a pluggable system with extensions)
13:57:57 <zhipeng> gampel, I think this is what joehuang mentioned before, the bottom service
13:58:13 <joehuang> if so, you have to touch the bottom OpenStack. Will all vendors of the distributions allow a new service to be inserted?
13:58:20 <gampel> okay, maybe I missed it...
13:59:30 <gampel> the "local cascaded service" needs to be designed with pluggable driver(s) that each distribution can implement
14:00:12 <joehuang> needs more discussion
14:00:12 <zhipeng> the global cascade service plugs into the top, whereas the local ones plug into the bottom
14:00:15 <gampel> as for making a change to the bottom instance - it will not. it's just another service you deploy to enable participation in the "Cascading Cloud" (no different from L2GW)
14:00:44 <joehuang> we can discuss on the m-l
14:00:54 <gampel> @zhipeng - yes, I think you're right
14:01:06 <gampel> @joehuang - yes, that's probably a good idea :)
14:01:31 <joehuang> please trigger a discussion on the m-l
14:01:42 <zhipeng> #action discuss the pluggable cascade service module on the bottom
14:02:23 <zhipeng> okay, let's wrap up the meeting, we've got a lot of AIs :)
14:02:36 <gampel> I agree
14:02:36 <zhipeng> thanks for participating, guys.
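[The "local cascaded service" driver idea above, where each distribution implements a pluggable driver and port-status changes are pushed up to the top rather than polled, could be sketched as below. The interface and names are hypothetical, not an actual tricircle API.]

```python
# Pluggable bottom-service driver: the local service calls the driver on
# status changes, and the driver decides how to report them upward,
# replacing the poll approach that gampel says will not scale.

import abc


class BottomDriver(abc.ABC):
    @abc.abstractmethod
    def on_port_status_change(self, port_id, status):
        """Called by the local service when a bottom port's status changes."""


class PushToTopDriver(BottomDriver):
    """Example driver: push each change up to the top immediately."""

    def __init__(self):
        self.pushed = []  # stand-in for an RPC/API call to the top

    def on_port_status_change(self, port_id, status):
        self.pushed.append((port_id, status))


driver = PushToTopDriver()
driver.on_port_status_change('port-b1', 'ACTIVE')
```

Because the driver is just another deployed service behind an interface, a distribution can participate in the "Cascading Cloud" without changes to the bottom OpenStack instance itself, which addresses joehuang's vendor concern.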
14:02:48 <gampel> thank you guys
14:02:49 <joehuang> yes, thanks a lot
14:02:59 <joehuang> bye
14:03:05 <zhipeng> bye
14:03:07 <zhiyuan> bye~~
14:03:14 <zhipeng> #endmeeting