13:05:58 #startmeeting tricircle
13:05:58 Meeting started Wed Jul 22 13:05:58 2015 UTC and is due to finish in 60 minutes. The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:05:59 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:06:02 The meeting name has been set to 'tricircle'
13:06:17 #topic rollcall
13:06:28 #info joe
13:06:33 #info zhiyuan
13:06:34 #info saggi
13:06:37 #info gampel
13:07:09 #topic db
13:07:51 hello, let's discuss the table design of the cascade service
13:08:18 because Keystone already stores all information about endpoints
13:08:43 we added the basic endpoint schema to the doc
13:08:58 What do you mean by endpoint?
13:09:01 A site?
13:09:04 so if we also store endpoint information in the cascade service, it may lead to duplication
13:09:32 a service endpoint is the public/internal/admin URL for one service like nova, cinder
13:09:47 not a site
13:10:25 So you want us to just ask the bottom Keystone?
13:10:35 instead of storing it in the DB?
13:10:40 can you please explain what is stored in Keystone, I am not sure I follow
13:10:43 So we may only store region info in the cascade service, and ask Keystone if required
13:11:02 joehuang: What region info?
13:11:39 there is one table in Keystone, called region; one OpenStack instance is one region
13:12:18 and each region will include services like nova, cinder, neutron, glance, ceilometer, or other services
13:12:34 can we get from Keystone the URLs to access a region?
13:12:43 each service has endpoints: public/internal/admin URLs
13:12:55 hi gampel, you got my idea
13:13:14 we simply store the region, and cache the endpoint URL
13:13:37 if the cached URL can't be accessed (timeout), then refresh it from Keystone
13:13:42 So we could store the access URL for the bottom stack in Keystone
13:13:58 Keystone already stores that
13:14:22 joehuang: It's odd that Keystone already has that feature
13:14:31 joehuang: What is it being used for?
13:15:31 once the user has been authenticated, the user can retrieve the endpoint list and select one region to access
13:16:32 hello, zhipeng, welcome back. Hope your babies are sleeping ;)
13:17:07 joehuang thx they are sleeping like babies :P
13:17:07 The cascade service will need to be able to access the information for all regions.
13:17:16 zhipeng: :)
13:17:31 joehuang: Is it possible to query for multiple regions?
13:17:45 sure, that's Keystone's ability
13:18:36 So you mean when an admin adds a cascaded site, we will configure a new region in Keystone and store all the information there
13:18:36 for the cascade service, we can provide a way to specify which regions will be managed by the cascade service
13:18:59 joehuang: So for each region there are multiple tenants, or is that orthogonal?
13:19:10 yes, the admin first manages these regions and endpoints in Keystone
13:19:31 each tenant may span several regions
13:19:47 any region may support multi-tenancy
13:19:56 But we want to create one Cascading API to manage adding new sites
13:20:25 is it possible to create the region in Keystone from the cascading OpenStack service?
13:20:48 no need to do this.
13:21:01 Keystone is for this
13:21:28 but the cascade service needs to know which regions it will cover
13:21:41 joehuang: And generate appropriate AZ information
13:21:42 this configuration API is needed
13:22:18 How do you configure regions in Keystone?
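[Editor's note] The exchange above (13:10-13:17) settles on keeping only region info in the cascade service and asking Keystone for endpoint URLs on demand. A minimal sketch of that lookup, assuming python-keystoneclient v3 over keystoneauth1; the auth URL and credentials are placeholders:

    # Hypothetical lookup of regions and their service endpoints; the
    # credentials below are placeholders, not tricircle configuration.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    ks = client.Client(session=session.Session(auth=auth))

    # One OpenStack instance == one region.
    for region in ks.regions.list():
        print(region.id)

    # Endpoints carry the public/internal/admin URL per service; filter
    # client-side by region rather than relying on server-side filters.
    services = {s.id: s.type for s in ks.services.list()}
    for ep in ks.endpoints.list():
        if ep.region_id == 'RegionOne':
            print(services[ep.service_id], ep.interface, ep.url)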
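[Editor's note] joehuang's refresh rule at 13:13 ("cache the endpoint URL; if it times out, refresh from Keystone") could look like the sketch below. EndpointCache and its layout are illustrative, not existing tricircle code:

    # Illustrative cache keyed by (region, service type, interface).
    import requests

    class EndpointCache(object):
        def __init__(self, ks_client):
            self._ks = ks_client          # a keystoneclient.v3 Client
            self._urls = {}

        def _refresh(self, region_id):
            services = {s.id: s.type for s in self._ks.services.list()}
            for ep in self._ks.endpoints.list():
                if ep.region_id == region_id:
                    self._urls[(region_id, services[ep.service_id],
                                ep.interface)] = ep.url

        def get(self, region_id, service_type, interface='public'):
            key = (region_id, service_type, interface)
            if key not in self._urls:
                self._refresh(region_id)
            return self._urls.get(key)

        def call(self, region_id, service_type, path):
            try:
                return requests.get(self.get(region_id, service_type) + path,
                                    timeout=5)
            except requests.exceptions.Timeout:
                # Cached URL unreachable: refresh from Keystone, retry once.
                self._refresh(region_id)
                return requests.get(self.get(region_id, service_type) + path,
                                    timeout=5)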
13:22:19 AZ information should also be configured in the bottom OpenStack
13:22:28 and retrieved by the cascade service
13:24:21 joehuang: I wanted to talk about that. I was wondering why not just expose all the compute nodes and create a logical immutable AZ for them. Do you follow?
13:24:45 #link http://docs.openstack.org/cli-reference/content/keystoneclient_commands.html
13:24:53 Let's first action-item this topic: I suggest that we study this a bit and, if it makes sense, update the doc and change the db schema
13:25:09 gampel: +1
13:25:58 to saggi, you mean nodes in the bottom OpenStack or the top layer OpenStack?
13:26:07 agree
13:26:09 bottom layer
13:26:16 joehuang: ^^
13:26:23 to expose them as-is in the top layer
13:26:44 With an automatic AZ to be able to lock scheduling to a specific site.
13:26:45 yes, it's possible in the new design
13:27:10 joehuang: great
13:27:17 because all nodes in the bottom OpenStack will be grouped into AZs
13:27:24 yes
13:27:27 and then we will create a default AZ per site but allow creating inner AZs
13:27:33 so we can get AZ info from all the bottom OpenStacks
13:27:37 would there be two layers of AZs??
13:27:56 zhipeng: not layers. Just multiple AZs. Like a Venn diagram.
13:28:05 there should be one internal AZ in the top layer
13:28:17 no, same layer, because you could have one VM in multiple AZs
13:28:44 got it
13:29:09 how about one action to redesign the DB schema?
13:29:31 joehuang: I don't follow
13:29:33 according to the info in Keystone and the bottom OpenStack
13:29:49 joehuang: Oh, action item. I get it.
13:29:54 Yes, let's look at it a bit. I suggest that we study this and, if it makes sense, update the doc and change the db schema
13:30:26 gampel, joehuang: It should contain the endpoint cache at the very least
13:30:34 the tables in the google doc store a lot of duplicate info, a bit hard to maintain if something changes in Keystone or the bottom OpenStack
13:30:37 that's true
13:30:43 yes
13:30:51 I mentioned the cache.
13:31:25 #action study info on keystone and update the db schema design
13:31:53 thanks, zhipeng. I removed what I typed :)
13:32:13 joehuang :P
13:32:25 #topic BP
13:32:44 we have a very good start for the source code
13:33:18 we need a BP to link the first patch, framework, plugin
13:33:56 so that people can easily get involved
13:34:36 do we need one BP to cover all that? Or a BP per patch?
13:35:08 BP link, please review https://blueprints.launchpad.net/tricircle/+spec/new-design
13:35:39 #link https://blueprints.launchpad.net/tricircle/+spec/new-design
13:36:30 great
13:36:52 can we push it to the master branch?
13:36:55 we can share the BP on the mailing list, so it will be visible to all people.
13:37:17 ok. please merge on the master branch
13:37:57 and remove all the old code. Zhiyuan has already put a tag on master.
13:38:04 joehuang: Do you want to merge after committing to "experiment" or do you want to close experiment and have me resend it based on "master"?
13:38:26 OK, I'll rearrange the patches.
13:38:40 And clean up the code base.
13:38:46 how about the nova patch?
13:39:01 the nova patch in Nova-API
13:39:35 sorry that I was on a business trip, didn't review so much
13:39:40 joehuang: It's in progress. It's a very complex API.
13:40:11 for Monday and Tuesday
13:40:29 to saggi, you mean boot-vm?
13:40:47 this is the complex one indeed
13:40:58 we could collaborate on the other cascading service building blocks
13:41:07 sure
13:41:15 joehuang: For now I'm trying to get a skeleton up so we could start collaborating.
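[Editor's note] A toy illustration of the AZ scheme agreed above (13:24-13:28): each bottom site gets an automatic per-site AZ that locks scheduling to that site, and the site's inner AZs are exposed alongside it (overlapping sets, not layers). All names and data below are made up:

    def build_top_level_azs(bottom_sites):
        """bottom_sites: {site name: [inner AZ names that site reports]}"""
        top_azs = {}
        for site, inner_azs in bottom_sites.items():
            # Automatic, immutable per-site AZ covering the whole site.
            top_azs['az-%s' % site] = {'site': site, 'inner': None}
            # Inner AZs overlap the per-site AZ, so one VM can sit in
            # several AZs at once -- multiple AZs, not two layers.
            for az in inner_azs:
                top_azs['%s-%s' % (site, az)] = {'site': site, 'inner': az}
        return top_azs

    print(build_top_level_azs({'site1': ['az1', 'az2'], 'site2': ['az1']}))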
13:41:34 understand
13:42:14 I talked with core member Lingxian of Mistral
13:42:31 Yes, what does he think?
13:42:43 Mistral doesn't provide a lib, but works as a standalone service
13:43:18 that means we can only call its RESTful API or use its python client (the client that calls the RESTful API)
13:43:19 so using the REST API
13:43:48 it'll be a great challenge
13:44:35 it adds one more RESTful API calling hop
13:44:52 some extra complexity will be introduced
13:45:06 if we do run-time query caching anyway, this is not a big problem
13:45:12 especially for status refresh, etc.
13:45:51 we need a BP and detailed info about the run-time query
13:46:22 joehuang: Did you change the topic? Also add an action item for that BP
13:46:29 There are a lot of challenges we found for status
13:46:48 joehuang: We could bypass Mistral for status
13:46:57 since it doesn't require any transaction
13:47:49 could you please share your ideas on how to handle the status?
13:48:07 we need to add to the design the way we handle run-time queries
13:48:09 especially if a VM is in live-migration, or migrating a volume
13:49:35 if the migration task is running in Mistral
13:49:57 live migration is not a relevant use case in my opinion
13:50:26 We could define a caching policy per VM or VM group. That is good for high-latency situations. When we detect a low-latency situation we will start refreshing the cache at shorter intervals. Whatever the case, the user always gets the information from the cache.
13:50:37 so I am wondering if the idea works for status query
13:50:56 Status queries should bypass Mistral IMHO
13:51:21 I don't think they even have that use case.
13:51:29 Since it's not a real 'task'
13:51:47 where to define the policy, in the cascade service?
13:51:56 joehuang: Yes
13:52:16 there are multiple workers, multiple processes
13:52:36 so which one will do the caching, and is the cache in memory or in the DB?
13:53:18 joehuang: Depends on scale. We might need to use a caching service like redis.
13:54:08 joehuang: It's not information we care about persisting. But I suspect that for a huge number of nodes it will get too big to keep in memory.
13:54:16 so which cascade service will be responsible for which VM/VM group's cached status refreshment?
13:54:44 joehuang: Sharding, either by site or by VM_ID % cascade_service
13:54:52 I suggest that we add an action item to add a design for the cache and task execution, and then discuss if it makes sense
13:55:28 agree
13:55:33 +1
13:55:43 +1
13:55:49 +1
13:55:51 status is very important.
13:56:16 I think that we need to fix our CI to include pep8 testing
13:56:33 gampel: Or I can keep fixing your pep8 errors :)
13:56:44 :-D
13:57:15 CI includes pep8, doesn't it, Zhiyuan?
13:57:51 We need to add a tox configuration
13:57:56 for it to run
13:57:56 not set up yet. I am coding the DAL these days. But enabling pep8 is easy
13:58:39 what is DAL?
13:58:53 just add a script to run flake8 in our third-party CI
13:59:14 #action design for cache and task exec and then discuss
13:59:46 zhipeng: Data Access Layer - an abstraction of all the data sources in the top layer.
14:00:02 zhipeng: The cascade service DB, the nova DB, the neutron DB, etc.
14:00:08 saggi thx :)
14:01:23 let's have a short summary
14:01:29 I will take some time to enable pep8 tomorrow
14:01:53 zhiyuan: great
14:02:25 #action pep8 enable in ci
14:02:51 3 actions, is there anything more, or something missed?
14:04:32 should be ok for now joehuang
14:04:40 let's wrap up for the day :)
14:04:52 pls.
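[Editor's note] The "one more RESTful API calling hop" to Mistral (13:42-13:45) would look roughly like this against the Mistral v2 API. The service URL, workflow name, and payload shape are illustrative; real code would resolve the URL via Keystone and should be checked against the deployed Mistral version:

    import json

    import requests

    MISTRAL_URL = 'http://mistral:8989/v2'   # placeholder endpoint

    def start_workflow(token, workflow_name, wf_input):
        # POST /v2/executions starts a workflow run; the workflow input
        # is sent as a JSON-encoded string in this sketch.
        resp = requests.post(
            MISTRAL_URL + '/executions',
            headers={'X-Auth-Token': token,
                     'Content-Type': 'application/json'},
            data=json.dumps({'workflow_name': workflow_name,
                             'input': json.dumps(wf_input)}))
        resp.raise_for_status()
        # Poll GET /v2/executions/<id> afterwards for the run state.
        return resp.json()['id']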
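[Editor's note] saggi's two ideas at 13:50-13:54, a latency-adaptive refresh policy per VM/VM group and sharding refresh ownership by "VM_ID % cascade_service", in sketch form. Thresholds and names are invented; a large deployment might keep the cache in redis instead of process memory:

    import zlib


    class StatusCachePolicy(object):
        """Refresh interval adapts to the observed latency to the bottom
        site; readers always get the cached value, never a blocking query."""

        def __init__(self, slow_interval=60, fast_interval=5):
            self.slow = slow_interval   # seconds, for high-latency sites
            self.fast = fast_interval   # seconds, for low-latency sites

        def refresh_interval(self, observed_latency_ms):
            # 200 ms is an arbitrary illustrative threshold.
            return self.slow if observed_latency_ms > 200 else self.fast


    def owning_service(vm_id, num_cascade_services):
        """Pick which cascade service worker refreshes this VM's status."""
        return zlib.crc32(vm_id.encode('utf-8')) % num_cascade_services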
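[Editor's note] The "tox configuration for it to run" (13:57-13:58) is typically just a few lines of tox.ini; this minimal sketch assumes the package directory is named tricircle:

    [tox]
    envlist = pep8

    [testenv:pep8]
    deps = flake8
    commands = flake8 tricircle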
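[Editor's note] The DAL saggi describes at 13:58-14:00, one abstraction over the cascade service DB, the nova DB, the neutron DB, and the rest of the top layer, might be shaped like this. Purely illustrative; the actual tricircle code may differ:

    class DataSource(object):
        """One backing store in the top layer (cascade DB, nova DB, ...)."""

        def get_resource(self, resource_type, resource_id):
            raise NotImplementedError


    class DataAccessLayer(object):
        """Routes resource queries to whichever store owns that type."""

        def __init__(self):
            self._sources = {}   # resource type -> DataSource

        def register(self, resource_type, source):
            self._sources[resource_type] = source

        def get(self, resource_type, resource_id):
            return self._sources[resource_type].get_resource(
                resource_type, resource_id)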
14:05:34 I am not familiar with using IRC
14:05:43 just #endmeeting
14:05:46 thank you
14:05:52 but you should be the one typing it
14:05:53 ok, thank you all
14:06:01 bye
14:06:03 #endmeeting