13:05:58 <joehuang> #startmeeting tricircle
13:05:58 <openstack> Meeting started Wed Jul 22 13:05:58 2015 UTC and is due to finish in 60 minutes.  The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:05:59 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:06:02 <openstack> The meeting name has been set to 'tricircle'
13:06:17 <joehuang> #topic rollcall
13:06:28 <joehuang> #info joe
13:06:33 <zhiyuan> #info zhiyuan
13:06:34 <saggi> #info saggi
13:06:37 <gampel> #info gampel
13:07:09 <joehuang> #topic db
13:07:51 <joehuang> hello, let's discuss the table design of the cascade service
13:08:18 <joehuang> because Keystone already stores all information about endpoints
13:08:43 <gampel> we added the basic endpoint schema to the doc
13:08:58 <saggi> What do you mean by endpoint?
13:09:01 <saggi> A site?
13:09:04 <joehuang> so if we also store endpoint information in the cascade service, it may lead to duplication
13:09:32 <joehuang> a service endpoint is the public/internal/admin URL for one service like nova or cinder
13:09:47 <joehuang> not a site
13:10:25 <saggi> So you want us to just ask the bottom KeyStone?
13:10:35 <saggi> instead of storing it in the DB?
13:10:40 <gampel> can you please explain what is stored in Keystone? I am not sure I follow
13:10:43 <joehuang> so we may only store region info in the cascade service, and ask Keystone if required
13:11:02 <saggi> joehuang: What region info?
13:11:39 <joehuang> there is a table in Keystone called region; one OpenStack instance maps to one region
13:12:18 <joehuang> and each region will include services like nova, cinder, neutron, glance, ceilometer, and others
13:12:34 <gampel> can we get the URLs to access a region from Keystone?
13:12:43 <joehuang> each service has endpoints, i.e. the public/internal/admin URLs
13:12:55 <joehuang> hi gampel, you got my idea
13:13:14 <joehuang> we simply store the region, and cache the endpoint URL
13:13:37 <joehuang> if the cached URL can't be accessed (timeout), then refresh it from Keystone
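
To make the refresh-on-timeout idea above concrete, here is a minimal Python sketch. It is only an illustration: the in-memory CACHE dict and the fetch_endpoint_from_keystone() helper are hypothetical placeholders, not existing tricircle code.

    import requests

    CACHE = {}  # (region, service, interface) -> endpoint URL

    def get_endpoint(region, service, interface='public'):
        key = (region, service, interface)
        if key not in CACHE:
            # fetch_endpoint_from_keystone() is a hypothetical helper that
            # asks Keystone for the endpoint URL of a service in a region.
            CACHE[key] = fetch_endpoint_from_keystone(region, service, interface)
        return CACHE[key]

    def call_service(region, service, path):
        try:
            return requests.get(get_endpoint(region, service) + path, timeout=5)
        except requests.Timeout:
            # The cached URL may be stale: drop it, re-ask Keystone, retry once.
            CACHE.pop((region, service, 'public'), None)
            return requests.get(get_endpoint(region, service) + path, timeout=5)
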
13:13:42 <gampel> So we could store the access URL for the bottom stack in Keystone
13:13:58 <joehuang> Keystone already stores that
13:14:22 <saggi> joehuang: It's odd that Keystone already has that feature
13:14:31 <saggi> joehuang: What is it being used for?
13:15:31 <joehuang> once the user has been authenticated, they can retrieve the endpoint list and select one region to access
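
As a rough illustration of the Keystone data being discussed here (regions plus per-service endpoints), a v3 client can list both. A minimal sketch; the auth URL and credentials are placeholders, and attribute names follow the Keystone v3 API:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    ks = client.Client(session=session.Session(auth=auth))

    for region in ks.regions.list():
        print('region:', region.id)

    # Each endpoint carries its region, service and interface
    # (public/internal/admin), which is exactly what we would cache.
    for ep in ks.endpoints.list():
        print(ep.region_id, ep.service_id, ep.interface, ep.url)
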
13:16:32 <joehuang> hello, zhipeng, welcome back. Hope your babies are sleeping ;)
13:17:07 <zhipeng> joehuang thx they are sleeping like babies :P
13:17:07 <saggi> The cascade service will need to be able to access the information for all regions.
13:17:16 <saggi> zhipeng: :)
13:17:31 <saggi> joehuang: Is it possible to query for multiple regions?
13:17:45 <joehuang> sure, that's Keystone's ability
13:18:36 <gampel> So you mean when an admin adds a cascaded site, we will configure a new region in Keystone and store all the information there
13:18:36 <joehuang> for the cascade service, we can provide a way to specify which regions will be managed by it
13:18:59 <saggi> joehuang: So for each region there are multiple tenants or is that orthogonal?
13:19:10 <joehuang> yes, the admin first manages these regions and endpoints in Keystone
13:19:31 <joehuang> each tenant may cross several regions
13:19:47 <joehuang> any region may support multi-tenancy
13:19:56 <gampel> But we want to create one cascading API to manage adding new sites
13:20:25 <gampel> is it possible to create the region in Keystone from the cascading OpenStack service?
13:20:48 <joehuang> no need to do this.
13:21:01 <joehuang> that's what Keystone is for
13:21:28 <joehuang> but the cascade service needs to know which regions it will cover
13:21:41 <saggi> joehuang: And generate appropriate AZ information
13:21:42 <joehuang> this configuration API is needed
13:22:18 <gampel> How do you configure regions in Keystone?
13:22:19 <joehuang> AZ information should also be configured in the bottom OpenStack
13:22:28 <joehuang> and retrieved by the cascade service
13:24:21 <saggi> joehuang: I wanted to talk about that. I was wondering why not just expose all the compute nodes and create a logical immutable AZ for them. Do you follow?
13:24:45 <joehuang> #link http://docs.openstack.org/cli-reference/content/keystoneclient_commands.html
13:24:53 <gampel> Let's first make this topic an action item. I suggest that we study this a bit and, if it makes sense, update the doc and change the DB schema
13:25:09 <saggi> gampel: +1
13:25:58 <joehuang> to saggi: you mean nodes in the bottom OpenStack or the top-layer OpenStack?
13:26:07 <joehuang> agree
13:26:09 <saggi> bottom layer
13:26:16 <saggi> joehuang: ^^
13:26:23 <saggi> to expose them as is in the top layer
13:26:44 <saggi> With an automatic AZ to be able to lock scheduling to a specific site.
13:26:45 <joehuang> yes, it's possible in the new design
13:27:10 <saggi> joehuang: great
13:27:17 <joehuang> because all nodes in the bottom OpenStack will be grouped into AZs
13:27:24 <saggi> yes
13:27:27 <gampel> and then we will create a default AZ per site, but allow creating inner AZs
13:27:33 <joehuang> so we can get AZ info from all the bottom OpenStacks
13:27:37 <zhipeng> would there be two layers of AZs??
13:27:56 <saggi> zhipeng: not layers. Just multiple AZs. Like a Venn diagram.
13:28:05 <joehuang> there should be one internal AZ in the top layer
13:28:17 <gampel> no, same layer, because you could have one VM in multiple AZs
13:28:44 <zhipeng> got it
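
A conceptual sketch of the idea just discussed: read the AZ layout of a bottom site with novaclient and create a per-site AZ (as a host aggregate) in the top layer. The region names and the aggregate/AZ naming are made up for illustration, and `sess` is a keystoneauth session like the one in the earlier sketch:

    from novaclient import client as nova_client

    bottom = nova_client.Client('2', session=sess, region_name='Site2')
    top = nova_client.Client('2', session=sess, region_name='Top')

    # Read the AZ layout of the bottom site, hosts included.
    for az in bottom.availability_zones.list(detailed=True):
        print(az.zoneName, sorted((az.hosts or {}).keys()))

    # Create the automatic per-site AZ in the top layer; compute nodes
    # exposed from the bottom site would then be added to this aggregate.
    agg = top.aggregates.create('site2', availability_zone='az-site2')
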
13:29:09 <joehuang> how about one action item to redesign the DB schema?
13:29:31 <saggi> joehuang: I don't follow
13:29:33 <joehuang> according to the info in Keystone and the bottom OpenStack
13:29:49 <saggi> joehuang: Oh, action item. I get it.
13:29:54 <gampel> Yes, let's look at it a bit. I suggest that we study this and, if it makes sense, update the doc and change the DB schema
13:30:26 <saggi> gampel, joehuang: It should contain the endpoint cache at the very least
13:30:34 <joehuang> the tables in the Google doc store a lot of duplicate info; it's a bit hard to maintain if something changes in Keystone or the bottom OpenStack
13:30:37 <gampel> that's true
13:30:43 <joehuang> yes
13:30:51 <joehuang> I mentioned cache.
13:31:25 <zhipeng> #action study info on keystone and update the db schema design
13:31:53 <joehuang> thanks, zhipeng. I removed what I typed :)
13:32:13 <zhipeng> joehuang :P
13:32:25 <joehuang> #topic BP
13:32:44 <joehuang> we have a very good start for the source code
13:33:18 <joehuang> we need a BP to link the first patch: the framework and plugin
13:33:56 <joehuang> so that people can easily get involved
13:34:36 <zhipeng> do we need one BP to cover all that? Or BP per patch?
13:35:08 <gampel> BP link, please review: https://blueprints.launchpad.net/tricircle/+spec/new-design
13:35:39 <zhipeng> #link https://blueprints.launchpad.net/tricircle/+spec/new-design
13:36:30 <joehuang> great
13:36:52 <gampel> can we push it to the master branch?
13:36:55 <joehuang> we can share the BP on the mailing list, so it will be visible to all people.
13:37:17 <joehuang> ok. please merge on master branch
13:37:57 <joehuang> and remove all old code. Zhiyuan has already put a tag on the master.
13:38:04 <saggi> joehuang: Do you want to merge after committing to "experiment" or do you want to close experiment and have me resend it based on "master"?
13:38:26 <saggi> OK, I'll rearrange the patches.
13:38:40 <saggi> And clean up the code base.
13:38:46 <joehuang> how about the nova patch?
13:39:01 <joehuang> the nova patch in Nova-API
13:39:35 <joehuang> sorry, I was on a business trip and didn't review much
13:39:40 <saggi> joehuang: It's in progress. It's a very complex API.
13:40:11 <joehuang> on Monday and Tuesday
13:40:29 <joehuang> to saggi, you mean boot-vm?
13:40:47 <joehuang> this is the complex one indeed
13:40:58 <gampel> we could collaborate on the other cascading service building blocks
13:41:07 <joehuang> sure
13:41:15 <saggi> joehuang: For now I'm trying to get a skeleton up so we could start collaborating.
13:41:34 <joehuang> understand
13:42:14 <joehuang> I talked with Lingxian, a core member of Mistral
13:42:31 <gampel> Yes, what does he think?
13:42:43 <joehuang> Mistral doesn't provide a lib; it works as a standalone service
13:43:18 <joehuang> that means we can only call its RESTful API or use its python client (the client just calls the RESTful API)
13:43:19 <gampel> so using the REST API
13:43:48 <joehuang> it'll be a great challenge
13:44:35 <joehuang> it adds one more RESTful API call hop
13:44:52 <joehuang> some extra complexity will be introduced
13:45:06 <gampel> if we do runtime query caching anyway, this is not a big problem
13:45:12 <joehuang> especially for status refresh, etc.
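
For reference, the extra hop looks roughly like this with python-mistralclient (which itself just wraps the REST API). The workflow name, inputs and client arguments below are hypothetical, and exact signatures vary between mistralclient versions:

    from mistralclient.api import client as mistral_client

    mistral = mistral_client.client(mistral_url='http://mistral:8989/v2',
                                    username='admin', api_key='secret',
                                    project_name='admin',
                                    auth_url='http://keystone:5000/v3')

    # Kick off a (hypothetical) workflow through the REST API...
    execution = mistral.executions.create('migrate_volume',
                                          workflow_input={'volume_id': 'vol-1'})

    # ...and the status has to be polled back over REST as well,
    # which is the extra complexity mentioned above.
    state = mistral.executions.get(execution.id).state
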
13:45:51 <joehuang> we need a BP and detailed info about the runtime query
13:46:22 <saggi> joehuang: Did you change the topic? Also add an action item for that BP
13:46:29 <joehuang> We found a lot of challenges for status
13:46:48 <saggi> joehuang: We could bypass Mistral for status
13:46:57 <saggi> since it doesn't require any transaction
13:47:49 <joehuang> could you please share your ideas on how to handle the status
13:48:07 <gampel> we need to add the way we handle runtime queries to the design
13:48:09 <joehuang> especially if a VM is in live migration, or a volume is being migrated
13:49:35 <joehuang> if the migration task is running in Mistral
13:49:57 <gampel> live migration is not a relevant use case in my opinion
13:50:26 <saggi> We could define a caching policy per VM or VM group. That is good for high-latency situations. When we detect a low-latency situation we will start refreshing the cache at shorter intervals. Whatever the case, the user always gets the information from the cache.
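
A minimal sketch of that policy, assuming the refresh loop lives in the cascade service; the latency threshold and intervals are arbitrary numbers, and fetch_status is a placeholder for a call into the bottom OpenStack:

    import time

    class StatusCache(object):
        def __init__(self, fast=2.0, slow=30.0, latency_threshold=0.2):
            self.fast, self.slow = fast, slow
            self.latency_threshold = latency_threshold
            self.entries = {}  # vm_id -> (status, next_refresh_time)

        def get(self, vm_id):
            # The user always reads the cache, never the bottom site.
            status, _ = self.entries.get(vm_id, (None, 0))
            return status

        def maybe_refresh(self, vm_id, fetch_status):
            _, next_refresh = self.entries.get(vm_id, (None, 0))
            if time.time() < next_refresh:
                return
            start = time.time()
            status = fetch_status(vm_id)  # placeholder: query the bottom site
            latency = time.time() - start
            # Low latency -> refresh often; high latency -> back off.
            interval = self.fast if latency < self.latency_threshold else self.slow
            self.entries[vm_id] = (status, time.time() + interval)
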
13:50:37 <joehuang> so I am wondering whether the idea works for status query
13:50:56 <saggi> Status query should bypass Mistral IMHO
13:51:21 <saggi> I don't think they even have that use case.
13:51:29 <saggi> Since it's not a real 'task'
13:51:47 <joehuang> where to define the policy, in cascade service?
13:51:56 <saggi> joehuang: Yes
13:52:16 <joehuang> there are multiple workers and multiple processes
13:52:36 <joehuang> so which one will do the caching, and is the cache in memory or in the DB?
13:53:18 <saggi> joehuang: Depends on scale. We might need to use a caching service like redis.
13:54:08 <saggi> joehuang: It's not information we care about persisting. But I suspect that for a huge number of nodes it will get too big to keep in memory.
13:54:16 <joehuang> so which cascade service will be responsible for refreshing the cached status of which VM/VM group?
13:54:44 <saggi> joehuang: Sharding, either by site or by VM_ID % cascade_service
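
The VM_ID % cascade_service variant is a one-liner; a stable hash is used so every process picks the same owner (the instance list is a placeholder):

    import zlib

    services = ['cascade-0', 'cascade-1', 'cascade-2']  # placeholder instances

    def owner_of(vm_id):
        # Stable across processes, unlike the built-in hash().
        return services[zlib.crc32(vm_id.encode()) % len(services)]
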
13:54:52 <gampel> I suggest we add an action item to design the cache and task execution, and then discuss whether it makes sense
13:55:28 <joehuang> agree
13:55:33 <zhiyuan> +1
13:55:43 <zhipeng> +1
13:55:49 <saggi> +1
13:55:51 <joehuang> status is very important.
13:56:16 <gampel> I think that we need to fix our CI to include pep8 testing
13:56:33 <saggi> gampel: Or I can keep fixing your pep8 errors :)
13:56:44 <gampel> :-D
13:57:15 <joehuang> CI includes pep8, doesn't it, Zhiyuan?
13:57:51 <saggi> We need to add tox configuration
13:57:56 <saggi> for it to run
13:57:56 <zhiyuan> not set up yet. I am coding the DAL these days. But enabling pep8 is easy
13:58:39 <zhipeng> what is DAL?
13:58:53 <zhiyuan> just add a script to run flake8 in our third party CI
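
For reference, the usual OpenStack way to wire this up is a pep8 tox environment that the CI script can invoke as `tox -e pep8`; a sketch, since the project's actual tox.ini may differ:

    [tox]
    envlist = pep8

    [testenv:pep8]
    deps = flake8
    commands = flake8 tricircle
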
13:59:14 <joehuang> #action design for cache and task exec and then discuss
13:59:46 <saggi> zhipeng: Data Access Layer - an abstraction of all the data sources in the top layer.
14:00:02 <saggi> zhipeng: The cascade service DB, the nova DB, the neutron DB, etc.
14:00:08 <zhipeng> saggi thx :)
14:01:23 <joehuang> let's have a short summary
14:01:29 <zhiyuan> I will take some time to enable pep8 tomorrow
14:01:53 <saggi> zhiyuan: great
14:02:25 <joehuang> #action pep8 enable in ci
14:02:51 <joehuang> 3 actions; are there any more, or did we miss something?
14:04:32 <zhipeng> should be ok for now joehuang
14:04:40 <zhipeng> let's wrap up for the day :)
14:04:52 <joehuang> pls.
14:05:34 <joehuang> I am not familiar with using IRC
14:05:43 <zhipeng> just #endmeeting
14:05:46 <gampel> thank you
14:05:52 <zhipeng> but you should be the one typing it
14:05:53 <joehuang> ok, thank you all
14:06:01 <gampel> bye
14:06:03 <joehuang> #endmeeting