16:04:46 <dougwig> #startmeeting kosmos
16:04:47 <openstack> Meeting started Tue Aug 2 16:04:46 2016 UTC and is due to finish in 60 minutes. The chair is dougwig. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:04:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:04:51 <openstack> The meeting name has been set to 'kosmos'
16:05:28 <RamT_> 0/
16:05:30 <RamT_> o/
16:05:42 <dougwig> today we were going to review the current API doc. have folks had time to read that?
16:05:45 <mpbnka> o/
16:06:01 <RamT_> yes
16:06:10 <mpbnka> yes
16:06:41 <RamT_> agenda https://review.openstack.org/#/c/217413/
16:06:49 <dougwig> does someone have a link handy?
16:07:07 <mpbnka> https://github.com/openstack/kosmos-specs/blob/master/specs/liberty/api.rst
16:07:12 <RamT_> https://github.com/openstack/kosmos-specs/blob/master/specs/liberty/api.rst
16:07:16 <mpbnka> :)
16:07:25 <dougwig> perfect.
16:07:49 <venkat> :)
16:08:24 <RamT_> 1. do we call the policies gslbs?
16:08:41 <RamT_> I think they need to be called policies
16:08:44 <RamT_> rather than gslbs
16:09:04 <dougwig> the basic notion is a root object of gslb with attached pools, which have attached members and monitors.
16:09:35 <dougwig> by policy, you mean what? the dns algorithm/meta-data? or other?
16:09:48 <RamT_> no, the api name itself
16:10:04 <RamT_> like http://example.gslb.openstack.org/v0.1/policies
16:10:10 <RamT_> rather than http://example.gslb.openstack.org/v0.1/gslbs
16:11:05 <RamT_> as we are not creating loadbalancers but we are creating policies on gslbs
16:11:18 <dougwig> well, we have a structure of gslb -> pools -> { monitors, members }. i could see a structure of gslb -> { policies, pools -> { monitors, members } }, but i think that policy -> pool would be a little odd.
16:11:53 <dougwig> it has to start somewhere, and policy seems a little generic for a root object?
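For reference, the object hierarchy dougwig describes (gslb -> pools -> { monitors, members }) can be sketched as plain Python dataclasses. All class and field names below are illustrative, not taken from the kosmos models under review:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of the hierarchy discussed above: a root GSLB
# object owns pools, and each pool owns members and monitors.
# Names and fields are hypothetical stand-ins for the spec's models.

@dataclass
class Member:
    address: str
    port: int

@dataclass
class Monitor:
    check_type: str  # e.g. "HTTP", "TCP", "PING"
    interval: int = 30

@dataclass
class Pool:
    name: str
    members: List[Member] = field(default_factory=list)
    monitors: List[Monitor] = field(default_factory=list)

@dataclass
class GSLB:
    name: str
    fqdn: str
    pools: List[Pool] = field(default_factory=list)

gslb = GSLB(
    name="example",
    fqdn="www.example.org.",
    pools=[Pool("us-east",
                members=[Member("192.0.2.10", 80)],
                monitors=[Monitor("HTTP")])],
)
```

This also shows why "policy" is awkward as the root: the gslb is the thing everything else hangs off, and a policy would naturally be one more attribute or child of it.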
16:12:35 <RamT_> I don't have an issue with gslbs except that the name seems odd
16:12:37 <RamT_> moving on
16:12:45 <RamT_> what is flavor?
16:13:19 <RamT_> in the loadbalancer snippet
16:13:37 <RamT_> https://github.com/openstack/kosmos-specs/blob/master/specs/liberty/api.rst#load-balancer-json-snippet
16:13:57 <mpbnka> is it the same as a nova flavor or something else
16:13:58 <dougwig> meant to differentiate between types of lb's, e.g. an open source VM vs a hardware accelerated box. but i don't think we use it for anything yet.
16:14:16 <dougwig> RamT_: right, i hear you on the name. names are hard. :)
16:14:40 <RamT_> dougwig: :)
16:14:58 <RamT_> moving on to https://github.com/openstack/kosmos-specs/blob/master/specs/liberty/api.rst#pool-member-json-snippet---neutron-lbaas
16:15:12 <RamT_> I do not agree with the type in there
16:15:17 <venkat> flavour is available in load balancer and pool..
16:15:46 <RamT_> why is it important for the tenant to know which type it is
16:15:51 <dougwig> one sec, digging for the models review.
16:16:06 <RamT_> ok
16:16:37 <dougwig> please also refer to this, which was mugsie's first review actually implementing the models:
16:16:38 <dougwig> https://github.com/openstack/kosmos/tree/master/kosmos/objects
16:18:18 <dougwig> RamT_: ok, i don't think 'type' was meant to be as implementation specific as those examples. it could've also been "gold" vs "silver", and mapped to the appropriate backend driver, e.g. similar to "provider" in neutron.
16:18:29 <mugsie> yeah, that was the idea
16:18:58 <RamT_> provider could be a10 or att or f5
16:19:00 <dougwig> venkat: no idea why flavor is in both, good catch. mugsie, did you have a reason for that?
16:19:19 <dougwig> RamT_: or whatever the operator wants to name it, since it's user visible.
16:19:31 <mugsie> eh
16:19:36 <mugsie> dougwig: I could have?
16:20:01 <RamT_> how does that impact the api?
16:20:45 <mugsie> it can be removed entirely, as it is not written yet :)
16:21:04 <dougwig> RamT_: in /etc/kosmos/kosmos.conf, you'd have a list of provider/types and what driver they map to, and presumably some way to discover the list of types (if not, we need to add that, if we keep it.) that'd let horizon create a dialog box with a drop-down of the list of available types, and kosmos would be able to map it to a specific backend
16:21:04 <dougwig> implementation.
16:21:30 <dougwig> venkat: sounds like a follow-up review for the spec is in order for that one. :)
16:22:40 <mpbnka> do we intend to keep the records even after they are deleted?
16:23:04 <mugsie> mpbnka: well, most openstack projects keep stuff in the DB post delete
16:23:05 <RamT_> dougwig: I think tenants should be masked out of that information?
16:23:20 <mugsie> RamT_: masked out of what?
16:23:56 <RamT_> the backend provider, be it a10 or f5, has no impact on the tenant
16:23:59 <dougwig> the tenant is usually the one that picks which, otherwise you'd write yourself a little multiplexing driver.
16:24:15 <RamT_> hmm
16:24:20 <dougwig> presumably if there is only one, it gets a default, and the UI hides it.
16:24:28 <RamT_> ok
16:24:31 <RamT_> got it
16:24:37 <venkat> dougwig/mugsie, if type is not there then how do we get stats, or segregate at all, if multiple providers are enabled in kosmos
16:24:53 <mugsie> well, I would say if there is one, it gets called "default" and it is shown
16:25:06 <mugsie> i do not know why you would not show it
16:25:21 <mugsie> (most of openstack will show this level of detail)
16:25:46 <dougwig> api or cli, maybe. most of openstack doing something doesn't make it good UI, IMO.
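A minimal sketch of the kosmos.conf mapping dougwig describes, with a loader that turns it into the type -> driver table horizon and the API dispatcher would consume. The section names, option names, and driver paths here are all hypothetical, not the real kosmos configuration schema:

```python
import configparser

# Hypothetical kosmos.conf fragment: user-visible "type" names
# mapped to backend drivers, as described above. Every name here
# (sections, options, driver paths) is illustrative.
CONF = """
[providers]
enabled_types = gold, silver

[type:gold]
driver = kosmos.drivers.f5.F5Driver

[type:silver]
driver = kosmos.drivers.haproxy.HAProxyDriver
"""

def load_type_map(raw):
    """Return {type_name: driver_path} from a conf string."""
    cp = configparser.ConfigParser()
    cp.read_string(raw)
    types = [t.strip() for t in cp["providers"]["enabled_types"].split(",")]
    return {t: cp["type:%s" % t]["driver"] for t in types}

type_map = load_type_map(CONF)
# horizon could render list(type_map) as a drop-down, and the API
# would dispatch a request carrying type="gold" to type_map["gold"].
```

This is also where the "discover the list of types" gap dougwig mentions would plug in: an API call that simply returns `list(type_map)`.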
:)
16:26:20 <venkat> mugsie, maybe you can mask it to visible or not based on user roles like RBAC
16:26:51 <venkat> only admin can view type in responses etc
16:26:56 <mugsie> dougwig: oh, I don't really care about horizon at this point - it's just the API
16:27:13 <RamT_> mugsie: not that I have any opposition to showing the detail, though it is of no use for the tenant.
16:27:18 <mugsie> I am not sure why you would want to hide it
16:27:25 <mugsie> well, it could be
16:27:42 <RamT_> but if the tenants can choose where they want to create the lb then it's a different scenario..
16:27:50 <dougwig> even if it wasn't selectable, wouldn't you want it in the GET output? even nova will tell you your compute host.
16:27:52 <mugsie> "why does gslb #1 cost me 3x gslb #2"
16:28:09 <mpbnka> :)
16:28:53 <RamT_> I was under the impression that the driver is a one-time configuration in kosmos.conf
16:29:00 <RamT_> not something that the user can pick and choose
16:29:00 <mugsie> we should not remove detail based on what "is not useful" to end users - end users will always surprise you ;)
16:29:08 <dougwig> btw, i'm going to assume that if anyone feels strongly about any of these changes, said person will file a review change. ok?
16:29:21 <RamT_> ok
16:29:21 <mugsie> there was a goal to support multiple drivers
16:29:32 <RamT_> interesting
16:30:08 <venkat> Yes.. it's good to have multiple drivers in one place.. I agree
16:30:19 <RamT_> should we have another api to activate or deactivate the gslb?
16:30:38 <mugsie> activate / deactivate?
16:30:42 <mugsie> delete / create?
16:30:52 <RamT_> create but in an inactive state
16:30:58 <RamT_> like turned off for the time being
16:31:13 <dougwig> an up/down? i think that's useful. i don't know if it needs to go into the first implementation, as long as we're not painting ourselves into a corner. which i don't think we are.
16:31:15 <mugsie> i think that could be added as a stretch goal - that is definitely not something that would be easy to do for a first version
16:31:31 <dougwig> mugsie: it's easy on the neutron side. :)
16:31:43 <RamT_> ok
16:32:01 <mugsie> it is worth noting I added "v0.1" as I knew that the API would have to be changed after we finish version 1
16:32:27 <dougwig> aye.
16:32:28 <RamT_> when are we aiming to finish version 1?
16:32:59 <dougwig> depends on how active we can all be. i think newton is not possible at this point.
16:33:19 <RamT_> a few of us are planning to work only on kosmos
16:33:20 <dougwig> does anyone have a major problem with the overall flow of the API, aside from the earlier toplevel name issue?
16:33:46 <RamT_> nope.. no problem with the api on first cut
16:33:53 <dougwig> should be pretty quick then, and ocata should definitely be possible.
16:33:56 <RamT_> we can always go back and review once again
16:34:49 <dougwig> switching api gears slightly, while mugsie is around, we had talked about pecan, because most of openstack uses pecan. i've used both a bit, and definitely have a preference for flask (and someone else pointed me at falcon as being even simpler.) why were you recommending against flask again? anyone else have opinions?
16:35:50 <RamT_> flask is my option
16:36:14 <mpbnka> I support both; we are moving with pecan in most of openstack I suppose
16:36:37 <RamT_> I don't mind pecan as well..
16:37:49 <dougwig> i won't die on any of them, certainly.
16:38:11 <RamT_> and I want to ask about one more thing
16:38:22 <RamT_> the endpoint and kosmos-status-check services
16:38:31 <dougwig> o
16:38:32 <dougwig> ok
16:38:33 <dougwig> shoot
16:38:34 <mugsie> dougwig: sorry got distracted
16:38:53 <mpbnka> https://github.com/openstack/kosmos-specs/blob/master/specs/mitaka/sysarch.rst#overview-diagram
16:38:53 <mugsie> go with whatever imho now.
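Since the flask vs. pecan vs. falcon choice was left open, here is a deliberately framework-neutral sketch of the v0.1 routing surface being debated, as a bare WSGI callable. The route follows the `/v0.1/gslbs` path from the spec discussion above; the handler and in-memory store are stubs, not the kosmos implementation:

```python
import json

# Framework-neutral WSGI sketch of the v0.1 API surface. Any of
# flask, pecan, or falcon would wrap the same route in a few lines.
# GSLBS is an illustrative in-memory stand-in for the real store.
GSLBS = {}

def app(environ, start_response):
    path = environ["PATH_INFO"]
    method = environ["REQUEST_METHOD"]
    if path == "/v0.1/gslbs" and method == "GET":
        body = json.dumps({"gslbs": list(GSLBS.values())})
        status = "200 OK"
    else:
        body = json.dumps({"error": "not found"})
        status = "404 Not Found"
    data = body.encode()
    start_response(status, [("Content-Type", "application/json"),
                            ("Content-Length", str(len(data)))])
    return [data]
```

Keeping the handlers this thin is also what makes the framework choice low-stakes: the versioned prefix ("v0.1") is in the path, and swapping routers later does not change the resource model.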
16:39:17 * mugsie has gotten slightly disillusioned with the consistency of openstack
16:40:03 <RamT_> can one of you elaborate on these services?
16:40:15 <dougwig> mugsie: you drove a flaming torch into a hornet's nest, man. :)
16:40:24 <mugsie> endpoints - HTTP API endpoints
16:40:30 <mugsie> dougwig: that I did
16:41:14 <mugsie> status check would be binaries that are run to check that the endpoints are running
16:41:49 <mugsie> so, the idea would be to allow different types of checks
16:41:57 <RamT_> status check is not for the gslb endpoint?
16:42:02 <mugsie> HTTP(S), TCP, ping, etc
16:42:04 <mugsie> no
16:42:12 <mugsie> for the endpoints being loadbalanced
16:42:29 <RamT_> ok so not api end points
16:42:37 <RamT_> loadbalanced end points
16:42:47 <mugsie> yeah
16:42:55 <mugsie> sorry. with the diagram it is easier
16:43:13 <RamT_> I was thinking along the same lines
16:43:29 <mugsie> if you git clone, then run tox -e docs, in docs/html/ there will be these pages with the diagrams on the page
16:43:32 <RamT_> but we are going to use the status on lbaas for checking the status...
16:43:41 <RamT_> ok
16:43:46 <mugsie> I can render them and put them up online somewhere
16:44:05 <RamT_> ok
16:44:07 <RamT_> that would help
16:44:42 <RamT_> do you have any specific reason for having the conductor?
16:44:59 <venkat> yes.. that will help to understand better.
16:45:30 <mugsie> it is easier to limit DB access to a single service
16:45:33 <dougwig> RamT_: our first pass was going to be single worker, but eventually we have to decouple api/work if we expect it to scale.
16:45:33 <RamT_> do you see the db being accessed by any other service apart from the engine?
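mugsie describes kosmos-status-check as binaries running pluggable checks (HTTP(S), TCP, ping, ...) against the load-balanced endpoints, not the API endpoints. A minimal sketch of such a pluggable check registry, showing only a TCP connect check; the registry, decorator, and function names are all hypothetical:

```python
import socket

# Sketch of a pluggable status-check registry in the spirit of the
# kosmos-status-check service described above: one callable per
# check type. Only a TCP connect check is implemented here; the
# real service would add HTTP(S), ping, etc. Names are illustrative.
CHECKS = {}

def register(name):
    def deco(fn):
        CHECKS[name] = fn
        return fn
    return deco

@register("TCP")
def tcp_check(host, port, timeout=2.0):
    """A member is 'up' if a TCP connection can be opened to it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_member(check_type, host, port):
    """Dispatch to the registered check for this type."""
    return CHECKS[check_type](host, port)
```

New check types then only need a `@register("HTTPS")`-style function, which fits the "allow different types of checks" goal without touching the dispatch path.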
16:45:50 <mugsie> engine would not access the DB
16:46:01 <mugsie> it would call the conductor to access the DB
16:47:13 <RamT_> ok
16:47:36 <RamT_> I think I have all questions answered
16:47:45 <RamT_> if not I'll reach out to you guys
16:47:49 <mugsie> do
16:48:27 <dougwig> ok, next steps, please take a look at this review:
16:48:27 <dougwig> https://review.openstack.org/#/c/272565/
16:48:28 <RamT_> let's move forward with the api implementation
16:48:42 <RamT_> dougwig: ok
16:48:43 <dougwig> also, if someone wants to take a crack at a flask/pecan implementation around the stuff we have, please do.
16:49:20 <venkat> mugsie, when there are new git reviews on kosmos.. include us as code reviewers.. so that we will be in sync on activities etc
16:49:39 <venkat> dougwig, nice evaluation (pecan vs falcon) by the zaqar team - https://wiki.openstack.org/wiki/Zaqar/pecan-evaluation
16:49:59 <mugsie> venkat: sure. you can also add a project to your "watched projects" list
16:50:02 <dougwig> venkat: you can subscribe to the project, and you'll get emails automatically.
16:50:06 <mugsie> that way you get an email
16:50:39 <dougwig> alright, anything else, or shall we adjourn?
16:50:53 <mpbnka> that's all
16:51:26 <dougwig> alright, thanks folks, i'm excited to see activity.
16:51:43 <mugsie> ++
16:51:47 <RamT_> thank you. Anyone anything else?
16:51:58 <dougwig> #endmeeting
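The engine/conductor split mugsie describes (the engine never touches the DB; it calls the conductor, the only service holding a DB connection) can be sketched like this. The class names, methods, and the in-memory sqlite stand-in are illustrative, not the kosmos services:

```python
import sqlite3

# Sketch of the conductor pattern described above: the engine holds
# no DB handle and goes through the conductor for every read/write,
# so DB access is confined to one service. All names here are
# illustrative; sqlite3 stands in for the real database and the
# direct method calls stand in for RPC.

class Conductor:
    """Sole owner of the database connection."""
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE gslb (name TEXT, status TEXT)")

    def create_gslb(self, name):
        self.db.execute("INSERT INTO gslb VALUES (?, 'PENDING')", (name,))

    def set_status(self, name, status):
        self.db.execute("UPDATE gslb SET status=? WHERE name=?",
                        (status, name))

    def get_status(self, name):
        row = self.db.execute(
            "SELECT status FROM gslb WHERE name=?", (name,)).fetchone()
        return row[0] if row else None

class Engine:
    """Does the work, but only via conductor calls - no DB handle."""
    def __init__(self, conductor):
        self.conductor = conductor

    def provision(self, name):
        self.conductor.create_gslb(name)
        # ... drive the backend driver here ...
        self.conductor.set_status(name, "ACTIVE")
```

This is also what makes the later api/worker decoupling dougwig mentions tractable: any number of engines can scale out while schema and DB access stay behind one service.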