Thursday, 2017-07-20

00:48 *** wanghao has joined #openstack-mogan
00:57 *** litao__ has joined #openstack-mogan
00:58 *** wanghao has quit IRC
00:58 *** wanghao has joined #openstack-mogan
01:10 <zhenguo> morning mogan!
01:18 <wanghao> moring
01:23 <litao__> morning
01:26 <liusheng> morning
01:32 *** harlowja has quit IRC
01:37 <openstackgerrit> wanghao proposed openstack/mogan master: Support keypair quota in Mogan(part three)  https://review.openstack.org/485112
01:55 <zhenguo> liusheng: sems the placement rp query only support member_of for aggregates, which means retrieving rp associated with at least one of the aggregates
01:56 <zhenguo> liusheng: but in fact, when filtering the aggregates ourselves, we may need nodes in all aggregates
01:58 <liusheng> zhenguo: let me check
01:58 <zhenguo> liusheng: thanks
01:59 <zhenguo> liusheng: but in some senario, we need one of them
01:59 <zhenguo> liusheng: aggregates maybe got one same metadata key/value pair
02:00 <zhenguo> liusheng: like agg1 has az=bj and GPU, agg2 has az=bj, then requesting, we want to get all of them.
02:00 <zhenguo> hi all, the weekly meeting will happen soon, please move to openstack-meeting
02:00 <liusheng> zhenguo: for the rps and aggregates, they are N:N relationship, right ?
02:00 <zhenguo> liusheng: yes
02:01 <liusheng> zhenguo: hah, almost forget that
02:01 <zhenguo> liusheng: hah, we can discuss later
02:52 *** harlowja has joined #openstack-mogan
03:06 <wanghao> ha I'm later,   internal meeting....
03:12 <zhenguo> wanghao: hah
03:13 <zhenguo> liusheng: member_of will get those resource providers that are associated with any of the list of aggregate uuids provided
03:13 <zhenguo> liusheng: is that what we expected?
03:15 <wanghao> zhenguo: I hava a question,  I find that keypair didn't have project_id,  I think maybe we need it.
03:15 <wanghao> zhenguo: since we can control the quota based the project_id
03:16 <zhenguo> wanghao: seems liusheng copied that from nova, not sure how
03:16 <liusheng> zhenguo: I think there can meet our requirement, if the list of aggregates are only 1 item, we can get the rps of a specified aggregate, right ?
03:17 <wanghao> liusheng: god sheng,  do you think we need project_id in keypair?
03:17 <liusheng> wanghao: seems in Nova, keypair only have user_id
03:17 <zhenguo> liusheng: right, but if not one aggregate, how can we handle it?
03:17 <wanghao> liusheng: if without project_id,  so we will have a overall quota for all projects.
03:17 <liusheng> zhenguo: do we need that situation ?
03:18 <zhenguo> liusheng: like we may have an agg1 with az=1 and another agg2 with GPU=true
03:18 <liusheng> wanghao: :( not very sure about the Nova implemenation
03:18 <zhenguo> liusheng: then we request a server in az1 with GPU
03:18 <zhenguo> liusheng: but if az1 and GPU in one agg with different metadata is ok
03:19 <wanghao> liusheng: zhenguo: well,  I prefer to add the project_id in keypair,  that will no harm for mogan.
03:19 <liusheng> zhenguo: if so, how about split that process to two steps ?
03:19 <wanghao> liusheng: zhenguo: I will add it in quota patch,  you can have a look about it.
03:19 <zhenguo> liusheng: wdyt about wanghao's proposal?
03:20 <liusheng> zhenguo: we can filter with ag1 and then filter with ag2, then select the intersection
03:20 <liusheng> zhenguo, wanghao let me check how Nova does
03:21 <zhenguo> liusheng: but the reason we choose placement is for one sql, hah
03:31 <liusheng> zhenguo: hah, will confirm the placement api, seems placement api is far from perfect
03:32 <zhenguo> liusheng: I just read the code, it can only support list rps in any of the agg
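[editor's note] The member_of semantics discussed above (placement returns resource providers in ANY of the given aggregates, while Mogan sometimes needs providers in ALL of them) can be sketched as the two-step intersection workaround liusheng proposes. This is a minimal illustrative sketch over plain dicts and sets, not placement client code; the function names are hypothetical.

```python
def rps_in_any(aggregate_to_rps, aggregate_uuids):
    """Emulate placement's member_of filter: union across aggregates."""
    rps = set()
    for agg in aggregate_uuids:
        rps |= aggregate_to_rps.get(agg, set())
    return rps


def rps_in_all(aggregate_to_rps, aggregate_uuids):
    """The two-step workaround: query per aggregate, intersect results."""
    result = None
    for agg in aggregate_uuids:
        found = aggregate_to_rps.get(agg, set())
        result = found if result is None else result & found
    return result or set()


# e.g. agg1 carries az=bj and GPU metadata, agg2 carries az=bj only
mapping = {"agg1": {"rp1", "rp2"}, "agg2": {"rp2", "rp3"}}
```

With this mapping, `rps_in_any` over both aggregates yields all three providers, while `rps_in_all` narrows to just `rp2` — the behavior zhenguo wanted for a request combining az and GPU, at the cost of one extra query per aggregate instead of "one sql".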
03:34 <liusheng> wanghao: just disscussed with my colleagues, not very sure, seems Nova can support quotas by user
03:34 <liusheng> zhenguo: hmm...
03:35 <wanghao> liusheng: I see,  but now Mogan just support quota in project level
03:35 <wanghao> liusheng: so a little change here,  support project level first?
03:36 <wanghao> liusheng: zhenguo: make sense for you guys?
03:36 <zhenguo> wanghao, liusheng: a user can list all keypairs of that project, right?
03:37 <zhenguo> wanghao, liusheng: or, when list keypairs, we filter the user id?
03:38 <wanghao> zhenguo: only get the keypairs belong to this user
03:38 <zhenguo> wanghao, liusheng: maybe that's the concern, even if in same tenant, users want to keep their private key secret
03:38 <liusheng> zhenguo: no, just current user's , but admin can specify a user_id to query other users, I left a TODO for that :(
03:39 <wanghao> zhenguo: It's ok now I think,  we just limit the quota in project level,  no influence in creation or getting.
03:41 <zhenguo> wanghao: seems it's ok, wdyt liusheng?
03:41 <zhenguo> wanghao, liusheng: so the project id is only for quotas
03:42 <liusheng> zhenguo, wanghao I am ok, but just thinking, besides quota, is there other meaning of adding a project field?
03:42 <liusheng> wanghao: hah, same question
03:42 <zhenguo> wanghao, liusheng: seems not
03:42 <wanghao> zhenguo: yes, we just record the project_id in keypair object  and use it for quota only
03:42 <liusheng> wanghao: so please go a head :)
03:42 <zhenguo> hah
03:43 <wanghao> zhenguo:  or...  we can got all keypairs for admin rule in one project...
03:43 <zhenguo> before we introduce user level  quota, it's ok for that
03:43 <wanghao> liusheng: just a little thinking
03:43 <zhenguo> wanghao: maybe yes, need to find real requirements for that, hah
03:44 <wanghao> zhenguo: yes
03:44 <wanghao> okay, so I will add project_id in keypair object, and use it in quota.
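[editor's note] The agreement above — keypairs stay owned by a user (listing is per-user so private keys stay private), but a project_id is recorded on each keypair purely so quota can be counted per project — can be sketched as below. All names (`Keypair`, `check_keypair_quota`, `QuotaExceeded`) are illustrative, not Mogan's actual objects or API.

```python
class QuotaExceeded(Exception):
    """Raised when a project has used up its keypair quota."""


class Keypair:
    def __init__(self, name, user_id, project_id):
        self.name = name
        self.user_id = user_id          # owner; listing filters on this
        self.project_id = project_id    # recorded only for quota counting


def list_keypairs(keypairs, user_id):
    """Users only see their own keypairs, even within one project."""
    return [kp for kp in keypairs if kp.user_id == user_id]


def check_keypair_quota(keypairs, project_id, limit):
    """Project-level quota check: count across all users in the project."""
    in_use = sum(1 for kp in keypairs if kp.project_id == project_id)
    if in_use >= limit:
        raise QuotaExceeded(
            "project %s reached keypair quota (%d)" % (project_id, limit))
```

Counting by `project_id` while listing by `user_id` is exactly the split agreed on: without the project field, the quota would be a single global limit across all projects.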
03:48 <openstackgerrit> wanghao proposed openstack/mogan master: Support keypair quota in Mogan(part three)  https://review.openstack.org/485112
03:52 <zhenguo> liusheng: seems we usually define an aggregate with one metadata key
03:53 <liusheng> zhenguo: you mean in Nova ?
03:54 <zhenguo> liusheng: but there's no restrict
03:54 <zhenguo> liusheng: yes
03:54 <liusheng> zhenguo: for now, the restrict is in placement
03:54 <zhenguo> liusheng: sigh
04:18 *** harlowja has quit IRC
04:30 *** harlowja has joined #openstack-mogan
05:11 <zhenguo> https://www.mirantis.com/blog/the-first-and-final-word-on-openstack-availability-zones/
05:15 <zhenguo> hi all, do you find az is confusion? I plan to get rid of notion of availability zone in mogan instead you can add a specific key in host aggregate metadata
05:15 <zhenguo> then when you request a server, you can specify az, and for you can'
05:16 <zhenguo> list of availability zones then
05:16 <zhenguo> az will also be a special key for host aggregate, which used for affinity and anti-affinity
05:17 <zhenguo> then if you define a server group which means the server within it will be in the same availability zone or not
05:18 <zhenguo> here availability zone can be rack,  power source, cluster, or whatever defined by opeartors
05:19 <zhenguo> zhangyang: please confirm with liudong about the proposal, thanks!
05:20 <openstackgerrit> Merged openstack/mogan master: Add filters to server list API  https://review.openstack.org/473323
05:20 <zhenguo> liusheng, shaohe_feng, wanghao, litao__: wdyt?
05:33 *** harlowja has quit IRC
05:35 <zhenguo> or maybe for history reason, we can leave the availability zone, then introduce a new affinity_zone which used for server group affinity and anti-affinity policies
05:36 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Add support for DBDeadlock handling  https://review.openstack.org/485021
06:03 <wanghao> zhenguo: In fact,  I didn't get what the AZ used for?
06:04 <wanghao> zhenguo: I mean if user want to choose a server as they want,  they can use host aggregate or server group
06:06 <wanghao> zhenguo: So it's a bigger scope in logic to organize the servers or something else?
06:08 <zhenguo> wanghao: yes, currently az is just a metadata of host aggregate
06:08 <zhenguo> wanghao: but the difference is that users can list the az and request server in that az
06:08 <zhenguo> wanghao: but other metadata not exposed to common users
06:09 <zhenguo> maybe some deployments have already used az
06:15 <wanghao> zhenguo: well I see,  so from user side,  they can't see the host aggregate in fact, but they can see the az, right?
06:16 <zhenguo> wanghao: yes
06:16 <wanghao> zhenguo: they can use az to specify where to put their server, but in fact they just choose a host aggregate.
06:16 <zhenguo> wanghao: correct
06:16 <zhenguo> wanghao: but as the defination, az can be a rack or power source
06:17 <zhenguo> wanghao: but server group can't use anti-affinity based on az to provide high availbility
06:17 <zhenguo> wanghao: so I want to introduce a new affinity-zone notion for server group
06:21 <wanghao> zhenguo: I didn't see why we need a new affinity-zone
06:21 <wanghao> zhenguo: let me check the concept of anti-affinity server group,
06:21 <zhenguo> wanghao: you mean we can based on availability zone?
06:22 <wanghao> zhenguo: if you want a anti-affinity server group,  you mean the servers can not be in the same host aggregat, right?
06:22 <wanghao> zhenguo: I mean maybe we didn't need az either.
06:22 <zhenguo> wanghao: not host aggregate
06:23 <zhenguo> wanghao: but host aggregate with special key maybe affinity-zone
06:23 <zhenguo> wanghao: a key indicates what the aggregate is
06:24 <wanghao> zhenguo: okay,  I see,  you mean a key to tell mogan this aggregate is a affinity-zone, like have a same power source or something.
06:25 <zhenguo> wanghao: yes
06:25 <zhenguo> wanghao: like it's a rack or with same Tor or power source or whatever
06:25 <wanghao> zhenguo: so mogan shouldn't create two servers in this aggregate if they are in a anti-affinity server group?
06:25 <wanghao> zhenguo: emm, I see.
06:26 <zhenguo> wanghao: yes
06:26 <wanghao> zhenguo: well,  then  affinity-zone sounds like a different level scope.
06:26 <zhenguo> wanghao: hah, which is for affinity
06:27 <zhenguo> wanghao: I think az can be used to do that, but someone may already used az for other purpose
06:28 <wanghao> zhenguo: that's sure,   bascially I think AZ is more bigger scope than affinity-group
06:28 <zhenguo> wanghao: yes, I think so
06:28 <wanghao> zhenguo: you can including many host aggreates in somewhere to a single AZ,  like beijing AZ
06:29 <zhenguo> wanghao: that's true
06:30 <wanghao> zhenguo: emm,  then I think affinity-zone sounds good,  just another special key in host aggregates to indicate some detail information.
06:30 <wanghao> zhenguo: but the name.....   kind of like az.....
06:31 <zhenguo> wanghao: hah
06:31 <zhenguo> wanghao: as az can also used for this purpose
06:32 <zhenguo> wanghao: the notion of az is not only for large zone
06:32 <zhenguo> wanghao: you can use it for whatever purpose like other aggregate metadata :(
06:32 <wanghao> zhenguo: that's true,  but that means  operators should create the azs according the affinity rule.
06:33 <zhenguo> wanghao: yes
06:33 <zhenguo> wanghao: https://www.mirantis.com/blog/the-first-and-final-word-on-openstack-availability-zones/ here explains availability zone
06:34 <wanghao> zhenguo: consider this case,  a user want two server in beijing az,  but those servers also should also have high-available.
06:34 <wanghao> zhenguo: okay, I will check it.
06:34 <zhenguo> wanghao: then there should be an aggregate with metadata az=beijing including all nodes there
06:35 <zhenguo> wanghao: and there maybe some small aggregaes with affinity-zone
06:35 <zhenguo> wanghao: az maybe for a DC, but affinity zone for rack in the DC
06:35 <wanghao> zhenguo: yes
06:36 <wanghao> zhenguo: but there is another question, now can't be many aggreagtes have a same az?
06:36 <zhenguo> wanghao: they can
06:37 <wanghao> zhenguo: as this case,   I thinks there should be many aggreagtes have same az=beijing, not one aggreagte including all nodes.
06:37 <zhenguo> wanghao: yes
06:37 <wanghao> zhenguo: and mogan just need to pick two aggregates from them with different affinity-zone
06:38 <zhenguo> wanghao: only for server groups
06:38 <wanghao> zhenguo: of course
06:39 <zhenguo> wanghao: when you request a server and specify a server groups with anti-affinity policy, then we will check the servers in the group and find the aggregates they are in, then scheduler to an aggregate not same with others
06:40 <wanghao> zhenguo: so the important thing is  how to find 'not same with others',  that means different affinity-zone.
06:41 <zhenguo> wanghao: yes
06:41 <wanghao> zhenguo: I think there cloud be many host aggreates can be within same affinity-group
06:41 <zhenguo> wanghao: if the policy is affinity, then we will schedule to the same aggregate
06:42 <zhenguo> wanghao: yes, they can
06:42 <wanghao> zhenguo: for affinity,  you mean just same aggregate? or same affinity-zone
06:42 <zhenguo> wanghao: but there's restrictions, one node can only be in one affinity_zone aggregates
06:43 <wanghao> zhenguo: that should be
06:43 <zhenguo> wanghao: affinity_zone is a metadata for an aggregate
06:43 <zhenguo> wanghao: same affinity zone
06:43 <wanghao> zhenguo: that's clear more.
06:44 <zhenguo> wanghao: hah, I will prepare a spec soon
06:44 <wanghao> zhenguo: okay, cool
06:45 <zhenguo> wanghao: and please help to review the aggregates patches, that must be done first :D
06:45 <wanghao> zhenguo: okay, sure
06:45 <zhenguo> wanghao: thanks
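[editor's note] The scheduling rule agreed above — for anti-affinity, find the affinity zones already holding the group's servers and pick a node in a different zone; for affinity, pick one in the same zone — can be sketched as below. It relies on the stated restriction that a node belongs to at most one affinity_zone aggregate, so a plain node-to-zone mapping suffices. All names and data shapes are illustrative, not the spec's actual design.

```python
def filter_anti_affinity(node_to_zone, group_nodes, candidates):
    """Keep candidates whose affinity zone is not already used by the group."""
    used_zones = {node_to_zone[n] for n in group_nodes if n in node_to_zone}
    return [c for c in candidates if node_to_zone.get(c) not in used_zones]


def filter_affinity(node_to_zone, group_nodes, candidates):
    """Keep candidates in the same affinity zone as the group's servers."""
    zones = {node_to_zone[n] for n in group_nodes if n in node_to_zone}
    if not zones:
        # empty group: any candidate satisfies the affinity policy
        return list(candidates)
    return [c for c in candidates if node_to_zone.get(c) in zones]
```

For example, with server n1 already placed in rack1, an anti-affinity request would reject every other rack1 node, while an affinity request would accept only rack1 nodes — exactly the "different affinity-zone" / "same affinity zone" split in the conversation.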
06:48 <openstackgerrit> wanghao proposed openstack/mogan master: Support quota for keypairs in Mogan(part-three)  https://review.openstack.org/485461
06:54 <openstackgerrit> wanghao proposed openstack/mogan master: Support quota for keypairs in Mogan(part-three)  https://review.openstack.org/485461
06:55 <liusheng> zhenguo: sorry, I didn't get you thoughts very clearly
06:56 <liusheng> zhenguo: just scaned above conversation, seems you just want to merge the az conception and server group in mogan, right ?
06:57 <zhenguo> liusheng: no, I want to introduce a new key affinity_zone
06:58 <liusheng> zhenguo: yes, I know
06:58 <zhenguo> liusheng: which will be used for server group affinity
06:58 <zhenguo> liusheng: as az is an old notion in openstack, so it maybe used for other purpose
06:58 <liusheng> zhenguo: so that looks an concept of az+server group
06:59 <liusheng> zhenguo: yes, hah, I mean them in Nova
06:59 <zhenguo> liusheng: yes, at first, I just want to use az
06:59 <zhenguo> liusheng: yes
07:00 <liusheng> zhenguo: I didn't think out clearly yet, so what is the node aggregate for ?
07:00 <openstackgerrit> wanghao proposed openstack/mogan master: Miss the argument in db/driver.py  https://review.openstack.org/485471
07:00 <zhenguo> liusheng: both az and affinity zone are metadata of aggregate
07:00 <liusheng> zhenguo: yes
07:01 <zhenguo> liusheng: aggregate is just mean groups without meanings
07:01 <liusheng> zhenguo: an affinity zone may inludes serveral aggregates, right ?
07:01 <zhenguo> liusheng: only the metadata associated the aggregate have meanings
07:01 <wanghao> liusheng: yes, an affinity zone can include serveral host aggregates.
07:02 <zhenguo> liusheng: here if we say az or affinity zone with aggregate, we will be confused
07:02 <zhenguo> liusheng: we can use nodes for availability zone and affinity zone
07:02 <zhenguo> liusheng: as aggregaets is just some relationship
07:03 <zhenguo> liusheng: it can be az or affinity zone
07:03 <liusheng> zhenguo: but for actual usages, an affinitiy zone maybe a rack, right ?
07:03 <zhenguo> liusheng: yes
07:03 <zhenguo> liusheng: or tow racks with the same power source
07:04 <liusheng> zhenguo: affinity may not only related with power source
07:04 <liusheng> zhenguo: e.g. network trafic
07:04 <zhenguo> liusheng: sure
07:04 <zhenguo> liusheng: so we use affinity zone instead of rack or power source
07:05 <zhenguo> liusheng: as my understanding, aggregates are just some easy way to add traits to nodes, hah
07:05 <liusheng> zhenguo: so there maybe a possible problem, users may don't how that it represent for of the affinity zone
07:06 <liusheng> zhenguo: what the affinity means of an specified affinity zone
07:06 <zhenguo> liusheng: users can't define aggregate
07:06 <zhenguo> liusheng: cloud providers define that for users
07:07 <zhenguo> liusheng: aha, I see, you mean users don't know what affinity means
07:07 <liusheng> zhenguo: but at least users need to know that
07:07 <liusheng> zhenguo: correct!
07:07 <liusheng> zhenguo: as we talked the affinity may means different things
07:08 <zhenguo> liusheng: that depends  on what a deployment define it
07:08 <zhenguo> liusheng: it can provide some description to users, just like users don't know what availability zone mean
07:10 <zhenguo> liusheng: in different cloud deployments, it may got different definations
07:10 <liusheng> zhenguo: hmm.. seems availability zone is related logical concept, but affinity zone means real thing
07:11 <zhenguo> liusheng: we don't want to fix it to real thing as well
07:12 <liusheng> zhenguo: for an example, user may want to create two servers which should be putted into an affinity zone which can have high speed to acess each others, but users may don't how if the affinity zone provided by cloud operator have the capability
07:13 <zhenguo> liusheng: why users want to do that, we only tell them that we can provide server group affinity to let your server with high availbility
07:14 <zhenguo> liusheng: we don't want to show the backend details to them
07:14 <liusheng> zhenguo: affinity zones related with hight availability ?
07:15 <zhenguo> liusheng: just logical means
07:15 <zhenguo> liusheng: you can told your users what does that mean in your cloud
07:16 <liusheng> zhenguo: for server group, I think it is a common use that users want the servers in a gourp have a high speed to acess each other
07:16 <liusheng> zhenguo: ...
07:16 <zhenguo> liusheng: not only
07:17 <zhenguo> liusheng: that's only for affinity,
07:17 <liusheng> zhenguo: yes, just a example
07:17 <zhenguo> liusheng: I talked with our product team yesterday, they just want anity-affinity for different rack for high availability
07:18 <liusheng> zhenguo: let me think more
07:18 <zhenguo> liusheng: ok
07:18 <zhenguo> liusheng: we just provide the ability, but what does that mean decided by cloud deployers
07:19 <liusheng> zhenguo: I am just think another way, just personal thoughts
07:20 <liusheng> zhenguo: how about just we just provide the ability for tagging a aggregate, but we don't have concept of affinity zone, availabity zones
07:20 <liusheng> zhenguo: and our scheduler can provide the ability to filter the specific tags of the aggregate ?
07:21 <liusheng> zhenguo: what do you think
07:23 <zhenguo> liusheng: the question is how does users know what tags to specify
07:24 <liusheng> zhenguo: as your proposal, cloud operator add the tags
07:24 <zhenguo> liusheng: we don't want to show that to users which will let users know more details of our backend
07:24 <zhenguo> liusheng: we even don't show aggregate metadata to users
07:25 <zhenguo> liusheng: we don't want to bring more complex to users
07:25 <liusheng> zhenguo: it is more like to show users the ability of aggreagtes.
07:26 <zhenguo> liusheng: why we want to show that to common users
07:26 <liusheng> zhenguo: I am afraid there maybe more procise scheduling requirement in the future
07:27 <liusheng> zhenguo: why don't, users pass scheduling parameter according to their requirement
07:27 <liusheng> zhenguo: but just personal thoughts
07:28 <liusheng> zhenguo: you can please to propose your spec firstly, hah
07:28 <zhenguo> liusheng: then users will got how many aggregates you have with what metadata, and maybe how many servers you got, then that's not cloud
07:30 <liusheng> zhenguo: not exactly, the situation like we providing flavors with resource classes to usrs
07:31 <zhenguo> liusheng: I also plan to not show resource classes to users
07:31 <zhenguo> liusheng: and just the description
07:31 <zhenguo> liusheng: as users can't read what the resource classes mean and it will confuse users
07:31 <openstackgerrit> wanghao proposed openstack/mogan master: Support quota for keypairs in Mogan(part-three)  https://review.openstack.org/485461
07:32 <liusheng> zhenguo: yes, that is reasonable, but users need to know what's ability of a specified flavor
07:32 <zhenguo> liusheng: all I want to tell users include in the description with a human readble context
07:32 <liusheng> zhenguo: even not exact info
07:34 <liusheng> zhenguo: ok
07:34 <zhenguo> liusheng: hah
07:35 <zhenguo> liusheng: we can just provide the ability, if there's really user requirements, we can discuss then
07:35 <zhenguo> liusheng: wdyt?
07:35 <liusheng> zhenguo: sure, thanks
07:36 <liusheng> zhenguo: but your proposal also make sense to me, hah
07:36 <zhenguo> liusheng: lol
07:36 <zhenguo> liusheng: for server groups, placement aggregates well fit it
07:36 <liusheng> zhenguo: I think some thinks most openstack project are "infrastructure"
07:36 *** yushb has joined #openstack-mogan
07:37 <liusheng> zhenguo: ok
07:37 <wanghao> guys, https://review.openstack.org/#/c/485461/  quota for keypairs has ready and tested pass. you can have a look
07:37 <zhenguo> liusheng: one sql can satisfy both affinity and antiy-affinity requirements
07:38 <zhenguo> wanghao: thanks
07:38 <wanghao> and I find an issue in db/driver.py about filter function, we can fix quickly: https://review.openstack.org/#/c/485471/
07:39 <zhenguo> wanghao: hah, sure.
07:39 <liusheng> wanghao: +2ed for the easy one, hah
07:39 <zhenguo> hah, will +A when the gate truns green
07:40 <zhenguo> but maybe don't need to wait, as it will must pass the gate :D
07:49 <liusheng> hah
07:52 <wanghao> haha
08:19 <openstackgerrit> Merged openstack/mogan master: Miss the argument in db/driver.py  https://review.openstack.org/485471
08:20 <openstackgerrit> wanghao proposed openstack/mogan master: Support quota for keypairs in Mogan(part-three)  https://review.openstack.org/485461
08:41 <zhenguo> liusheng: I have added the first there aggregate patches but without tests, you can help to test if got time
08:42 <liusheng> zhenguo: ok, will try
08:43 <zhenguo> liusheng: thanks, I will add a following up patch to make availability zone list by aggregate metadata
08:44 <liusheng> zhenguo: ok,
09:00 <openstackgerrit> liusheng proposed openstack/mogan master: Add support for scheduler_hints  https://review.openstack.org/463534
09:11 <openstackgerrit> liusheng proposed openstack/mogan master: Add support for scheduler_hints  https://review.openstack.org/463534
09:17 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Adds aggregates DB model and API  https://review.openstack.org/482786
09:17 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Add aggregate API  https://review.openstack.org/484690
09:17 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Add aggregate object  https://review.openstack.org/484630
09:17 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Retrieve availability zone from aggregate  https://review.openstack.org/485506
09:18 <zhenguo> liusheng: seems nova will get rid of reschdule
09:22 <liusheng> zhenguo: really ?
09:22 <zhenguo> liusheng: matt said that during last meeting
09:22 <liusheng> zhenguo: just git rid of or a alternative way ?
09:23 <zhenguo> liusheng: seems a scheuling will got a list of alternatives host to use
09:23 <zhenguo> liusheng: when failed, pick one from that
09:23 <liusheng> zhenguo: oh, seems like a alternative way
09:24 <zhenguo> liusheng: then, there will not rescheuling
09:24 <zhenguo> liusheng: I'm not sure how to make that happen for us, as the alternative nodes may get used by other requests
09:25 <liusheng> zhenguo: just talked with zhenyu, he said seems, rescheduling only exist in cell0
09:26 <zhenguo> liusheng: yes
09:28 <zhenguo> liusheng: do you know how many time left for Pike?
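[editor's note] The "alternates" approach described above — the scheduler hands back a primary host plus a list of alternatives, and the engine tries each in turn instead of going back through a full reschedule — can be sketched as a simple claim loop. This is a hypothetical illustration, not Nova's or Mogan's code; `claim_node` stands in for whatever claims the resource, and it can fail exactly for the reason zhenguo raises: an alternate may have been consumed by another request in the meantime.

```python
def provision_with_alternates(candidates, claim_node):
    """Try the primary node, then each alternate, until one claim succeeds.

    candidates: primary node first, followed by alternates.
    claim_node: callable returning True if the node was successfully claimed.
    """
    for node in candidates:
        if claim_node(node):
            return node
    # No full reschedule here: if every candidate was taken, the request fails.
    raise RuntimeError("all scheduled candidates were already claimed")
```

The trade-off surfaced in the conversation is visible in the last line: with no rescheduling loop, a request whose alternates were all raced away simply errors out rather than going back to the scheduler.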
09:35 *** wanghao has quit IRC
09:35 <openstackgerrit> liusheng proposed openstack/mogan master: Add support for scheduler_hints  https://review.openstack.org/463534
09:35 <liusheng> zhenguo: let me confirm
09:41 <liusheng> zhenguo: https://releases.openstack.org/pike/schedule.html#p-release
09:42 <liusheng> zhenguo: sees 24.Jul~28.Jul
09:43 <zhenguo> liusheng: thanks, so we still have one month
09:43 <liusheng> zhenguo: s/sees/seems/
09:44 <liusheng> zhenguo: no
09:44 <zhenguo> liusheng: The Pike coordinated release will happen on 30 August 2017
09:45 <liusheng> zhenguo: I am not sure about the coordinated release
09:45 <liusheng> zhenguo: RC1 target week
09:45 <liusheng> The week of 7-11 August 2017 is the target date for projects following the release:cycle-with-milestones model to issue their first release candidate, with a deadline of 10 August 2017.
09:45 *** wanghao has joined #openstack-mogan
09:45 <liusheng> zhenguo: ^^, usaully, the rc1 is the release date
09:45 <zhenguo> liusheng: but we are not official, hah
09:46 <zhenguo> liusheng: we can release at anytime
09:46 <liusheng> zhenguo: you can download the calendar
09:46 <liusheng> zhenguo: https://releases.openstack.org/
09:46 <liusheng> zhenguo: on the first line of this page
09:47 <liusheng> zhenguo: hah, yes
09:48 <zhenguo> liusheng: hah
09:51 *** wanghao has quit IRC
10:02 *** yushb has quit IRC
10:10 *** yushb has joined #openstack-mogan
10:12 *** yushb has quit IRC
11:45 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Adds aggregates DB model and API  https://review.openstack.org/482786
11:45 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Retrieve availability zone from aggregate  https://review.openstack.org/485506
11:45 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Add aggregate API  https://review.openstack.org/484690
11:45 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Add aggregate object  https://review.openstack.org/484630
11:56 <zhenguo> liusheng: hi, for aggregate nodes, which value would you like to be returned, name or uuid?
11:56 <liusheng> zhenguo: is name unique ?
11:56 <zhenguo> liusheng: seems yes
11:56 <liusheng> zhenguo: I may prefer name
11:56 <zhenguo> liusheng: hah
11:57 <zhenguo> liusheng: and we should add a node list API back :(
11:57 <liusheng> zhenguo: why ?
11:57 <zhenguo> liusheng: as we need to add node to aggregate
11:58 <zhenguo> liusheng: but it can be simple
11:58 <liusheng> zhenguo: how about just retrive nodes from placement ?
11:58 <zhenguo> liusheng: that's resource providers
11:58 <liusheng> zhenguo: yes
11:59 <zhenguo> liusheng: we can add a simple API, maybe just return a list of node name?
11:59 <zhenguo> liusheng: read from our cache
12:00 <zhenguo> liusheng: for security reason, in production env, we may not let admins access placement directly
12:00 <liusheng> zhenguo: not sure if we can just use ironic node-list
12:00 <zhenguo> liusheng: ironic node list can be different from our node list
12:00 <zhenguo> liusheng: as we will filter it
12:01 <liusheng> zhenguo: yes... hoped we can get rid of nodes api :(
12:01 <zhenguo> liusheng: but it works like a proxy API, not quite like it
12:03 <liusheng> zhenguo: yes, I am not sure it is good idea to retive nodes from cache, may it effects if we want to support multiple workers in the future
12:03 <zhenguo> liusheng: but if we don't use ironic, but other drivers, the API is needed
12:03 <liusheng> zhenguo: how about a proxy api to placement api ?
12:03 <liusheng> zhenguo: yes
12:03 <zhenguo> liusheng: multiple workers also need t o address the current cache problem
12:04 <zhenguo> liusheng: not only for us
12:04 *** litao__ has quit IRC
12:05 <liusheng> zhenguo: sigh... a service with in-memory cache is not stateless
12:05 <liusheng> zhenguo: we'd better to get rid of that
12:05 <zhenguo> liusheng: cache will sync with placement, so it's ok
12:06 <zhenguo> liusheng: as placement data is also sync by mogan
12:06 <zhenguo> liusheng: so the cache is always the newest
12:06 <liusheng> zhenguo: oh
12:07 <liusheng> zhenguo: if we only need node name and node id, we adding a proxy api which can directly call placement api and avoid a call to mogan-engine
12:08 <zhenguo> liusheng: why that's better?
12:09 <zhenguo> liusheng; placement is work like DISK, we are MEMEORY, why not read from memory directly?
12:10 <liusheng> zhenguo: not better, just think an api to retrive info from the cache of a servers looks  a bit strange. hah
12:10 <zhenguo> liusheng: hah
12:11 <zhenguo> liusheng: I don't think it's strange, as when changing the placement data, we first update the cache, there's no problem
12:11 <liusheng> zhenguo: ok
12:12 <zhenguo> liusheng: cache works like a fast db, if there's no others change the backend db
12:13 <liusheng> zhenguo: ok, make sense
12:13 <zhenguo> liusheng: ok
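[editor's note] The write-through cache argument above — Mogan is the only writer of its placement data, so updating the in-memory cache before syncing to placement keeps cached reads consistent ("cache works like a fast db, if there's no others change the backend db") — can be sketched as below. `NodeCache` and its methods are illustrative names, and a plain dict stands in for the placement client; this is the invariant under discussion, not Mogan's actual implementation.

```python
class NodeCache:
    """Minimal write-through cache sketch: cache first, then sync backend."""

    def __init__(self, placement):
        self.placement = placement  # stand-in for the placement backend
        self.nodes = {}             # node name -> node record

    def update_node(self, name, record):
        # Write-through order: the cache is updated before placement,
        # so readers of the cache never see older data than the backend
        # as long as this service is the backend's only writer.
        self.nodes[name] = record
        self.placement[name] = record

    def list_node_names(self):
        """What the lightweight node-list API would serve, from memory."""
        return sorted(self.nodes)
```

The "memory vs disk" point maps directly onto this: `list_node_names` never touches `placement`, yet returns the same node set, which is also why the approach breaks once a second writer (e.g. another worker) can change the backend behind the cache.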
12:13 <zhenguo> liusheng: I have to go, bye
12:14 <liusheng> zhenguo: bye, I also have to go soon
17:09 *** shaohe_feng has quit IRC
17:09 *** shaohe_feng has joined #openstack-mogan
17:23 *** harlowja has joined #openstack-mogan
17:27 *** harlowja has quit IRC
17:28 *** shaohe_feng has quit IRC
17:28 *** shaohe_feng has joined #openstack-mogan
18:06 *** harlowja has joined #openstack-mogan
20:00 *** openstack has joined #openstack-mogan
20:17 *** openstackgerrit has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!