Wednesday, 2017-07-05

00:45 <zhenguo> morning mogan!
00:48 *** wanghao has joined #openstack-mogan
01:00 *** openstackgerrit has joined #openstack-mogan
01:00 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Make floatingip can be saved to db and can be displayed  https://review.openstack.org/475622
01:13 *** litao__ has joined #openstack-mogan
01:14 *** liujiong has joined #openstack-mogan
01:17 *** wanghao has quit IRC
01:18 *** wanghao has joined #openstack-mogan
01:19 <zhenguo> litao__: sorry, I was out of office yesterday
01:22 <litao__> zhenguo: np, you can review my patch again
01:22 <zhenguo> litao__: ok
01:50 <openstackgerrit> wanghao proposed openstack/mogan master: Refactor exception raise by using mogan exception  https://review.openstack.org/475971
01:55 <liusheng> zhenguo: what do you think, do we need to research and try using placement to implement node aggregates now?
01:56 <zhenguo> liusheng: I talked with alex about the aggregate traits
01:57 <zhenguo> liusheng: he said we can list all rps and set traits on them for now
01:58 <liusheng> zhenguo: yes, sure, I have tried some placement APIs
01:58 <zhenguo> liusheng: great
01:59 <liusheng> zhenguo: but I have a doubt about how to set the aggregates: do we need to add an API that calls placement to set the rp traits?
01:59 <zhenguo> liusheng: no, I don't like proxy APIs
01:59 <liusheng> zhenguo: hah, yes
01:59 <zhenguo> liusheng: admins should call the placement API or CLI to manage resources
02:01 <liusheng> zhenguo: yes, there is a placement-osc repo on git.openstack.org now, but it's only a base frame
02:01 <zhenguo> liusheng: yes, we can help to enrich it
02:02 <zhenguo> liusheng: only the parts we need, lol
02:02 <liusheng> zhenguo: really, do we need to improve the placement osc first?
02:02 <liusheng> zhenguo: hah
02:02 <zhenguo> liusheng: no rush
02:03 <liusheng> zhenguo: ok, we can try to improve mogan now to use node aggregate traits
02:03 <zhenguo> liusheng: yes
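The approach zhenguo describes above (list all resource providers and tag each one, with admins driving the placement API directly rather than through a Mogan proxy) can be sketched as request-building code. This is an illustrative sketch, not Mogan code: the URL layout and body shape follow the placement traits API as discussed here, and `base_url` and the `CUSTOM_AGG_*` trait name are placeholders.

```python
# Illustrative sketch (not Mogan code): building the PUT request an admin
# script would send to replace one resource provider's traits in placement.
# The body (resource_provider_generation + traits list) follows the
# placement traits API; base_url is a placeholder endpoint.

def build_traits_request(rp_uuid, generation, traits,
                         base_url='http://placement/resource_providers'):
    """Return (url, body) for replacing the traits of one RP.

    `generation` is the RP's current generation; placement uses it to
    reject concurrent updates to the same provider.
    """
    url = '%s/%s/traits' % (base_url, rp_uuid)
    body = {
        'resource_provider_generation': generation,
        'traits': sorted(traits),  # e.g. one CUSTOM_AGG_* trait per aggregate
    }
    return url, body
```

An admin tool would list every RP (`GET /resource_providers`) and send one such PUT per provider, which matches the "list all rps and set traits on them" plan.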
02:06 <zhenguo> liusheng: the placement work on the mogan side is almost done, right?
02:06 <zhenguo> liusheng: if so, please also help to enrich the specs :D
02:07 <liusheng> zhenguo: yes, but I didn't consider traits yet
02:07 <liusheng> zhenguo: ok
02:07 <zhenguo> liusheng: traits should be done in another spec
02:07 <zhenguo> liusheng: the node aggregates one
02:07 <liusheng> zhenguo: ok
02:18 <liusheng> zhenguo: how about enabling only the placement service, without nova, in our gate?
02:19 <zhenguo> liusheng: I remember there's something wrong with tempest if we don't enable nova
02:19 <zhenguo> liusheng: not sure if you can succeed with just placement enabled
02:19 <liusheng> zhenguo: oh, yes, it seems it needs the "baremetal" flavor of Nova
02:19 <zhenguo> liusheng: hah
02:20 <liusheng> zhenguo: we'd better try to get rid of Nova and run mogan standalone :D
02:21 <zhenguo> liusheng: the problem is that ironic depends on nova for tempest
02:21 <liusheng> zhenguo: not sure if we can fix that
02:22 <zhenguo> liusheng: seems not for now, but for the gate it's ok to run them together
02:23 <liusheng> zhenguo: ok, we can try in the future
02:23 <zhenguo> liusheng: yes
02:29 <zhenguo> liusheng: I left a comment here https://review.openstack.org/#/c/476325/16
02:31 <zhenguo> liusheng: if you will delete the RP when a node is unavailable, why not just keep the original get_available_nodes
02:31 <liusheng> zhenguo: actually, unavailable != not available, hah
02:32 <liusheng> zhenguo: it is not the same as the previous implementation
02:32 <zhenguo> liusheng: you mean more filter options?
02:33 <liusheng> zhenguo: the unavailable nodes are the nodes in abnormal states, which cannot be provisioned
02:33 <liusheng> zhenguo: additionally
02:34 <liusheng> zhenguo: I planned to treat the unavailable nodes and the fake nodes in placement in different ways
02:34 <zhenguo> liusheng: how
02:34 <zhenguo> liusheng: as far as I can see, you will just delete the rp
02:34 <liusheng> zhenguo: the fake nodes (nodes still in placement but gone from ironic) should be removed from placement, together with their allocations and inventories
02:35 <liusheng> zhenguo: for the unavailable nodes, we should only delete the resource providers, without deleting the allocations
02:36 <liusheng> zhenguo: that's because placement doesn't provide an interface to retrieve the allocations of a resource provider yet
02:36 <litao__> zhenguo: I pulled the latest code and mogan raised a db error: "Field 'port_type' doesn't have a default value"
02:36 <liusheng> zhenguo: wdyt?
02:36 <zhenguo> liusheng: need to restack
02:37 <zhenguo> liusheng: didn't get it
02:37 <litao__> zhenguo: what is the default value of port_type?
02:37 <zhenguo> liusheng: we have removed it
02:37 <zhenguo> liusheng: from the db
02:37 <zhenguo> litao__
02:38 <litao__> zhenguo: yes
02:38 <liusheng> zhenguo: yes, because I think the unavailable nodes should be selected by the scheduler
02:38 <zhenguo> liusheng: why should unavailable nodes be selected?
02:38 <zhenguo> liusheng: they can't be provisioned
02:38 <liusheng> zhenguo: but maybe an abnormal node can be recovered in ironic
02:38 <liusheng> zhenguo: sorry, shouldn't
02:39 <zhenguo> liusheng: what's the benefit
02:39 <liusheng> zhenguo: so for the abnormal nodes (nodes in an abnormal state) we should keep the allocations in placement
02:40 <zhenguo> liusheng: why should they be kept
02:42 <zhenguo> liusheng: it seems that in placement we only need to track available and deployed nodes; all others should be removed
02:43 <liusheng> zhenguo: imagine this: a server is running on a node, but for some unexpected reason the node turns into the maintenance state. we should keep the allocations in placement (though in this situation we may not even need to delete the resource provider, since one node can only be used by one server); once the node recovers in ironic, the periodic task will report the resource provider to placement again
02:43 <liusheng> zhenguo: yes
02:43 <liusheng> zhenguo: that's why unavailable != not available
02:44 <liusheng> zhenguo: or maybe we should mark that nodes in an abnormal state shouldn't be selected in placement, but there isn't a mechanism to avoid selecting nodes with specific traits.
02:47 <zhenguo> liusheng: currently, when you call delete_resource_provider, will it keep the allocations?
02:48 <liusheng> zhenguo: yes, there is a TODO in "def delete_resource_provider", and there is a cascade=False parameter to perform that
02:49 <liusheng> zhenguo: but for now it needs the consumer_id (server_id) to delete allocations
02:50 <zhenguo> liusheng: let me check it
02:50 <liusheng> zhenguo: maybe it's better to implement a "delete the allocations of a rp" interface in placement
02:57 <zhenguo> liusheng: my interpretation is that you will keep all allocations for now, because if a deployed node goes into maintenance and comes back again, it can't be scheduled for a rebuild, right?
03:01 <liusheng> zhenguo: yes, even the provisioned nodes cannot be selected by the current placement, but we can treat the provisioned and unprovisioned nodes together
03:01 <zhenguo> liusheng: but we can only deploy nodes in the 'available' state
03:01 <liusheng> zhenguo: for the fake nodes in placement, which means the node may have been removed from ironic, we should remove all the node's info from placement
03:03 <liusheng> zhenguo: yes, but placement records all the nodes
03:03 <zhenguo> liusheng: nodes in any state other than available shouldn't be selectable at all
03:04 <zhenguo> liusheng: the only way to stop one being selected is to set the allocations, right?
03:04 <liusheng> zhenguo: so what's your suggestion?
03:04 <zhenguo> liusheng: set allocations for nodes in abnormal states?
03:05 <liusheng> zhenguo: that is right based on the current implementation
03:05 <zhenguo> liusheng: it seems currently we just delete the resource providers both for nodes not listed by ironic and for nodes in an abnormal state
03:05 <liusheng> zhenguo: both we and nova need placement to provide an interface for "avoiding selecting nodes with specific traits"
03:06 <liusheng> zhenguo: yes
03:06 <liusheng> zhenguo: actually the ironic driver in Nova ignores this, it just reports all the nodes to placement, which I think is a bug
03:07 <zhenguo> liusheng: you mean we should set a STATE trait on the resource provider and filter on it?
03:07 <liusheng> zhenguo: yes, filter out the rps with specific traits
03:08 <zhenguo> liusheng: the nova ironic driver will report abnormal nodes as 0 CPU, 0 MEM
03:09 <liusheng> zhenguo: no, if the nodes have "resource_class" set, nova will report inventories like we do
03:09 <zhenguo> liusheng: but it's not scheduled based on that now
03:10 <zhenguo> liusheng: it also reports the VCPU, RAM
03:10 <zhenguo> liusheng: as flavors in nova can't pass resource class information now
03:11 <liusheng> zhenguo: oh, yes
03:13 <zhenguo> liusheng: it seems tricky to handle nodes in abnormal states
03:13 <liusheng> zhenguo: but for us, the only ways the scheduler can avoid selecting a node are removing the rp from placement or the node being already allocated
03:13 *** Xinran has quit IRC
03:14 <zhenguo> liusheng: yes
03:14 <liusheng> zhenguo: I have tried to set the "total" field in the inventory to 0, but the placement api doesn't allow that
03:17 <liusheng> zhenguo: maybe the Nova guys didn't test that; the total in the inventory body cannot be set to 0
03:18 <zhenguo> liusheng: maybe that's what's expected
03:18 <zhenguo> liusheng: allocations are meant to do that
03:18 <liusheng> zhenguo: https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/handlers/inventory.py#L39
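The 400 liusheng hit comes from the schema at the link above: placement's inventory body requires `total` to be at least 1, so a resource provider cannot be "emptied" in place; you either delete the provider or consume it with an allocation. A toy validator restating only that one rule (the field name matches the API body; the rest of the schema is omitted):

```python
# Toy re-statement of the constraint discussed above: placement's
# inventory schema sets a minimum of 1 on "total", so total=0 is
# rejected with a 400. Only this one rule is mirrored here.

def validate_inventory_total(inventory):
    total = inventory.get('total')
    if not isinstance(total, int) or total < 1:
        raise ValueError("total must be an integer >= 1, got %r" % total)
    return inventory


validate_inventory_total({'total': 5})       # accepted
try:
    validate_inventory_total({'total': 0})   # rejected, like the real API
except ValueError:
    pass
```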
03:19 <zhenguo> liusheng: my concern about the current list-all-nodes call is that
03:19 <zhenguo> liusheng: we set detail=True and limit=0
03:19 <zhenguo> liusheng: if we have many nodes in the backend, the API response body will be very huge
03:20 <liusheng> zhenguo: yes, how about improving that in the future with a pagination query
03:20 <zhenguo> liusheng: sounds good
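The pagination idea floated above can be sketched as a limit/marker loop, the style the ironic API supports, instead of one `detail=True, limit=0` call. This is a sketch, not Mogan code: `fetch_page` is an injected, hypothetical callable so the paging logic is testable without a real client.

```python
# Sketch of limit/marker pagination: fetch nodes a page at a time rather
# than in one huge response. fetch_page(limit=..., marker=...) is a
# hypothetical callable standing in for a real ironic client call.

def iter_nodes(fetch_page, page_size=100):
    """Yield node records page by page until the results run out."""
    marker = None
    while True:
        page = fetch_page(limit=page_size, marker=marker)
        if not page:
            return
        for node in page:
            yield node
        if len(page) < page_size:
            return                 # short page: nothing left to fetch
        marker = page[-1]['uuid']  # next page starts after the last uuid
```

The memory high-water mark then scales with `page_size` instead of the full node count, which is the point of zhenguo's concern.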
03:23 <zhenguo> liusheng: for allocations, I think it's ok to always keep them until we delete the server
03:23 <zhenguo> liusheng: no need to treat the nodes differently
03:23 <zhenguo> liusheng: it makes sense to delete the allocation when deleting the server
03:24 <liusheng> zhenguo: deleting a server should delete the allocation
03:24 <zhenguo> liusheng: so it's ok
03:24 <liusheng> zhenguo: that has been implemented in the patches, hah
03:24 <zhenguo> liusheng: ok, so for my comments
03:25 <zhenguo> liusheng: I think we don't need to treat unavailable nodes and nodes not in the list differently
03:25 <zhenguo> liusheng: just one get_available interface from the driver, and removing any rp not in that list is ok
03:26 <liusheng> zhenguo: ok, will modify that
03:27 <zhenguo> liusheng: ok, thanks
03:27 <zhenguo> liusheng: and we can move the maintenance filter to the node list API
03:27 <liusheng> zhenguo: ok
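The design settled on above fits in one pure function: the driver reports the set of usable node UUIDs, and every resource provider placement knows about that is not in that set gets its RP deleted (allocations live on until the server is deleted). Names here are illustrative, not Mogan's actual code.

```python
# Illustrative sketch of the agreement above: compute which resource
# providers are stale (present in placement but absent from the driver's
# available-nodes list) and should therefore be deleted. Allocations are
# untouched; they are removed only when the server is deleted.

def stale_resource_providers(placement_rp_uuids, available_node_uuids):
    """Return the RP uuids to delete from placement."""
    return set(placement_rp_uuids) - set(available_node_uuids)


# A periodic task would then call something like
# delete_resource_provider(uuid) for each returned uuid.
stale = stale_resource_providers({'n1', 'n2', 'n3'}, {'n1', 'n3'})
```

This treats "unavailable" nodes and nodes gone from ironic identically, which is exactly the simplification zhenguo asked for.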
04:03 <openstackgerrit> Merged openstack/mogan-specs master: Track resources using Placement  https://review.openstack.org/475700
04:27 <openstackgerrit> wanghao proposed openstack/mogan master: Refactor exception raise by using mogan exception  https://review.openstack.org/475971
06:01 <litao__> liusheng, zhenguo
06:02 <litao__> liusheng, zhenguo: we have removed port_type, but mogan raised a 'not found port_type' error in a db query.
06:08 <zhenguo> litao__: a bug?
06:08 <zhenguo> litao__
06:09 <litao__> zhenguo: I solved it by
06:09 <litao__> restarting the mogan-scheduler service
06:09 <litao__> zhenguo: hah
06:09 <zhenguo> litao__: so, it's not an issue, right?
06:10 <litao__> zhenguo: yes
06:10 <zhenguo> litao__: ok
06:14 <openstackgerrit> Jeremy Liu proposed openstack/mogan-ui master: Drop MANIFEST.in - it's not needed by pbr  https://review.openstack.org/480361
06:19 <openstackgerrit> liusheng proposed openstack/mogan master: Clean the compute_node and compute_port objects and db interfaces  https://review.openstack.org/478406
06:19 <openstackgerrit> liusheng proposed openstack/mogan master: Get rid of listing availability zone api and clean some legacy code  https://review.openstack.org/478403
06:19 <openstackgerrit> liusheng proposed openstack/mogan master: Reporting nodes resource to placement service  https://review.openstack.org/476325
06:19 <openstackgerrit> liusheng proposed openstack/mogan master: Clean the methods about updating node resources to Mogan db  https://review.openstack.org/478357
06:19 <openstackgerrit> liusheng proposed openstack/mogan master: Get rid of node listing api of Mogan  https://review.openstack.org/478361
06:19 <openstackgerrit> liusheng proposed openstack/mogan master: Consume nodes resource in placement  https://review.openstack.org/477826
06:19 <openstackgerrit> liusheng proposed openstack/mogan master: Refactor the scheduler to use placement service  https://review.openstack.org/477426
06:37 <openstackgerrit> Merged openstack/mogan-ui master: Drop MANIFEST.in - it's not needed by pbr  https://review.openstack.org/480361
06:46 <openstackgerrit> Tao Li proposed openstack/mogan master: Fix reverting the network when plug vif failed  https://review.openstack.org/477440
07:57 *** Xinran has joined #openstack-mogan
07:58 *** Xinran has quit IRC
07:59 *** Xinran has joined #openstack-mogan
08:18 <openstackgerrit> wanghao proposed openstack/mogan master: Manage existing BMs: Part-1  https://review.openstack.org/479660
08:41 <liusheng> zhenguo: hi zhenguo, do you think it could be misleading if we use "available resource"?
08:50 <zhenguo> liusheng: seems it's ok
08:50 <zhenguo> liusheng: it's a driver interface
08:51 <liusheng> zhenguo: hah, I tried to avoid that confusion
08:51 <zhenguo> liusheng: we call that interface to get the available resources for scheduling
08:51 <liusheng> zhenguo: ok,
08:51 <zhenguo> liusheng: hah
08:54 <liusheng> zhenguo: do you prefer get_available_resources over get_available_nodes?
08:54 <zhenguo> liusheng: I'm fine with both,
08:55 <zhenguo> liusheng: maybe we need to get other resources like ports or hard drives, so maybe get_available_resources is better?
08:56 <liusheng> zhenguo: nova calls it get_available_nodes, hah
08:56 <zhenguo> liusheng: that's because nova only gets node resources, hah
09:00 <liusheng> zhenguo: ok, will update the patch as you wish, sir. lol
09:00 <zhenguo> liusheng: thanks, hah
09:09 <openstackgerrit> liusheng proposed openstack/mogan master: Clean the compute_node and compute_port objects and db interfaces  https://review.openstack.org/478406
09:09 <openstackgerrit> liusheng proposed openstack/mogan master: Get rid of listing availability zone api and clean some legacy code  https://review.openstack.org/478403
09:09 <openstackgerrit> liusheng proposed openstack/mogan master: Reporting nodes resource to placement service  https://review.openstack.org/476325
09:09 <openstackgerrit> liusheng proposed openstack/mogan master: Clean the methods about updating node resources to Mogan db  https://review.openstack.org/478357
09:09 <openstackgerrit> liusheng proposed openstack/mogan master: Get rid of node listing api of Mogan  https://review.openstack.org/478361
09:09 <openstackgerrit> liusheng proposed openstack/mogan master: Consume nodes resource in placement  https://review.openstack.org/477826
09:09 <openstackgerrit> liusheng proposed openstack/mogan master: Refactor the scheduler to use placement service  https://review.openstack.org/477426
09:12 <zhenguo> liusheng: will it break the gate if we just replace the get_available_resources implementation
09:12 <zhenguo> liusheng: as the scheduler still depends on the original get_available_resources method
09:13 <liusheng> zhenguo: hmmm, yes..
09:13 <zhenguo> liusheng: hah, maybe you can just use get_available_nodes ..
09:13 <zhenguo> liusheng: at least in Pike we will not introduce new resources
09:14 <liusheng> zhenguo: I also considered that, so let's name it get_available_nodes, and we can change it in a following patch if we need to
09:14 <zhenguo> liusheng: yes
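The `get_available_nodes` interface the naming debate settles on boils down to a driver-side filter over ironic nodes. A sketch under stated assumptions: the state names are ironic's; keeping `active` nodes reportable too (so deployed servers retain their resource providers) follows the earlier "track available and deployed nodes" point, and the node shape here is a plain dict rather than Mogan's real node object.

```python
# Sketch of the driver interface being named above: return only nodes
# that should be reported to placement. 'available' and 'active' are
# ironic provision states; including 'active' (deployed) nodes is an
# assumption drawn from the discussion, not Mogan's literal code.

REPORTABLE_STATES = {'available', 'active'}

def get_available_nodes(nodes):
    """Filter out maintenance nodes and nodes in abnormal states."""
    return [n for n in nodes
            if n['provision_state'] in REPORTABLE_STATES
            and not n['maintenance']]
```

If new resource types (ports, disks) are added later, this is the single spot that would grow into the `get_available_resources` shape discussed above.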
09:21 <openstackgerrit> liusheng proposed openstack/mogan master: Clean the compute_node and compute_port objects and db interfaces  https://review.openstack.org/478406
09:21 <openstackgerrit> liusheng proposed openstack/mogan master: Get rid of listing availability zone api and clean some legacy code  https://review.openstack.org/478403
09:21 <openstackgerrit> liusheng proposed openstack/mogan master: Reporting nodes resource to placement service  https://review.openstack.org/476325
09:21 <openstackgerrit> liusheng proposed openstack/mogan master: Clean the methods about updating node resources to Mogan db  https://review.openstack.org/478357
09:21 <openstackgerrit> liusheng proposed openstack/mogan master: Get rid of node listing api of Mogan  https://review.openstack.org/478361
09:21 <openstackgerrit> liusheng proposed openstack/mogan master: Consume nodes resource in placement  https://review.openstack.org/477826
09:21 <openstackgerrit> liusheng proposed openstack/mogan master: Refactor the scheduler to use placement service  https://review.openstack.org/477426
09:23 *** wanghao_ has joined #openstack-mogan
09:24 *** wanghao_ has quit IRC
09:26 *** wanghao has quit IRC
09:39 * zhenguo brb
10:07 *** liujiong has quit IRC
10:29 <shaohe_feng> https://review.openstack.org/#/c/478406/18..15//COMMIT_MSG
10:30 <shaohe_feng> ^ liusheng: does the first line need a fix? why does it need a fix?
10:48 <zhenguo> shaohe_feng: it seems the recommended length should be less than 50
10:52 <shaohe_feng> zhenguo: got it.
10:52 <zhenguo> shaohe_feng: but it's not compulsory
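The "first line" rule under discussion is the common git convention of keeping the commit subject to roughly 50 characters, which reviewers often flag but which, as zhenguo notes, is not compulsory. A trivial checker for that one rule:

```python
# Trivial check for the convention discussed above: a commit message's
# first line (the subject) should stay within ~50 characters.

def subject_too_long(commit_message, limit=50):
    subject = commit_message.splitlines()[0] if commit_message else ''
    return len(subject) > limit


# The 65-character subject from the patch under review trips the check:
subject_too_long('Clean the compute_node and compute_port objects'
                 ' and db interfaces')
```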
11:21 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Reporting nodes resource to placement service  https://review.openstack.org/476325
11:22 <zhenguo> shaohe_feng: are you still around?
11:22 <zhenguo> shaohe_feng: please have a look at https://review.openstack.org/#/c/475302/ and https://review.openstack.org/#/c/475622/ when you get time
11:31 <zhenguo> shaohe_feng: and please help to review the first placement patch https://review.openstack.org/#/c/476325/ ; let's try to land the placement stuff this week.
11:34 <shaohe_feng> zhenguo: OK.
11:34 <zhenguo> shaohe_feng: thanks
11:34 <shaohe_feng> it would be really good if we can land the placement stuff this week.
11:35 <zhenguo> shaohe_feng: in my tests, it works well now
11:35 <zhenguo> shaohe_feng: we need to leave enough time to prepare the first release
12:29 *** dims has quit IRC
12:44 *** dims has joined #openstack-mogan
14:10 *** wanghao has joined #openstack-mogan
15:02 <openstackgerrit> Wangliangyu proposed openstack/python-moganclient master: Drop MANIFEST.in - it's not needed by pbr  https://review.openstack.org/480637
15:03 *** liujiong has joined #openstack-mogan
15:04 *** liujiong has quit IRC
16:10 *** wanghao has quit IRC
16:11 *** wanghao has joined #openstack-mogan
16:11 *** wanghao has quit IRC
16:11 *** wanghao has joined #openstack-mogan
18:17 *** litao__ has quit IRC
22:24 *** wanghao has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!