Tuesday, 2017-08-22

00:13 *** wanghao_ has quit IRC
00:13 *** wanghao has joined #openstack-mogan
00:38 <openstackgerrit> OpenStack Proposal Bot proposed openstack/mogan master: Updated from global requirements  https://review.openstack.org/494831
00:51 *** wanghao_ has joined #openstack-mogan
00:52 *** litao__ has joined #openstack-mogan
00:54 *** wanghao has quit IRC
01:14 <openstackgerrit> wanghao proposed openstack/mogan master: Manage existing BMs: Part-1  https://review.openstack.org/479660
01:48 <zhenguo> morning mogan!
01:58 <openstackgerrit> Xinran WANG proposed openstack/mogan master: Get rid of flavor access  https://review.openstack.org/495758
02:01 <openstackgerrit> Zhong Luyao proposed openstack/python-moganclient master: Add interface attach/detach support  https://review.openstack.org/466168
02:12 <openstackgerrit> Merged openstack/mogan master: Do not allow to remove flavor which is still in use.  https://review.openstack.org/493412
02:17 <litao__> morning
02:22 <zhenguo> litao__: o/
02:23 <zhenguo> Xinran: hi, do you want to take this https://bugs.launchpad.net/mogan/+bug/1712234 :D
02:23 <openstack> Launchpad bug 1712234 in Mogan "No keypair info in server object" [Undecided,New]
02:25 <Xinran> zhenguo, sure
02:25 <zhenguo> Xinran: hah, thanks!
02:25 <Xinran> zhenguo, np
02:26 <Xinran> zhenguo, is the key info only visible to admin users?
02:26 <zhenguo> Xinran: no, for users
02:27 <Xinran> zhenguo, all users?
02:27 <zhenguo> Xinran: as users can create and specify keys
02:27 <zhenguo> Xinran: yes
02:27 <Xinran> zhenguo, ok, got it
02:27 <zhenguo> Xinran: we just save what the users specified when claiming a server
02:28 <Xinran> zhenguo, so save the key info as a field of the server?
02:28 <zhenguo> Xinran: yes, maybe just the key name is enough
02:28 <Xinran> zhenguo, ok
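Going by the discussion above, the fix for bug 1712234 would persist only the key name on the server object. A minimal sketch, assuming an oslo.versionedobjects-style model; the class shape and field set are illustrative, not Mogan's actual code.

    # Sketch only: record the keypair *name* the user specified when
    # claiming the server; the key material itself stays out of the record.
    from oslo_versionedobjects import base as object_base
    from oslo_versionedobjects import fields as object_fields

    class Server(object_base.VersionedObject):
        VERSION = '1.0'
        fields = {
            'uuid': object_fields.UUIDField(),
            'name': object_fields.StringField(nullable=True),
            # New field, visible to the owning user (not admin-only).
            'key_name': object_fields.StringField(nullable=True),
        }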
02:29 *** wanghao_ has quit IRC
02:31 *** wanghao has joined #openstack-mogan
02:33 <openstackgerrit> Xinran WANG proposed openstack/mogan master: support specifying port_id when attaching interface  https://review.openstack.org/494121
02:37 <litao__> zhenguo, liusheng: see my comments https://review.openstack.org/#/c/476862/
02:50 *** afantijiangxl has joined #openstack-mogan
02:51 <zhenguo> afantijiangxl: hi :D
02:51 <afantijiangxl> hi
02:56 *** afantijiangxl is now known as afanti
02:58 <openstackgerrit> Merged openstack/mogan master: Updated from global requirements  https://review.openstack.org/494831
02:58 <zhenguo> litao__: I'm ok with your comments :D
02:59 <litao__> zhenguo: thanks
02:59 <zhenguo> litao__: np
03:01 <zhenguo> liusheng: please have a look at the managing running servers patch 1, we can land it first, then litao__ can update the second one
03:01 <zhenguo> liusheng: need more reviews on that
03:01 <liusheng> zhenguo: ok, will review soon
03:01 <zhenguo> liusheng: thanks!
03:01 <liusheng> zhenguo: np
03:03 *** wanghao has quit IRC
03:03 *** wanghao has joined #openstack-mogan
03:11 <litao__> liusheng: see my reply
03:11 <liusheng> litao__: ok
03:14 <liusheng> litao__: replied inline
03:16 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Return addresses with server API object  https://review.openstack.org/495826
03:18 * zhenguo brb
03:19 <liusheng> zhenguo: still here?
03:20 <liusheng> wanghao: hi, why don't we raise an exception when we cannot connect to ironic while listing manageable nodes?
03:28 *** luyao has joined #openstack-mogan
03:34 <zhenguo> liusheng: yes
03:34 <liusheng> zhenguo: that's the question I asked wanghao; do you know if there is any reason? ^^
03:35 <zhenguo> liusheng: maybe we can just treat it as a []
03:36 <liusheng> zhenguo: but the user may assume there are no manageable servers
03:38 <liusheng> zhenguo: I have left some comments on that patch, maybe you can check again
03:51 <zhenguo> liusheng: ok
05:29 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Return addresses with server API object  https://review.openstack.org/495826
05:59 <wanghao> zhenguo: liusheng: yeah, it should raise an exception if we cannot get the nodes from ironic.
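The behavior wanghao agrees to here, raising instead of silently returning an empty list, might look like the sketch below. The exception class and listing helper are illustrative placeholders, not Mogan's actual code; the ironicclient exception types are real.

    # Sketch only: surface ironic connectivity failures while listing
    # manageable nodes instead of falling back to [] (which, as liusheng
    # notes, reads as "no manageable servers" to the caller).
    from ironicclient import exc as ironic_exc

    class IronicUnavailable(Exception):
        """Raised when the ironic API cannot be reached."""

    def list_manageable_nodes(ironic_client):
        try:
            nodes = ironic_client.node.list(detail=True)
        except (ironic_exc.ConnectionRefused, ironic_exc.Unauthorized) as e:
            raise IronicUnavailable('Unable to list nodes from ironic: %s' % e)
        return [n for n in nodes if n.provision_state == 'manageable']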
06:02 <openstackgerrit> wanghao proposed openstack/mogan master: Manage existing BMs: Part-1  https://review.openstack.org/479660
06:11 <openstackgerrit> wanghao proposed openstack/mogan master: Manage existing BMs: Part-1  https://review.openstack.org/479660
06:16 <openstackgerrit> wanghao proposed openstack/mogan master: Add some missing logs in Mogan  https://review.openstack.org/492917
06:44 <openstackgerrit> Merged openstack/mogan master: Replace server fault_info with fault  https://review.openstack.org/495138
06:46 <openstackgerrit> Merged openstack/mogan master: Add missed testing requirements  https://review.openstack.org/494050
06:56 <openstackgerrit> Merged openstack/mogan master: Remove the duplicate flavor disable check  https://review.openstack.org/494771
07:17 <openstackgerrit> Merged openstack/python-moganclient master: Improve flavor set command to support updating flavor  https://review.openstack.org/486865
07:27 <zhenguo> liusheng: please have a look at this https://review.openstack.org/#/c/493495/ when you get time
07:28 <liusheng> zhenguo: oh, sorry, forgot that :(
07:28 <zhenguo> liusheng: hah
07:28 <liusheng> zhenguo: will review :)
07:29 <zhenguo> liusheng: thanks!
07:36 -openstackstatus- NOTICE: Gerrit is going to be restarted due to slow performance
07:39 <openstackgerrit> Zhenguo Niu proposed openstack/python-moganclient master: Use new keyword *baremetalcompute*  https://review.openstack.org/496138
07:41 -openstackstatus- NOTICE: Gerrit has been restarted successfully
08:30 <openstackgerrit> liusheng proposed openstack/mogan master: WIP: Add affinity_zone field for server object  https://review.openstack.org/495725
08:30 <openstackgerrit> liusheng proposed openstack/mogan master: Add support for scheduler_hints  https://review.openstack.org/463534
08:30 <openstackgerrit> liusheng proposed openstack/mogan master: WIP: use server group in scheduler  https://review.openstack.org/496151
09:02 <openstackgerrit> wanghao proposed openstack/mogan master: Manage existing BMs: Part-1  https://review.openstack.org/479660
09:18 <zhenguo> liusheng: I added a patch to replace the client keyword https://review.openstack.org/496138 :D
09:19 <liusheng> zhenguo: ok, will look at it :)
09:19 <zhenguo> wangaho, litao__, shaohe_feng, Xinran, luyao: please have a look at the above patch
09:19 <zhenguo> wanghao ^^
09:19 <wanghao> zhenguo: sure
09:20 <wanghao> zhenguo: looks great
09:21 <zhenguo> wanghao: hah, I tried it, it works fine, just a bit long
09:23 <wanghao> zhenguo: yeah haha
09:32 <liusheng> zhenguo: seems it introduces much inconvenience that a node can be in different aggregates :(
09:32 <zhenguo> liusheng: but it's convenient for users :D
09:33 <liusheng> zhenguo: maybe, but maybe not supporting that is also reasonable :)
09:34 <zhenguo> liusheng: you mean a node can only be in one aggregate?
09:34 *** wanghao has quit IRC
09:35 *** wanghao has joined #openstack-mogan
09:35 <zhenguo> liusheng: that's not flexible
09:35 *** wanghao has quit IRC
09:35 *** wanghao has joined #openstack-mogan
09:35 <liusheng> zhenguo: yes; currently the aggregates are like classifying tags on nodes, I think, and we can use aggregate metadata to do that.
09:36 <zhenguo> liusheng: who
09:36 *** wanghao has quit IRC
09:36 <zhenguo> liusheng: how
09:36 *** wanghao has joined #openstack-mogan
09:37 <liusheng> zhenguo: admins or operators
09:37 *** wanghao has quit IRC
09:37 *** wanghao has joined #openstack-mogan
09:37 <zhenguo> liusheng: I may need to define several groups like high_mem, FPGA, GPU
09:37 <liusheng> zhenguo: actually, I think the current implementation is also complex for admins
09:38 <zhenguo> liusheng: why?
09:38 *** wanghao has quit IRC
09:38 *** wanghao has joined #openstack-mogan
09:38 <zhenguo> liusheng: why do you think it's complex for admins
09:38 <liusheng> zhenguo: yes, that is reasonable, but a node being in different aggregates makes things more complex
09:38 *** wanghao has quit IRC
09:38 <zhenguo> liusheng: a node can have high_mem and GPU at the same time
09:38 <zhenguo> liusheng: so it must be in different groups
09:39 *** wanghao has joined #openstack-mogan
09:39 <zhenguo> liusheng: or there may be test_team and dev_team aggregates
09:39 *** wanghao has quit IRC
09:39 <zhenguo> liusheng: you may want some nodes to be consumable by both
09:40 <liusheng> zhenguo: yes, we can add multiple tags/metadata on an aggregate
09:40 *** wanghao has joined #openstack-mogan
09:40 <liusheng> zhenguo: I remember in Nova, the aggregate initially indicates a set of nodes with similar properties
09:41 <zhenguo> liusheng: we are the same as nova for admins
09:42 <liusheng> zhenguo: so we can add different tags on an aggregate containing similar nodes
09:42 <liusheng> zhenguo: similar properties
09:42 <zhenguo> liusheng: sure
09:43 <liusheng> zhenguo: so I think there is no ill effect if a node can only be in one aggregate
09:43 <zhenguo> liusheng: that will force admins to know which node is in which aggregate
09:43 <zhenguo> liusheng: of course you can use that way to define aggregates if you like
09:44 <zhenguo> liusheng: but we should not hard-limit that
09:50 <liusheng> zhenguo: maybe I didn't describe it clearly, but for a node in an affinity zone: the node can be in one or more aggregates which have the same metadata key (affinity_zone) and value, and this node can also be in other aggregates without affinity_zone metadata. In this situation, all the nodes in the "other" aggregates cannot be in an affinity zone, or must be in the same affinity zone as the node
09:51 <zhenguo> liusheng: yes
09:52 <liusheng> zhenguo: and it is hard for admins to clarify the relationship; I guess you will say admins don't need to care about this :)
09:52 <zhenguo> liusheng: hah
09:53 <zhenguo> liusheng: admins should define the availability zone and affinity zone first
09:53 <liusheng> zhenguo: once a problem occurs, it may be hard to clarify this relationship
09:53 <zhenguo> liusheng: if admins define things that complex, it's not our problem
09:54 <zhenguo> liusheng: they make the mistake, not us
09:54 <zhenguo> liusheng: but if you let one node be in only one aggregate
09:55 <liusheng> zhenguo: sometimes, when a deployment env has been running a long time, many things may become complex that admins cannot control, hah
09:55 <zhenguo> liusheng: that means you have only one aggregate in a deployment with only one az
09:55 <liusheng> zhenguo: why?
09:56 <zhenguo> liusheng: assume you have defined one az in your deployment
09:56 <zhenguo> liusheng: then you've got an aggregate
09:56 <zhenguo> liusheng: you can't define another aggregate, as a node can only be in one aggregate
09:57 <zhenguo> liusheng: I mean for a deployment with only one az
09:58 <liusheng> zhenguo: sorry, I didn't understand.. :) in this situation why can't we define another aggregate?
09:59 <zhenguo> liusheng: oh, seems it's not
09:59 <liusheng> zhenguo: we can also add the az metadata to another aggregate
10:01 <zhenguo> liusheng: I think the problem is that you make the aggregate definition based on the node, not the group
10:02 <zhenguo> liusheng: as I understand it, a cloud deployer may define groups first, then classify nodes into those groups
10:02 <zhenguo> liusheng: that will make one node being in multiple groups happen, as nodes will have traits that several groups share at the same time
10:06 <zhenguo> liusheng: your proposal is more like placement traits, which are on the rp, but our metadata is on the group
10:08 <liusheng> zhenguo: hmm.. seems yes; if all things are based on traits on rps, that also makes things clear. I think we introduced it for the convenience of adding metadata on a set of nodes (an aggregate) rather than on every node.
10:08 <liusheng> zhenguo: may need to think more :)
10:09 <zhenguo> liusheng: yes, if you want it based on traits/tags, we don't need to introduce the aggregate notion anymore
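The consistency rule liusheng spells out above (one node, one affinity zone, however many aggregates it sits in) is easier to see as code. A minimal sketch with dict-based aggregates; the data shapes are illustrative, not Mogan's actual schema.

    # Sketch only: derive a node's affinity zone from its aggregates'
    # metadata and flag the inconsistent case liusheng warns about.
    def affinity_zone_of(node_uuid, aggregates):
        """Return the node's affinity zone, or None if it has none."""
        zones = {
            agg['metadata']['affinity_zone']
            for agg in aggregates
            if node_uuid in agg['nodes'] and 'affinity_zone' in agg['metadata']
        }
        if len(zones) > 1:
            raise ValueError('node %s spans affinity zones: %s'
                             % (node_uuid, sorted(zones)))
        return zones.pop() if zones else None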
10:10 <zhenguo> liusheng: btw, I got an error with placement
10:11 <zhenguo> liusheng: when updating resources, I got "Unable to refresh my resource provider record"
10:11 <zhenguo> liusheng: and it repeated many times
10:11 <liusheng> zhenguo: I just found the server_group and your aggregate metadata patches look logically complex, and I'm just concerned that may lead to complexity for admins in the future :)
10:12 <liusheng> zhenguo: connection failure?
10:12 <zhenguo> liusheng: no, it just happened after I enrolled a new node
10:13 <zhenguo> liusheng: the code complexity just aims to make things convenient for users
10:13 <liusheng> zhenguo: did the reporting for the enrolled node work correctly?
10:13 <zhenguo> liusheng: no, there's already a record there
10:14 <zhenguo> liusheng: as I deleted the node and then enrolled it again
10:16 <zhenguo> liusheng: it first got this https://github.com/openstack/mogan/blob/master/mogan/scheduler/client/report.py#L334
10:16 <liusheng> zhenguo: may need to debug
10:17 <zhenguo> liusheng: then https://github.com/openstack/mogan/blob/master/mogan/scheduler/client/report.py#L520
10:17 <zhenguo> liusheng: seems it's not added to the _resource_providers cache
10:21 <liusheng> zhenguo: did you check the inventory, resource_providers, allocations, and resource_class tables after deleting the rp?
10:21 <zhenguo> liusheng: it's not deleted
10:22 <zhenguo> liusheng: I just deleted the ironic node, but it seems I added the node back before the resource update happened
10:23 <zhenguo> liusheng: maybe that's because the rp name is the node name instead of the uuid
10:24 <liusheng> zhenguo: it is possible
10:25 <liusheng> zhenguo: sorry, I have to go :)
10:26 <zhenguo> liusheng: ok, bye
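zhenguo's hypothesis above, that a re-enrolled node collides with a stale cache entry when providers are matched by node name rather than uuid, can be sketched as follows. The class and helper are hypothetical stand-ins, not the report.py code linked above.

    # Sketch only: key the provider cache strictly by uuid so a
    # re-enrolled node (new uuid, same name) gets a fresh record instead
    # of colliding with the stale one for its previous incarnation.
    class ReportClientSketch(object):
        def __init__(self):
            self._resource_providers = {}  # uuid -> provider dict

        def _get_provider_from_placement(self, uuid, name):
            # Stand-in for GET /resource_providers/{uuid} plus
            # create-on-404 against the placement API.
            return {'uuid': uuid, 'name': name, 'generation': 0}

        def _ensure_resource_provider(self, uuid, name):
            cached = self._resource_providers.get(uuid)
            if cached is not None:
                return cached
            provider = self._get_provider_from_placement(uuid, name)
            self._resource_providers[uuid] = provider
            return provider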
10:28 * zhenguo brb
11:21 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Clean up server node uuid on task revert  https://review.openstack.org/496207
11:43 <openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Clean up server node uuid on task revert  https://review.openstack.org/496207
11:48 <openstackgerrit> OpenStack Proposal Bot proposed openstack/mogan master: Updated from global requirements  https://review.openstack.org/496211
11:56 <openstackgerrit> OpenStack Proposal Bot proposed openstack/python-moganclient master: Updated from global requirements  https://review.openstack.org/494902
11:58 *** litao__ has quit IRC
15:55 *** afanti has quit IRC
18:41 *** lin_yang has joined #openstack-mogan
