Friday, 2017-07-07

00:33 *** wanghao has joined #openstack-mogan
00:42 *** wanghao has quit IRC
00:43 <zhenguo> morning mogan!
<openstackgerrit> Zhenguo Niu proposed openstack/mogan master: DNM: test tempest gate
<openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Reporting nodes resource to placement service
01:12 <zhenguo> liusheng: wrt aggregates, maybe we need to do it ourselves and leverage placement aggregates' grouping ability
01:12 <liusheng> zhenguo: you mean expose a proxy API?
01:13 <zhenguo> liusheng: in fact, it's not a proxy API
01:13 <zhenguo> liusheng: we manage the node aggregates ourselves and save the aggregate-metadata information in mogan
01:15 <zhenguo> liusheng: we can update the node aggregates spec later to further discuss this
01:17 <liusheng> zhenguo: ok, how about only recording aggregate metadata in Mogan, and still leveraging placement aggregates
01:18 <zhenguo> liusheng: yes, that's just what I mean
01:18 <liusheng> zhenguo: ok, let's listen to others' opinions
01:18 <zhenguo> liusheng: and the aggregate uuid from placement should be saved with the metadata in mogan
01:19 <zhenguo> liusheng: as placement aggregates just provide the grouping ability, we should do some extra things ourselves
01:19 <zhenguo> liusheng: the group definition should be decided by mogan
01:20 <liusheng> zhenguo: hmm, sounds reasonable
01:20 <zhenguo> liusheng: will elaborate the proposal after the placement patches land.
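A rough sketch of the split being discussed: placement only groups resource providers under an aggregate uuid, while mogan keeps the metadata for that uuid itself. All names below are illustrative assumptions, not mogan's actual objects — the real implementation would persist this in mogan's db and get the uuid from the placement API.

```python
import uuid

# Hypothetical in-memory stand-in for mogan's aggregate table.
AGGREGATES = {}

def create_node_aggregate(name, **metadata):
    """Record a placement-style aggregate uuid plus mogan-side metadata."""
    agg_uuid = str(uuid.uuid4())  # in reality this would come from placement
    AGGREGATES[agg_uuid] = {'name': name, 'metadata': metadata}
    return agg_uuid

def aggregates_matching(**wanted):
    """Group definition decided by mogan: filter aggregates by metadata."""
    return [u for u, a in AGGREGATES.items()
            if all(a['metadata'].get(k) == v for k, v in wanted.items())]
```

For example, `create_node_aggregate('rack-1', availability_zone='az1')` would give a uuid that placement uses purely for grouping, while the availability-zone meaning lives only in mogan.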
01:21 <liusheng> zhenguo: ok, thanks, but the gate is still broken, sigh :(
01:22 <zhenguo> liusheng: I'm trying to fix it, but need to wait for a while
01:22 <zhenguo> liusheng: can you run tempest successfully in your local env?
01:22 <liusheng> zhenguo: haven't tried yet
01:23 <zhenguo> liusheng: not sure why tempest concurrency is 4 in our gate
01:23 <liusheng> zhenguo: let me try
01:23 <zhenguo> liusheng: and according to the log, it seems we also run the neutron tempest tests
01:24 <liusheng> zhenguo: which log shows that?
01:24 <zhenguo> liusheng: the console log prints many neutron tempest deprecation warnings
01:24 <zhenguo> liusheng: and it seems to be stuck there
01:24 <zhenguo> liusheng: then we got a timeout
01:25 <liusheng> zhenguo: I think that is unrelated
01:25 <liusheng> zhenguo: let me try locally
01:26 <zhenguo> liusheng: not sure why that's present
01:30 <liusheng> zhenguo: seems some problem occurred locally, will try to debug, but I have to attend an internal meeting now
01:30 <zhenguo> liusheng: ok
01:34 *** liujiong has joined #openstack-mogan
<openstackgerrit> Zhenguo Niu proposed openstack/mogan master: DNM: test tempest gate
<openstackgerrit> Zhenguo Niu proposed openstack/mogan master: Reporting nodes resource to placement service
01:55 <zhenguo> liusheng: my try doesn't work :(
01:55 <liusheng> zhenguo: I guessed so
01:55 <liusheng> zhenguo: hah
01:55 <zhenguo> liusheng: hah
01:56 <zhenguo> liusheng: hope your local test can give some clues
01:56 <liusheng> zhenguo: I have found the killer, but I don't know why
01:56 <liusheng> zhenguo: it's because of this change; when I check out this change, the problem occurs
01:57 <zhenguo> liusheng: let me check it
01:57 <liusheng> zhenguo: when I check out the last change before this one, tempest can pass
01:57 <liusheng> zhenguo: but the condition of this change is False, so I don't know why
01:58 <liusheng> zhenguo: this change shouldn't have any effect
01:58 <zhenguo> liusheng: can you debug locally to figure out why?
01:58 <zhenguo> liusheng: really not quite familiar with tempest :(
02:00 <liusheng> zhenguo: not yet; this change adds a parameter, but the parameter is False by default, so in theory this change has no effect, it is strange
02:01 <zhenguo> liusheng: yes, I find if we don't pass save-state, there's just a pass
02:01 <liusheng> zhenguo: yes
02:02 <zhenguo> liusheng: when you run tempest locally, are there any more logs/clues?
02:02 <liusheng> zhenguo: no, it just gets stuck
02:03 <zhenguo> liusheng: really weird
02:03 <liusheng> zhenguo: will try to debug more
02:03 <zhenguo> liusheng: thanks
02:05 *** litao__ has joined #openstack-mogan
02:06 <zhenguo> liusheng: what would happen if you add --save-state to the tempest command
03:06 <litao__> zhenguo: currently, must we set the resource class on ironic nodes?
03:21 <zhenguo> litao__: yes
03:22 <zhenguo> litao__: if there's no resource class it will never be scheduled
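A minimal sketch of why an unset resource class makes a node unschedulable: matching is by exact resource-class name, so a node whose resource class was never set matches no flavor at all (the field names here are illustrative):

```python
def schedulable_nodes(nodes, flavor_resource_class):
    """Keep only nodes whose resource_class matches the flavor's requirement."""
    return [n for n in nodes
            if n.get('resource_class') == flavor_resource_class]

nodes = [
    {'uuid': 'node-1', 'resource_class': 'baremetal.gold'},
    {'uuid': 'node-2', 'resource_class': None},  # never scheduled
]
```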
03:23 <zhenguo> liusheng: how's the debugging going? any new clues?
03:23 <liusheng> zhenguo: something wrong with eventlet, but I am not very familiar with it, still debugging
03:24 <zhenguo> liusheng: eventlet?
03:24 <zhenguo> liusheng: because of the concurrency?
03:24 <liusheng> zhenguo: if I comment out "from tempest.cmd import cleanup_service" and "from tempest.common import credentials_factory as credentials" in tempest/cmd/run.py
03:24 <liusheng> zhenguo: everything is ok
03:25 <liusheng> zhenguo: seems to be because of eventlet's socket patching
03:25 <zhenguo> liusheng: why do other projects run tempest with no such issue..
03:26 <liusheng> zhenguo: ... I have no idea
03:26 <zhenguo> liusheng: really a tricky one
<openstackgerrit> liusheng proposed openstack/mogan master: DNM: try gate
03:47 <zhenguo> liusheng: is that related to tempest concurrency?
03:47 <zhenguo> liusheng: what command did you run locally?
03:48 <liusheng> zhenguo: tempest run -t --regex "^mogan\.tests\.tempest\.api\.test_flavors\.BaremetalComputeAPITest\.test_flavor_show"
03:49 <zhenguo> liusheng: how does this one do: 'tempest run --regex ^mogan\. --concurrency=2'
03:49 <liusheng> zhenguo: it's not related to concurrency
03:49 <zhenguo> liusheng: copied from the console log
03:49 <liusheng> zhenguo: the -t parameter has disabled the concurrency
03:50 <zhenguo> liusheng: I find ironic also runs such a command in its console log, but has no problems with it
03:51 <liusheng> zhenguo: same situation running with --concurrency=2
03:52 <zhenguo> liusheng: seems we are quite familiar with aodh,
03:52 <zhenguo> liusheng: but there are no problems with it
03:53 <liusheng> zhenguo: yes, many projects run tempest like this
03:54 <zhenguo> liusheng: you can try another regex in your local env to see
03:54 <liusheng> zhenguo: seems not related to the regex
03:55 <zhenguo> liusheng: you mean tempest itself?
03:55 <zhenguo> liusheng: I mean run regex=aodh
03:55 <zhenguo> liusheng: or ironic
03:56 * zhenguo brb
04:52 <zhenguo> liusheng: I'm on your vm now :D
04:52 *** liujiong is now known as liujiong|away
04:52 <liusheng> zhenguo: hah
04:52 <liusheng> zhenguo: I found a way to fix it
04:52 <zhenguo> liusheng: how
04:52 <liusheng> zhenguo: but it needs a change to tempest
04:52 <zhenguo> liusheng: I just ran the command, it's ok
04:53 <liusheng> zhenguo: yes, I just modified tempest
04:53 <liusheng> zhenguo: import eventlet
04:53 <liusheng> zhenguo: add the above two lines in tempest/cmd/run.py
04:53 <liusheng> zhenguo: that works
04:53 <liusheng> zhenguo: I probably got the reason
04:54 <zhenguo> liusheng: maybe we can add that to mogan's tempest plugin?
04:54 <liusheng> zhenguo: it's just because some libs weren't turned into greenthread-aware ones; tempest will start a thread to run the tests, and that thread will never be scheduled to run
04:55 <liusheng> zhenguo: let me try
04:55 <zhenguo> liusheng: I also encountered that problem at the beginning of mogan's creation
04:56 <liusheng> zhenguo: it works, just remove the "os=False" in mogan/__init__.py
04:56 <zhenguo> liusheng: hah
04:57 <zhenguo> liusheng: awesome
04:57 <liusheng> zhenguo: lol
04:57 <zhenguo> liusheng: not sure what os means there,
04:57 <zhenguo> liusheng: but anyway it fixes our problems
04:58 <liusheng> zhenguo: it seems to mean don't turn the os lib green
04:59 <zhenguo> liusheng: is there no side effect if we just remove it?
04:59 <zhenguo> liusheng: seems all other projects have such an os=False
05:00 <liusheng> zhenguo: really, let me check
05:01 <zhenguo> liusheng: aha, maybe not
05:01 <zhenguo> liusheng: just because we add that in the root dir of mogan
05:01 <zhenguo> liusheng: seems others add that in a sub dir
05:01 <liusheng> zhenguo: yes, I see, we just need to move that to the cmd/ dir
05:02 <zhenguo> liusheng: yes
05:02 <liusheng> zhenguo: let me try
05:02 <zhenguo> liusheng: ok, thanks
05:04 <zhenguo> liusheng: it's also needed in the tests dir
05:06 <liusheng> zhenguo: ok
05:06 <liusheng> zhenguo: in the tests/ dir or tests/unit/?
05:07 <zhenguo> liusheng: if in tests/ it's the same as before I think
05:07 <liusheng> zhenguo: ok, let me try
05:10 <zhenguo> liusheng: maybe just in cmd/, tests/unit, tests/functional?
05:11 <liusheng> zhenguo: yes, just tested, if we put it in tests/, it still cannot work
05:11 <zhenguo> liusheng: yes, as it will be inherited from there
<openstackgerrit> liusheng proposed openstack/mogan master: Fix the tempest gate by moving the monkey patch statements
06:17 <litao__> liusheng: if you removed compute_nodes from mogan, how can I get the compute node objects?
06:17 <liusheng> litao__: why do you need to get compute node objects?
06:19 *** wanghao has quit IRC
06:24 <litao__> I need the baremetal node information
06:24 *** wanghao has joined #openstack-mogan
06:24 <litao__> liusheng: should I call the ironic api to get baremetal nodes?
06:25 *** shaohe_feng has joined #openstack-mogan
06:26 <liusheng> litao__: yes, you can query nodes from the driver
06:26 *** liujiong|away is now known as liujiong
06:27 *** wanghao has quit IRC
06:27 <litao__> liusheng: that's all
06:27 *** wanghao has joined #openstack-mogan
06:30 <litao__> liusheng: is the resource class the flavor name in mogan?
06:30 <liusheng> litao__: no, it is an attribute of the flavor
06:31 <litao__> liusheng: the resource attribute in the flavor?
06:32 *** wanghao has quit IRC
06:32 *** wanghao has joined #openstack-mogan
06:34 *** wanghao has quit IRC
06:34 <liusheng> litao__: yes
06:35 *** wanghao has joined #openstack-mogan
06:38 *** wanghao has quit IRC
06:38 *** wanghao has joined #openstack-mogan
06:39 *** wanghao has quit IRC
06:40 *** wanghao has joined #openstack-mogan
06:40 <openstack> Launchpad bug 1575935 in Ironic "Rebuild should also accept a configdrive" [Wishlist,Triaged]
<openstackgerrit> Merged openstack/mogan master: Fix the tempest gate by moving the monkey patch statements
06:41 <zhenguo> our rebuild seems unable to support user data due to ironic's limitation
06:42 <zhenguo> liusheng: it got landed, hah
06:43 <liusheng> zhenguo: hah, thanks
06:43 <zhenguo> liusheng: hah
06:43 *** wanghao has quit IRC
06:43 <liusheng> zhenguo: do you think we should support running the api server under uwsgi?
06:44 <zhenguo> liusheng: if all other projects change to use uwsgi, we should
06:44 *** wanghao has joined #openstack-mogan
06:44 <liusheng> zhenguo: seems it is the trend now, hah
06:44 <zhenguo> liusheng: if so, it's ok
06:47 <zhenguo> hi all, please recheck your patches if they're affected by the tempest issue we just fixed, sorry for the inconvenience
06:48 <zhenguo> liusheng: should rebuilding support specifying an image?
06:48 <liusheng> zhenguo: I remember in nova, image is a required parameter
06:48 *** wanghao has quit IRC
06:48 <zhenguo> liusheng: lol
06:49 <zhenguo> liusheng: even if using the same image as before?
06:49 *** wanghao has joined #openstack-mogan
06:49 <liusheng> zhenguo: yes
06:49 <zhenguo> liusheng: ok
06:49 <zhenguo> liusheng: our rebuild API is really simple now, can't specify anything
06:54 *** wanghao has quit IRC
06:54 *** wanghao has joined #openstack-mogan
06:55 <liusheng> zhenguo: yes, so I am not sure if ironic can support specifying an image?
06:56 <zhenguo> liusheng: ironic just needs to update the image info in the node's instance info when we set the provision state to rebuild, but I never tested it
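A sketch of the flow zhenguo describes — update the image on the node, then trigger ironic's rebuild provision verb. The calls follow python-ironicclient's style, and the exact instance_info key is an assumption (untested, as noted in the discussion):

```python
def rebuild_with_image(ironic, node_uuid, image_uuid):
    """Point the node at the new image, then ask ironic to rebuild it."""
    # replace the image reference in the node's instance_info
    ironic.node.update(node_uuid, [{'op': 'replace',
                                    'path': '/instance_info/image_source',
                                    'value': image_uuid}])
    # kick off the rebuild through the provision state machine
    ironic.node.set_provision_state(node_uuid, 'rebuild')
```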
06:58 <zhenguo> liusheng: the second placement patch needs to be updated
06:59 <zhenguo> liusheng: I agree that we should call scheduler.queryclient.select_destinations instead of just the rpc api in the mogan engine
06:59 *** wanghao has quit IRC
07:00 *** wanghao has joined #openstack-mogan
07:00 <zhenguo> liusheng: I'm wondering why we don't call it mogan conductor, lol
07:01 *** liujiong has quit IRC
07:05 *** wanghao has quit IRC
07:06 *** wanghao has joined #openstack-mogan
07:08 <liusheng> zhenguo: hah, Mogan doesn't have a conductor service
07:09 <zhenguo> liusheng: hah
<openstackgerrit> Merged openstack/mogan master: Reporting nodes resource to placement service
07:53 *** wanghao has quit IRC
07:54 *** wanghao has joined #openstack-mogan
07:55 <zhenguo> liusheng: hi, do you know why I can't connect to our physical env with VPN?
07:56 <liusheng> zhenguo: yes, I also cannot
07:59 <zhenguo> liusheng: and our API doc can't be accessed via the public IP
08:00 *** wanghao has quit IRC
08:01 *** wanghao has joined #openstack-mogan
08:05 *** wanghao has quit IRC
08:06 *** wanghao has joined #openstack-mogan
08:08 <liusheng> zhenguo: not sure why we cannot access the physical env
08:15 *** wanghao has quit IRC
08:15 *** wanghao has joined #openstack-mogan
08:20 *** wanghao has quit IRC
08:21 *** wanghao has joined #openstack-mogan
08:42 <liusheng> zhenguo: seems the input parameters of scheduler_client.select_destinations are different from the rpc api's; maybe we can keep using the rpcapi and improve it in the future?
08:44 <zhenguo> liusheng: I find the query client's select_destinations also calls the rpc api with the same parameters it got
08:44 <zhenguo> liusheng: you mean we don't have a spec obj?
08:45 <liusheng> zhenguo: yes, in Nova there is a request spec, maybe we can also add an object definition, and the 3rd parameter is instance_uuids
08:45 <liusheng> zhenguo: currently, it is filter_properties
08:46 <zhenguo> liusheng: maybe we can just change the query client's select_destinations
08:46 <zhenguo> liusheng: and let the engine manager call the client
08:46 <liusheng> zhenguo: seems Nova tends to get rid of filter_properties and use request_spec
08:46 <liusheng> zhenguo: ok
08:47 <zhenguo> liusheng: we can improve it in follow-up patches
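The shape being proposed — the engine calls a query client, which wraps the rpc api and can later grow a request-spec object — might look like this. Names are assumptions modeled on nova's scheduler client, not mogan's actual code:

```python
class SchedulerQueryClient:
    """Thin facade over the scheduler rpc api, as discussed above."""

    def __init__(self, rpcapi):
        self.rpcapi = rpcapi

    def select_destinations(self, context, request_spec, filter_properties):
        # For now just forward the same parameters it got; a RequestSpec
        # object plus instance_uuids can replace filter_properties later
        # without the engine manager changing its call site.
        return self.rpcapi.select_destinations(
            context, request_spec, filter_properties)
```

The point of the facade is exactly the follow-up flexibility mentioned in the discussion: the engine keeps one stable entry point while the parameters evolve behind it.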
08:51 <Xinran> why can't I see liusheng's words? :(
08:52 <liusheng> Xinran: really? lol
08:52 <zhenguo> Xinran: liusheng just pinged you, still can't get it?
08:53 <Xinran> liusheng, no..
08:53 <zhenguo> really weird
08:53 * liusheng guesses there is a blacklist...
08:53 <zhenguo> hah, yes, Xinran must have added you to the blacklist
08:54 <liusheng> zhenguo: I guess so :D
08:54 <Xinran> zhenguo, liusheng ......
08:54 <Xinran> liusheng, sorry ..
08:55 * zhenguo brb
08:55 <Xinran> liusheng, can you see me?
08:56 <liusheng> Xinran: sure, I can see you
<openstackgerrit> liusheng proposed openstack/mogan master: Clean the compute_node and compute_port objects and db interfaces
<openstackgerrit> liusheng proposed openstack/mogan master: Get rid of listing availability zone api and clean some legacy code
<openstackgerrit> liusheng proposed openstack/mogan master: Clean the methods about updating node resources to Mogan db
<openstackgerrit> liusheng proposed openstack/mogan master: Get rid of node listing api of Mogan
<openstackgerrit> liusheng proposed openstack/mogan master: Consume nodes resource in placement
<openstackgerrit> liusheng proposed openstack/mogan master: Refactor the scheduler to use placement service
09:06 <Xinran> liusheng, ping
09:07 <liusheng> Xinran: pong
09:07 <liusheng> Xinran: so you still cannot see me?
09:07 <Xinran> liusheng, yes I can see you now
09:07 <Xinran> liusheng, hah
09:08 <liusheng> Xinran: hah
09:08 <Xinran> liusheng, zhenguo said you pinged me?
09:11 <liusheng> Xinran: just because you said you cannot see me, I tested
09:11 <Xinran> liusheng, hah okay
09:12 <Xinran> liusheng, thanks
09:12 <liusheng> Xinran: np :D, is your tempest ok now?
09:14 <Xinran> liusheng, yes
09:15 <Xinran> liusheng, I use the net_id directly instead of using the get_net_id() function
09:16 <Xinran> liusheng, it doesn't work when service=compute
09:16 <liusheng> Xinran: cool
09:16 <liusheng> Xinran: you didn't install nova?
09:17 <Xinran> liusheng, hah, no, I think it is because of that
09:18 <liusheng> Xinran: I think so, hah
09:18 <Xinran> liusheng, tempest assumes by default that nova is running in the env
09:21 <zhenguo> liusheng: btw, do we have a tempest test for multi-creation?
09:22 <liusheng> zhenguo: no
09:22 <liusheng> Xinran: yes
09:22 <zhenguo> liusheng: can we add one for that?
09:22 <zhenguo> liusheng: I'm a bit concerned our changes will break that
09:22 <liusheng> zhenguo: since for now server creation is time consuming, we avoid creating many servers in the tempest tests
09:23 <zhenguo> liusheng: for multi-creation, the time should be the same as for a single one
09:23 <zhenguo> liusheng: as it's parallel
09:26 <liusheng> zhenguo: maybe we can try, but another reason: since tests may run in parallel, we'd better avoid creating servers repeatedly, as we can only create at most 3 servers
09:26 <zhenguo> liusheng: yes, just creating 2 for that seems enough
<openstackgerrit> Tao Li proposed openstack/mogan master: [WIP] Manage existing BMs: Part-2
09:28 <liusheng> zhenguo: will try
09:28 <zhenguo> liusheng: thanks
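A sketch of the request such a multi-create case could exercise. The body fields follow the nova-style min_count/max_count convention and are an assumption about mogan's API, not its confirmed schema:

```python
def multi_create_body(name, flavor_uuid, image_uuid, count):
    """Build a create-servers request asking for `count` servers at once."""
    return {'name': name,
            'flavor_uuid': flavor_uuid,
            'image_uuid': image_uuid,
            'min_count': count,
            'max_count': count}

# Keep the footprint small in the gate: two servers is enough to exercise
# the parallel creation path without exhausting the 3-node pool.
body = multi_create_body('tempest-multi', 'flavor-1', 'image-1', 2)
```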
09:31 <zhenguo> liusheng: the 2nd and 3rd placement patches seem to need to land at the same time, otherwise it will schedule to the same rp again
09:31 <liusheng> zhenguo: hmm, seems so
09:31 <zhenguo> liusheng: hah, will make sure to merge them together
<openstackgerrit> Xinran WANG proposed openstack/mogan master: Detach interface for server
09:48 *** openstackgerrit has quit IRC
09:51 <zhenguo> zhangyang: in what scenario will you use server groups?
09:51 <zhenguo> zhangyang: are affinity and anti-affinity for racks here?
09:58 <zhangyang> zhenguo: affinity and anti-affinity for different ceph backends or rooms, perhaps.
09:58 <zhangyang> zhenguo: I'll ask liudong about the server group thing.
09:59 <zhenguo> zhangyang: so you need a server group with affinity and anti-affinity policies like nova's
09:59 <zhenguo> zhangyang: but what would the filter look like?
10:00 <zhangyang> zhenguo: I thought so, but I read the nova server group code. It's not working the way I thought...
10:01 <zhenguo> zhangyang: seems in nova it just keeps the instances in, or not in, the same host
10:01 <zhenguo> zhangyang: but we can expand it, we need you to provide more use cases
10:02 <zhangyang> zhenguo: yes, they pass a same/different host parameter and an instance uuid list to build affinity groups. am I right?
10:02 *** wanghao has quit IRC
10:02 <zhenguo> zhangyang: they just create a server group with policy affinity or anti-affinity
10:03 <zhenguo> zhangyang: when you create an instance with --scheduler-hints group_id, the instance uuid will be added as a member of that server group
10:03 <zhenguo> zhangyang: when you create another instance with the group_id, it will check the member list for the scheduling policies
10:04 <zhenguo> zhangyang: so, I think nova's server group can't meet your requirements
10:05 <zhenguo> zhangyang: maybe server group policies should also be based on metadata, like aggregates.
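The member-list check zhenguo describes can be sketched as a host filter; extending it to metadata-based groups (racks, rooms, ceph backends) would swap the host comparison for an aggregate/metadata lookup. All names here are illustrative, not nova's or mogan's actual filter API:

```python
def host_passes(host, policy, group_members, host_of):
    """Nova-style server group check: compare the candidate host against
    the hosts already used by the group's members."""
    used = {host_of[m] for m in group_members if m in host_of}
    if policy == 'affinity':
        # land on the host the group already uses (or anywhere if empty)
        return not used or host in used
    if policy == 'anti-affinity':
        # never share a host with another member of the group
        return host not in used
    return True
```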
10:06 <zhenguo> zhangyang: I have to go, ttyl
10:06 * zhenguo away
12:04 *** litao__ has quit IRC

Generated by irclog2html 2.15.3 by Marius Gedminas.