Monday, 2017-06-26

00:34 *** wanghao has joined #openstack-mogan
00:36 *** wanghao has quit IRC
00:37 *** wanghao has joined #openstack-mogan
00:45 *** wanghao_ has joined #openstack-mogan
00:48 *** wanghao has quit IRC
01:11 *** liujiong has joined #openstack-mogan
01:15 <wanghao_> zhenguo: ping
01:15 <zhenguo> wanghao_: pong
01:15 <wanghao_> zhenguo: did you see liusheng's comments in the manage spec?
01:16 <zhenguo> wanghao_: sorry, I'm in an internal meeting, haven't got time to review that
01:17 <wanghao_> zhenguo: okay, never mind.
01:18 <wanghao_> zhenguo: take your time, it's not a rush
01:22 <wanghao_> liusheng: ping
01:23 *** liusheng has quit IRC
<openstackgerrit> wanghao proposed openstack/mogan master: Using assertFalse/True(A) instead of assertEqual(False/True, A)
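The cleanup in that patch follows the standard Python unittest idiom; a minimal sketch (the test class and variable below are illustrative, not actual Mogan code):

```python
import unittest


class TestAssertStyle(unittest.TestCase):
    """Illustrative only; shows the assertion style the patch moves to."""

    def test_power_state_flag(self):
        is_powered_on = True
        # Preferred: reads directly and gives a clearer failure message
        self.assertTrue(is_powered_on)
        self.assertFalse(not is_powered_on)
        # Discouraged style the patch replaces:
        #   self.assertEqual(True, is_powered_on)
```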
02:24 *** liusheng has joined #openstack-mogan
02:30 <liusheng> wanghao_: pong, sorry, just restarted my system
02:48 <zhenguo> liusheng: hi, please remember to update the etherpad
02:49 <liusheng> zhenguo: ok
02:49 <zhenguo> liusheng: to help wanghao collect the report
02:49 <liusheng> zhenguo: really appreciate wanghao doing that
02:50 <zhenguo> liusheng: sure
02:50 <liusheng> zhenguo: are you in the meeting? :D
02:50 <zhenguo> liusheng: yes
02:50 <zhenguo> liusheng: hah
02:50 <liusheng> zhenguo: I just want to refactor our scheduler, maybe a big change, wdyt?
02:50 <liusheng> zhenguo: lol
02:50 <zhenguo> liusheng: yes,
02:51 <zhenguo> liusheng: wrt the placement patch, how about just making one for reporting
02:51 <zhenguo> liusheng: which can get reviewed, and we can land it when it's ready
02:51 <liusheng> zhenguo: yes, sure, will split them into several patches
02:52 <zhenguo> liusheng: ok, thanks, we can quickly move to placement
02:52 <zhenguo> liusheng: maybe by this week, lol
02:52 <liusheng> zhenguo: maybe I will push two basic PoC patches first to verify the whole function
02:52 <liusheng> zhenguo: yes, maybe
02:52 <zhenguo> liusheng: sure
02:53 <zhenguo> liusheng: but the reporting patch can land first
02:53 <liusheng> zhenguo: yes
02:53 <zhenguo> liusheng: I will help review that, and maybe also update the patch
02:54 <liusheng> zhenguo: yes, please go ahead and update it if you want to make any change :D
02:55 <zhenguo> liusheng: hah, maybe after the splitting, hah
02:55 <liusheng> zhenguo: yeah
03:06 <wanghao_> zhenguo, liusheng: hi
03:06 <liusheng> wanghao_: hi
03:07 <wanghao_> liusheng: I want to talk about your comment in the manage spec.
03:07 <wanghao_> liusheng: I get that it may impact the 'rebuild' operation if we didn't specify network info in the API.
03:08 <wanghao_> liusheng: but I didn't get the security risk you mentioned.
03:08 <wanghao_> liusheng: could you explain it more?
03:10 <liusheng> wanghao_: just my guess, I thought that before we adopt a baremetal server, the server may have its own networks, and after we adopt it we can attach tenant network interfaces, so the server may end up with access to both a tenant network and another unknown network
03:11 <liusheng> zhenguo, wanghao_: do you think this situation has any risk
03:13 <wanghao_> liusheng: I think it's possible, but not sure it has any risk; even if we specify network_info in the API it still could happen, since the bm server might not change its network connection.
03:13 <wanghao_> liusheng: just create a virtual network in Neutron to map the physical network.
03:14 <wanghao_> liusheng: that will not change the physical connection that existed before managing it.
03:14 <zhenguo> liusheng, wanghao_: yes, instead of an empty server network info field, we'd better let operators map the existing physical networks to Neutron.
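A sketch of what that operator-side mapping might look like with the standard OpenStack CLI; the network names, physnet label, VLAN ID, and subnet range here are placeholder assumptions, not values from the discussion:

```shell
# Map an existing physical VLAN segment into Neutron as a provider network
# (physnet1 and VLAN 100 stand in for the operator's real fabric details).
openstack network create adopted-net \
    --provider-network-type vlan \
    --provider-physical-network physnet1 \
    --provider-segment 100

# Give it a subnet matching the addressing already in use on that segment.
openstack subnet create adopted-subnet \
    --network adopted-net \
    --subnet-range 192.0.2.0/24
```

This keeps the physical wiring untouched while letting Neutron (and so Mogan) know about the server's existing connectivity.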
03:15 *** litao__ has joined #openstack-mogan
03:17 <wanghao_> so operators should map the network before invoking the manage API in mogan. But if they didn't do it, we could still let users call the mogan API to manage it; it's just that this server couldn't be 'rebuilt' or have a new interface attached, I think.
03:18 <wanghao_> since this server will not have any network info field.
03:20 <liusheng> zhenguo, wanghao_: just talked with zhaobo; we think in this situation the server may access both a tenant network and another network (uncontrolled by OpenStack). If we have other servers in the tenant network, there is a risk that those servers can also access the unknown network, unless the administrator knows about and can accept the risk.
<openstackgerrit> Tao Li proposed openstack/mogan master: Fix the stunk in *ing in powering servers.
03:21 <zhenguo> yes, we don't want users to be aware whether a server was adopted or created after it lands in mogan
03:21 <liusheng> wanghao_: you mean also create a provider network for the network the server already has?
03:22 <zhenguo> so maybe only servers with networks in neutron can be managed?
03:22 <zhenguo> we can make the process simple now
03:22 <zhenguo> if we really want to adopt a server, the operators should be fully prepared before that
03:22 <liusheng> zhenguo: yes, if so, that will have no risk
03:23 <zhenguo> as for baremetals, the servers' physical networks can easily be got into mogan, right?
03:23 <zhenguo> oh, we may need to get the vlan id
03:24 <wanghao_> liusheng: yes
03:24 <zhenguo> as a first step, we can make the requirements for manageable servers strict
03:25 <wanghao_> yes, we need a checklist for operators.
03:25 <zhenguo> yes, operators should do more things; then for mogan the process can be easier
03:25 <wanghao_> zhenguo: so, we don't support empty network info in the API?
03:25 <zhenguo> wanghao_: yes, I think so
03:26 <zhenguo> wanghao_: if there are real customer requirements, we can do it later
03:26 <liusheng> so it's better to add warning notes in the docs for operators of this feature
03:26 <wanghao_> zhenguo: got it, will update it.
03:26 <wanghao_> liusheng: yes, I will write it in the spec too.
03:26 <liusheng> wanghao_: thanks
03:27 <wanghao_> we don't need to figure the list out now, but at least we should do it after merging the code.
03:27 <zhenguo> wanghao_, liusheng: I will be in Shenzhen for an internal meeting the whole week, maybe won't have enough time for reviewing; you guys can discuss this more
03:27 <wanghao_> zhenguo: sure
03:27 <liusheng> zhenguo: have a great trip :D
03:28 <wanghao_> zhenguo: you're already there, aren't you?
03:28 <zhenguo> and the review hour, you can prepare a list before that, thanks
03:28 <zhenguo> wanghao_: yes
03:28 <wanghao_> zhenguo: cool
03:28 <wanghao_> zhenguo: you can eat some shaguozhou (casserole porridge)
03:29 <liusheng> casserole porridge
<openstackgerrit> Zhenguo Niu proposed openstack/mogan-specs master: Track resources using Placement
06:48 <liusheng> zhenguo: I found that if we use placement, we don't need filters and cannot use weighers; the scheduler will be very very simple :(
07:00 <zhenguo> liusheng: yes
07:00 <zhenguo> liusheng: but they think atomic resources should just be scheduled that simply :D
07:01 <liusheng> zhenguo: hah, if so, seems I will remove most of the scheduler's code :D
07:01 <zhenguo> liusheng: sure, just leave two methods, report and query, lol
07:01 <liusheng> zhenguo: ... ok
07:05 <zhenguo> liusheng: I just thought, should the placement client be under the scheduler?
07:05 <liusheng> zhenguo: for now, I put it under the scheduler
07:06 <zhenguo> liusheng: yes, we use placement for resource tracking, but maybe not just for scheduling?
07:06 <zhenguo> liusheng: I plan to add physical port resources there as well, but not used for scheduling
07:06 <zhenguo> liusheng: placement is somewhere we track resources, not only for scheduling?
07:07 <liusheng> zhenguo: if we don't put it under the scheduler, why do we need a scheduler service...
07:07 <liusheng> zhenguo: just very very smallllll
07:07 <liusheng> zhenguo: hah
07:07 <zhenguo> liusheng: hah
07:08 <zhenguo> liusheng: not sure if a separate service can make scheduling faster in a parallel model?
07:08 <liusheng> zhenguo: maybe we only need mogan-engine to call the placement api, and don't need the scheduler service, hah
07:09 <liusheng> zhenguo: but actually, it's hard for our scheduler to support multiple workers
07:09 <zhenguo> liusheng: seems yes
07:09 <liusheng> zhenguo: and we have a synchronized lock
07:10 <zhenguo> liusheng: yes, that's the current pain point of our scheduler
07:11 <zhenguo> liusheng: if there's no lock, I don't know how to keep it working
07:11 <zhenguo> liusheng: many parallel requests may be scheduled to one node
07:12 <liusheng> zhenguo: and if we switch to placement, we almost only have a resources query filter; it will be very fast, we may not have a performance burden
07:12 <liusheng> zhenguo: only an api call to the placement api
07:12 <zhenguo> liusheng: I think so, maybe after moving to placement we can just get rid of the scheduler
07:13 <zhenguo> liusheng: currently not, we can just focus on the placement stuff now, wdyt?
07:13 <liusheng> zhenguo: seems it is part of the work of switching to placement, hah
07:13 <zhenguo> liusheng: getting rid of the scheduler?
07:13 <zhenguo> liusheng: we can just keep the service there
07:14 <liusheng> zhenguo: hah, yes
07:14 <zhenguo> liusheng: with the two methods
07:14 <liusheng> zhenguo: ok, will try
07:14 <zhenguo> liusheng: thanks, seems we will get there soon.
07:14 <liusheng> zhenguo: hah, yes
07:15 <liusheng> zhenguo: do you think we should use weighers now?
07:15 <zhenguo> liusheng: no, for scheduling stuff we only use one query for the resource class, that's all
07:15 <liusheng> zhenguo: ok
07:15 <zhenguo> liusheng: aha, and the aggregates/traits we will add later
07:16 <zhenguo> liusheng: that's enough for scheduling atomic resources
07:16 <liusheng> zhenguo: ok, so things become clear
07:16 <liusheng> zhenguo: ok
07:16 <zhenguo> liusheng: ok
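The "query" half discussed above boils down to one placement API call, GET /allocation_candidates?resources=&lt;class&gt;:1, plus a pick among the returned providers (random is enough, since atomic baremetal resources need no weighing). A rough sketch; the function names and the trimmed response shape are illustrative assumptions, not the actual patch:

```python
import random


def build_candidates_url(endpoint, resource_class, amount=1):
    """Build the GET /allocation_candidates URL for one resource class."""
    return "%s/allocation_candidates?resources=%s:%d" % (
        endpoint, resource_class, amount)


def pick_provider(candidates_body):
    """Pick one resource provider UUID from an allocation_candidates
    response body; random choice, since every baremetal node either
    fits the whole request or doesn't."""
    providers = list(candidates_body.get("provider_summaries", {}))
    if not providers:
        return None  # NoValidNode: nothing satisfies the resource class
    return random.choice(providers)


# Response shaped like the placement API's answer (heavily trimmed):
sample = {
    "provider_summaries": {
        "30742363-f65e-4012-a60a-43e0bec38f0e": {
            "resources": {
                "CUSTOM_BAREMETAL_SMALL": {"capacity": 1, "used": 0},
            },
        },
    },
}
```

With something like this, the scheduler service really does shrink to "report" (inventory updates) and "query" (the call above).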
<openstackgerrit> liusheng proposed openstack/mogan master: WIP: import placement service
<openstackgerrit> liusheng proposed openstack/mogan master: WIP: Refactor the scheduler to use placement service
<openstackgerrit> Tao Li proposed openstack/mogan master: Fix reverting the network when plug vif failed
<openstackgerrit> Tao Li proposed openstack/mogan master: Fix reverting the network when plug vif failed
<openstackgerrit> liusheng proposed openstack/mogan master: WIP: Refactor the scheduler to use placement service
09:08 <liusheng> zhenguo: do you think we should change the resources field in the default flavor to "CUSTOM_SMALL"?
09:08 <liusheng> zhenguo: or just convert it in the code logic
09:15 <zhenguo> liusheng: not sure if we need to convert both the resource class and the flavor resources, but they should be consistent
09:16 <zhenguo> liusheng: as operators define the node's resource class and the flavor resources
09:16 <zhenguo> liusheng: CUSTOM is only a concept of placement
09:16 <zhenguo> liusheng: we can hide that part
09:16 <liusheng> zhenguo: we must convert the resource class when reporting, since Mogan has the constraint
09:17 <liusheng> zhenguo: Mogan can only accept names with the "CUSTOM_" prefix
09:17 <zhenguo> liusheng: yes, so if the node's resource is small, then the flavor should be small
09:17 <liusheng> zhenguo: s/Mogan/Placement
09:17 <liusheng> zhenguo: ok, so we convert it in code
09:17 <zhenguo> liusheng: yes, so we need the same converting logic for both the node's resource class and the flavor resources
09:18 <liusheng> zhenguo: ok
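A sketch of that shared converting logic, assuming placement's rule that custom resource class names match CUSTOM_[A-Z0-9_]+; the function name is illustrative, not the actual Mogan helper:

```python
import re

CUSTOM_PREFIX = "CUSTOM_"


def normalize_resource_class(name):
    """Convert an operator-facing name (a node's resource_class or a
    flavor's resources key, e.g. 'small') into the form placement
    accepts: upper case, [A-Z0-9_] only, with a CUSTOM_ prefix."""
    norm = re.sub(r"[^A-Z0-9_]", "_", name.upper())
    if not norm.startswith(CUSTOM_PREFIX):
        norm = CUSTOM_PREFIX + norm
    return norm
```

Applying the same function to both the node's resource class when reporting inventory and the flavor's resources when querying keeps the two sides consistent while hiding the CUSTOM_ detail from operators.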
09:19 <zhenguo> liusheng: maybe we should change the current flavor name from 'small' to 'pretend'
09:20 <liusheng> zhenguo: why this word?
09:22 <zhenguo> liusheng: to indicate that the vm is a pretend baremetal, lol
09:23 <liusheng> zhenguo: hah
09:23 <zhenguo> liusheng: aha, sorry, not the name, it's the resource class
09:23 <liusheng> zhenguo: maybe just change the flavor name, not the resources field
09:23 <zhenguo> liusheng: hah, yes
09:23 <zhenguo> maybe we don't even expose resources/resource traits to users
09:24 <liusheng> zhenguo: we can do it later
09:24 <liusheng> zhenguo: yes
09:24 <liusheng> zhenguo: but not sure if we have special requirements
09:24 <zhenguo> liusheng: no requirements at all
09:25 <liusheng> zhenguo: hah
09:31 *** wanghao_ has quit IRC
<openstackgerrit> liusheng proposed openstack/mogan master: WIP: Refactor the scheduler to use placement service
<openstackgerrit> Merged openstack/mogan master: Using assertFalse/True(A) instead of assertEqual(False/True, A)
10:05 *** liujiong has quit IRC
10:10 <liusheng> zhenguo: does taskflow persist the tasks?
10:26 <liusheng> zhenguo: sorry, ignore me, my mistake :D
<openstackgerrit> liusheng proposed openstack/mogan master: WIP: Refactor the scheduler to use placement service
<openstackgerrit> liusheng proposed openstack/mogan master: WIP: Refactor the scheduler to use placement service
<openstackgerrit> liusheng proposed openstack/mogan master: WIP: Refactor the scheduler to use placement service
12:02 *** litao__ has quit IRC
19:30 *** ChanServ has quit IRC
19:35 *** ChanServ has joined #openstack-mogan
19:35 *** sets mode: +o ChanServ

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!