Friday, 2017-08-25

*** wanghao has joined #openstack-mogan00:28
wanghaozhenguo: ping01:04
wanghaozhenguo: I saw your review on managing existing servers,01:04
wanghao  ports [01:04
wanghao    {01:04
wanghao      "uuid": xx,01:04
wanghao      "address": xx,01:04
wanghao      "port_id": xx  # which is extracted from extra or internal_info in ironic driver.01:04
wanghao    }01:04
wanghao01:04
wanghaozhenguo: the port_id there means neutron's port id, right?01:04
wanghaozhenguo: and I also think we should add mac address too.01:05
wanghaozhenguo: sorry, mac address has been here.01:05
*** wanghao_ has joined #openstack-mogan01:06
*** wanghao has quit IRC01:10
*** litao__ has joined #openstack-mogan01:20
zhenguowanghao: yes, address is mac and port_id is neutron port01:36
zhenguowanghao: as different drivers may save the neutron port in different ways, we need to make the API object more generic01:37
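A minimal sketch of what that generic mapping could look like inside the ironic driver. The helper name and the fallback order between extra and internal_info are illustrative assumptions, not the actual Mogan code:

    def _normalize_manageable_port(ironic_port):
        """Map an ironic port into the driver-agnostic dict used by the manage API."""
        extra = ironic_port.extra or {}
        internal_info = ironic_port.internal_info or {}
        return {
            'uuid': ironic_port.uuid,
            'address': ironic_port.address,  # MAC address
            # the neutron port id may be stored in either field depending on
            # how the port was bound, so try both
            'neutron_port_id': (extra.get('vif_port_id') or
                                internal_info.get('tenant_vif_port_id')),
        }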
*** openstackgerrit has joined #openstack-mogan01:47
openstackgerritMerged openstack/python-moganclient master: Add commands for aggregate node actions  https://review.openstack.org/49237001:47
openstackgerritZhenguo Niu proposed openstack/mogan-specs master: Add server group support  https://review.openstack.org/48954101:56
openstackgerritTao Li proposed openstack/mogan master: Manage existing BMs: Part-2  https://review.openstack.org/48154401:57
zhenguoliusheng: please remember to move ironic_states import out from engine01:57
liushengzhenguo: ok, will do it, I am testing the server_group + scheduler_hints :D01:58
zhenguoliusheng: ok01:58
liushengzhenguo: seems I made a mistake with the aggregate node commands: "baremetalcompute aggregate list node" will conflict with "baremetalcompute aggregate list", sorry :(02:02
liushengzhenguo: maybe "baremetalcompute aggregate node <ACTION>"02:03
zhenguoliusheng: you can add a follow-up patch to address that02:03
liushengzhenguo: ok02:03
zhenguoliusheng: seems nova doesn't have such commands02:05
zhenguoliusheng: should we add nodes to aggregate object02:05
liushengzhenguo: hmm, maybe it is similar to Xinran's problem02:06
zhenguoliusheng: yes02:06
zhenguoliusheng: do we have other similar problems02:06
liushengzhenguo: maybe the aggregate list node command can be integrated into the aggregate object, but add/remove node for an aggregate cannot02:07
zhenguoliusheng: yes02:07
liushengzhenguo: so that may need to be done on the API side first02:08
zhenguoliusheng: it's ok to keep add/remove aggregate nodes / flavor access in a separate controller02:08
liushengzhenguo: yes02:08
zhenguoliusheng: ok, I will try later02:09
liushengzhenguo: how about just changing the commands first for now?02:09
zhenguoliusheng: you mean just get rid of that command02:09
zhenguoliusheng: aggregate list node02:09
liushengzhenguo: no, I mean switch the location of "node" and "ACTION" in the 3 node aggregate commands02:09
liushengbaremetalcompute_aggregate_node_add = moganclient.osc.v1.aggregate:AggregateAddNode02:10
zhenguoliusheng: seems we don't need to change that02:10
liusheng    baremetalcompute_aggregate_node_list = moganclient.osc.v1.aggregate:AggregateListNode02:10
liusheng    baremetalcompute_aggregate_node_remove = moganclient.osc.v1.aggregate:AggregateRemoveNode02:10
zhenguoliusheng: just follow nova's way02:10
zhenguoliusheng: ACTION node02:10
liushengzhenguo: ok, let me check how Nova does02:10
liushengzhenguo: en, yes02:11
liushengzhenguo: let's keep it02:11
*** wanghao_ has quit IRC02:11
openstackgerritliusheng proposed openstack/mogan master: Add support for scheduler_hints  https://review.openstack.org/46353402:24
openstackgerritliusheng proposed openstack/mogan master: WIP: Use server group in scheduler  https://review.openstack.org/49615102:24
openstackgerritliusheng proposed openstack/mogan master: Use server group in scheduler  https://review.openstack.org/49615102:24
openstackgerritZhenguo Niu proposed openstack/mogan-specs master: Add server group support  https://review.openstack.org/48954102:26
openstackgerritliusheng proposed openstack/mogan master: Use server group in scheduler  https://review.openstack.org/49615102:29
openstackgerritTao Li proposed openstack/mogan master: Rollback to the original status when server powering action failed  https://review.openstack.org/47686202:38
*** wanghao has joined #openstack-mogan02:53
litao__zhenguo: I found the managing servers spec is updated, we needn't pass the port id in the API, so we needn't check the network?03:01
*** wanghao has quit IRC03:08
zhenguolitao__: we don't pass all information through the API, but you can get the manageable server object, which includes everything you want03:08
*** wanghao has joined #openstack-mogan03:10
wanghaozhenguo: sure03:10
wanghaozhenguo: do you think neutron_port_id is better?03:10
zhenguolitao__, wanghao: maybe we just need to pass name, description, metadata03:10
zhenguowanghao: yes03:10
litao__zhenguo, wanghao : so I should call the ironic api to get the network information?03:11
zhenguolitao__: no, you don't need to call ironic03:11
litao__zhenguo: If you don't pass network, how can i get it?03:12
zhenguolitao__: you need to fetch the manageable server object via wanghao's API03:12
zhenguolitao__: the manage API just verifies information like port_id, then creates a server db record, that's all03:12
zhenguoalthough it's not that smart for now, it works at least03:13
litao__zhenguo: you mean I should call wanghao's get API in the managing server API?03:13
zhenguolitao__: oh, not API, but internal API03:14
wanghaozhenguo: reuse my process to get manageable nodes information.03:14
zhenguoyes03:14
litao__zhenguo: I get03:14
litao__zhenguo: also pass the availability_zone03:17
zhenguolitao__: our availability_zone is used for scheduling, not sure how to handle that03:17
zhenguolitao__: but seems cinder can support that, and get manageable volumes will include availability_zone information03:18
zhenguoliusheng: seems manageable servers will also go to placement, right?03:19
zhenguoliusheng: how can we handle that03:19
litao__zhenguo: Yes, we can check the resource of the bare metal node through placement03:19
zhenguolitao__: if such nodes can go to placement, they may be scheduled :(03:20
liushengzhenguo: yes, it should be managed by placement03:20
zhenguoliusheng: so it can be scheduled?03:21
liushengzhenguo: that seems to be a problem03:22
liushengzhenguo: :(03:22
zhenguoliusheng: need to refactor the placement part before Pike03:23
zhenguoliusheng: maybe nodes that are not available should go to placement with an allocation03:24
zhenguoliusheng: but not sure who allocated it03:24
*** wanghao has quit IRC03:24
liushengzhenguo: not sure the "reserved" field in inventory can be used in this situation03:24
zhenguoliusheng: we can try03:25
*** wanghao has joined #openstack-mogan03:25
litao__I think we can treat an active bare metal server without an instance_uuid as an unallocated resource03:26
zhenguolitao__: yes, but we need to find a way to set that in placement03:27
zhenguolitao__: currently only allocations there can achieve this03:27
zhenguolitao__: maybe inventories.reserved can also help with this03:27
zhenguoliusheng: not only for active nodes, all unavailable states should be set with reserved=1 there03:28
zhenguoliusheng: and remove the reservation when it turns available03:28
liushengzhenguo: not sure if reserved is designed for that03:29
zhenguoliusheng: reserved is designed for reserving cpu and mem on vm hosts03:29
liushengzhenguo: oh, yes, I remember it03:29
zhenguoliusheng: but for us, we only got 1 resource_class there, so we can reserve it as we want03:30
liushengzhenguo: so it can also be used in our situation :)03:30
zhenguoliusheng: yes03:30
liushengzhenguo: sounds a good idea03:30
zhenguoliusheng: hah, that also addresses the concerns we talked about03:30
liushengzhenguo: hah, seems yes03:31
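A rough sketch of the reservation idea discussed above, assuming the plain Placement REST API is called through a keystoneauth session; the function and the single-resource-class assumption are illustrative, not Mogan's actual report client code:

    def set_node_reserved(session, rp_uuid, resource_class, reserve):
        """Toggle whether a baremetal resource provider is consumable."""
        url = '/resource_providers/%s/inventories/%s' % (rp_uuid, resource_class)
        filters = {'service_type': 'placement'}
        # fetch the current inventory to get total and the provider generation
        inv = session.get(url, endpoint_filter=filters).json()
        payload = {
            'resource_provider_generation': inv['resource_provider_generation'],
            'total': inv['total'],
            # reserved == total hides the node from the scheduler,
            # reserved == 0 makes it consumable again
            'reserved': inv['total'] if reserve else 0,
        }
        return session.put(url, json=payload, endpoint_filter=filters)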
zhenguolitao__, liusheng, wanghao: so, seems manageable servers can also return availability zone information if the node is added to an aggregate03:32
zhenguolitao__: but seems we don't need to specify az in API03:32
liushengzhenguo: you mean the manageable server already with az info ?03:32
zhenguolitao__: if the manageable server contains az, you can add it to server db record03:32
zhenguoliusheng: as the node can be added in aggregates with az03:33
zhenguoliusheng: we just make it not able to be scheduled03:33
zhenguoliusheng: but our node list, and aggregates node add can also do that03:33
liushengzhenguo: oh, will that conflict with your aggregate metadata check?03:33
zhenguoliusheng: seems not03:34
liushengzhenguo: cool03:34
zhenguoliusheng: that check only cares about nodes in different aggregates, regardless of whether it's manageable or deployed by mogan03:34
liushengzhenguo: oh, yes,03:35
zhenguoliusheng, litao__: but we may need a follow-up patch for managing servers03:35
zhenguolitao__, wanghao: for now, we don't need to handle az03:35
litao__zhenguo: So you mean we shouldn't pass az in managing server API?03:36
zhenguolitao__: yes03:36
zhenguolitao__: we designed az to be passed for scheduling,03:36
litao__zhenguo: just name, description, metadata and node_uuid can be passed03:37
litao__zhenguo: ?03:37
zhenguolitao__: I think so03:37
openstackgerritliusheng proposed openstack/mogan master: Get rid of Ironic state in engine manager layer  https://review.openstack.org/49772403:37
litao__zhenguo: OK03:39
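For reference, an illustrative manage-server request body limited to the fields agreed on above; the exact field names and API path are whatever the final spec defines:

    manage_request = {
        "name": "managed-server-1",
        "description": "existing BM node taken over by Mogan",
        "metadata": {"owner": "ops"},
        "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d",  # example value
    }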
zhenguolitao__: will try to land wanghao's patch by today03:39
litao__zhenguo: great, so I can rebase it03:40
zhenguolitao__: yes, hah03:40
openstackgerritliusheng proposed openstack/mogan master: Add affinity_zone field for server object  https://review.openstack.org/49572503:54
*** wanghao has quit IRC04:04
*** wanghao has joined #openstack-mogan04:08
*** wanghao has quit IRC04:31
*** wanghao has joined #openstack-mogan04:32
openstackgerritwanghao proposed openstack/mogan master: Manage existing BMs: Part-1  https://review.openstack.org/47966005:52
openstackgerritwanghao proposed openstack/mogan master: Add some missing logs in Mogan  https://review.openstack.org/49291706:04
litao__wanghao: If I use your internal method, should you add a get_one method?06:09
openstackgerritliusheng proposed openstack/mogan master: Add support for scheduler_hints  https://review.openstack.org/46353406:10
openstackgerritliusheng proposed openstack/mogan master: Use server group in scheduler  https://review.openstack.org/49615106:10
openstackgerritliusheng proposed openstack/mogan master: Get rid of Ironic state in engine manager layer  https://review.openstack.org/49772406:12
wanghaolitao__: I think there's no need, you just need to choose what you want from the list.06:13
wanghaolitao__: no need to expose the get_one to users.06:13
litao__wanghao: The manage API passed only the node_uuid, so you mean I get all the node list and get the one by the node_uuid?06:18
wanghaolitao__: yes06:19
litao__wanghao: Maybe a little inefficient06:20
litao__wanghao: But it works06:20
wanghaolitao__: since get_one is of little use for getting manageable servers; if I support get_one, it's no different from you calling the ironic api directly.06:20
litao__wanghao: Ok06:21
wanghaolitao__: besides, considering we may support managing multiple servers at once, I think it's better you just use the get_all api.06:21
litao__wanghao: yeah06:22
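A small sketch of the internal flow agreed on here: fetch the full manageable list once and pick the requested node out of it. The method name follows the get_manageable_servers call mentioned in this log, but the exact signature and error handling are assumptions:

    def _get_manageable_node(engine_api, context, node_uuid):
        # one call fetches every manageable node, then pick the requested one
        nodes = engine_api.get_manageable_servers(context)
        for node in nodes:
            if node['uuid'] == node_uuid:
                return node
        # placeholder error; the real code would raise Mogan's own exception
        raise ValueError('node %s is not manageable' % node_uuid)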
litao__zhenguo: If we don't pass flavor, is it necessary to check resource class?06:26
zhenguolitao__: seems not06:27
litao__zhenguo: ok06:27
zhenguolitao__: I'm considering whether we need to expose resource_class through server06:28
zhenguoliusheng, wanghao: do you think we should add resource_class to server object to expose it to users?06:28
zhenguooh, seems we don't show it with flavor to common users06:29
liushengzhenguo: hah, yes06:29
wanghaozhenguo: I think there's no harm in exposing it to users.06:30
wanghaozhenguo: it could be a tag-like thing to servers.06:31
zhenguoseems yes, but maybe we can think more, and do it after managing server landed06:31
liushengflavor is also something like a tag for end users06:32
wanghaozhenguo: yeah06:32
litao__Maybe we can add tags for server later06:32
litao__rather than flavor06:33
zhenguosure, tags should be supported06:33
litao__if the flavor is not used currently, we don't expose it to users firstly06:33
zhenguolitao__: you can check whether it's allowed to be null in server object06:34
liushengusers may not have an accurate concept of the size of a server, but a fuzzy concept of the class of server, e.g. gold, silver06:35
litao__zhenguo: yes, it is nullable in server object06:36
* litao__ zhenguo: the get_manageable_servers06:36
* litao__ zhenguo: The flavor_uuid06:37
zhenguolitao__: if so, it's ok06:37
wanghaotags are an attribute on the mogan side, but resource_class comes from the underlying layer.06:38
zhenguoliusheng: maybe we should try to add cpu/mem/disks information with the server, lol06:38
liushengzhenguo: ... we discussed avoiding exposing that info. :P06:39
zhenguoliusheng: we don't have a way to expose that06:39
liushengzhenguo: you mean add that in the server rather than the flavor?06:40
zhenguoonly a flavor uuid with the server seems not useful to users06:40
wanghaozhenguo: I suggest we find a way to expose those information to users.06:40
zhenguonot sure, maybe add some human readable information, e.g. flavor description?06:40
zhenguowanghao: yes, seems users care about that06:41
wanghaozhenguo: or hardware description06:41
zhenguowanghao: yes06:41
liushengzhenguo: I think maybe we can add another unchangeable field in flavor, and keep description for other usage06:41
wanghaozhenguo: yes, since Mogan focuses on "baremetal", we can add some hardware information in server.06:42
wanghaoliusheng: maybe not in flavor06:42
liushengwanghao: so all things in server ?06:42
wanghaoliusheng: no, a new field in server06:43
zhenguowhen you get a server, I think that's the basic information users want to know06:43
wanghaoliusheng: zhenguo, like 'hardware' in server object06:43
litao__Yes, maybe we should add a new field06:43
zhenguoyes06:44
zhenguoif the driver can provide such properties, we can get them from the node associated with the server06:44
liushengwanghao, zhenguo litao__ maybe in most cases users don't want that info; how about adding it to the flavor, so when users do want it, they can query the flavor and the server and merge the info?06:46
liushengand mostly, users may access their servers via the dashboard, and a dashboard plugin can assemble info from several APIs06:47
zhenguoliusheng: not sure it's convenient, but if I'm a user, I only care about that information, lol06:47
liushengzhenguo: yes, but usually users may not directly use the Mogan API06:48
zhenguoliusheng: isn't it better if we can save an API call?06:48
zhenguoliusheng: and maybe the information can be different from the flavor06:48
liushengzhenguo: no, I mean in many situations users may not need that info, so maybe this way saves more. hah06:49
zhenguoliusheng: why users don't need that info?06:49
liushengzhenguo: users don't need to care that06:49
zhenguoliusheng: why06:49
liushengzhenguo: why users need ?06:50
zhenguoliusheng: if you buy a server, you don't care about the hardware specifications?06:50
liushengzhenguo: no, I think as we designed it, usually users don't need that accurate hardware info, and only in a few cases may users need to know it06:51
zhenguoliusheng: not sure..06:52
liushengjust personal thoughts06:53
litao__ Currently, users can't get hardware information from mogan API06:53
litao__do they?06:54
zhenguothey also can't get it from the flavor, as the flavor description may not be hardware info06:54
liushenglitao__: yes, that is useful; what I think is maybe we'd better avoid returning too much info when querying servers, but we can provide a way to query the full info according to users' real needs06:55
zhenguohah, but personally, I think hardware specifications are more important than anything else returned with the flavor, lol06:56
liushenglol06:56
litao__zhenguo: Yes, I think this is another issue, we could discuss it later06:56
zhenguohah, we can continue to discuss this later, let's try to focus on Pike release06:57
zhenguoit's Friday, let's try to land more things this week06:58
litao__Currently, whether we expose resource class doesn't matter to me, I don't care about it06:58
wanghao+1, this could be discussed more in the next meeting for Queens.06:59
liushengseems we need a doc patch for server managing06:59
wanghaozhenguo: liusheng: I have updated the managing patch and the missing-logs patch, plz have a look07:00
liushengwanghao: ok, will review07:00
zhenguoI think wanghao's patch is almost good, will test it later, please have a look, liusheng07:00
zhenguowanghao: ok07:00
zhenguoI mean the managing servers patch !07:00
zhenguolol07:00
zhenguoliusheng, wanghao^^07:00
liushengzhenguo: yes, so I will believe your test, hah07:00
zhenguohah07:00
litao__wanghao: The get_manageable_servers should expose image uuid07:01
wanghaoI did07:01
wanghaoimage_source07:01
litao__wanghao: ok07:02
wanghaothanks07:02
*** Xinran has quit IRC07:03
liushengzhenguo: btw, server group +scheduler_hints patches are ready for review: https://review.openstack.org/#/c/496151/07:04
liushengzhenguo: :)07:04
*** Xinran has joined #openstack-mogan07:04
zhenguoliusheng: cool07:05
liushengzhenguo: I have tested several cases in my env, but my devstack only has 3 nodes, and I tried to install another env but it failed. Maybe we need more nodes to test other cases.07:06
zhenguoliusheng: you can test with hardware env07:07
liushengzhenguo: hah, how many nodes available in our hardware env ?07:07
zhenguoliusheng: I don't know, lol07:07
zhenguoliusheng: but only two now; I remember there seemed to be more than 10?07:07
liushengzhenguo: seems only 207:07
zhenguoliusheng: what about the others, did they take them back?07:08
liushengzhenguo: no, seems we have 7 nodes07:09
liushengzhenguo: they need to be added07:09
zhenguoliusheng: we can add all, then do some tests in the next few days before Pike07:09
liushengzhenguo: ok07:09
zhenguoliusheng: seems moganclient still uses server update --add-extra for metadata, need to change it to use properties?07:14
liushengzhenguo: yes, I will update my patch07:15
zhenguoliusheng: ok07:15
openstackgerritwanghao proposed openstack/mogan-specs master: No need flavor and network in managing server  https://review.openstack.org/49498007:18
zhenguoliusheng: I like the is_node_consumable method here https://review.openstack.org/#/c/497724/ , please remember to consider leveraging the inventory 'reserved' field to make consumable more accurate later07:21
liushengzhenguo: hah, OK, will try07:31
zhenguoliusheng: thanks07:31
liushengzhenguo: np07:31
litao__liusheng: If so, it is not necessary to put the node into placement when managing servers, right?07:36
liushenglitao__: yes, we should prevent the scheduler from selecting the manageable server for server creation07:38
zhenguoliusheng, litao__: so, if we manage the servers, and create a mogan server, do we need to create an allocation?07:40
liushengzhenguo, litao__ for the managed server, we can also delete the server to release the node, right?07:42
zhenguoliusheng: yes07:42
liushengzhenguo: so I think we'd better create an allocation07:43
zhenguoliusheng: ok07:44
litao__liusheng: But creating an allocation currently happens in the scheduler07:45
liushenglitao__: hmm, yes, so maybe we need to add that to the manage-server process07:46
zhenguoliusheng, Xinran: on second thought, maybe we should keep flavor accesses / aggregate nodes list in the separate controller07:48
zhenguoliusheng, Xinran: for aggregate nodes, the list can be very large07:48
liushengzhenguo: seems yes07:49
zhenguoliusheng, Xinran: maybe better to just keep this list method there07:49
liushengzhenguo: that is a problem, sigh :(07:50
litao__liusheng: maybe we can use the scheduler process with a uuid parameter07:50
zhenguoXinran: if you haven't started to work on that, please just abandon the patch, sorry :(07:50
zhenguoliusheng: let's just keep that, and focus on other things, not much time left07:51
liushenglitao__: seems that is only a placement call, it is very simple, can't you add that to your process?07:52
liushengzhenguo: yes, sure07:54
litao__liusheng: of course I can07:57
litao__liusheng: I just want to keep it out of the scheduler; if that doesn't matter, I will add it07:59
liushenglitao__: maybe it is a bit more complex to call the scheduler; currently our process is unrelated to the scheduler, right?08:00
litao__liusheng: But the process has been implemented in the scheduler, adding a new one is duplicated effort.08:03
litao__liusheng: The 'put_allocations' method08:04
liushenglitao__: you don't need to implement the method again, just call mogan.scheduler.client.report.SchedulerReportClient#put_allocations, just like how we call it in report.py in the resource updating task08:06
litao__liusheng: OK08:06
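A hedged sketch of that reuse. The argument list of put_allocations is assumed here (modelled on the Nova-style report client the log refers to), so check report.py before relying on it:

    from mogan.scheduler.client import report

    def _allocate_managed_node(node, server):
        client = report.SchedulerReportClient()
        # a baremetal node exposes a single unit of one resource class
        alloc_data = {node.resource_class: 1}
        client.put_allocations(node.uuid,    # resource provider == ironic node
                               server.uuid,  # consumer == the managed server
                               alloc_data)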
zhenguowanghao: on my test, there are still some problems with the listing manageable servers patch, but after some hacks, it works, please see my comments :D08:40
zhenguoliusheng, litao__: please also help to look at it, if no other objections, we can land it by today08:41
litao__zhenguo: ok08:42
liushengzhenguo: ok08:42
* zhenguo brb08:44
zhenguoliusheng: please help to go through all the open patches, then I will ask shaohe_feng to get the good-to-go ones landed.08:58
liushengzhenguo: ok08:58
liushengzhenguo: still around ?09:14
zhenguoliusheng: yes09:14
liushengzhenguo: for your patch: https://review.openstack.org/#/c/49582609:14
liushengzhenguo: seems there is a problem, we store the network name in the server object09:14
liushengzhenguo: but the network name can be updated09:14
zhenguoliusheng: oh, not sure how nova handle that09:15
zhenguoliusheng: they also save it09:15
liushengzhenguo: maybe Nova queries it from neutron and then combines it into the server body, just a guess09:15
zhenguoliusheng: it also saves it into the db09:16
zhenguoliusheng: do you have a env to test?09:16
zhenguoliusheng: or somebody else09:16
liushengzhenguo: just asked, Nova queries the network name from info_cache, and info_cache can be synced with neutron09:17
zhenguoliusheng: oh, when does that sync happen?09:18
liushengzhenguo: when neutron updates a network, it will send a notification, and Nova will update its info_cache09:18
zhenguoliusheng: we also need to catch that notification?09:18
liushengzhenguo: that seems a bit complex :(09:19
zhenguoliusheng: yes09:19
zhenguoliusheng: info cache is also in db?09:20
liushengzhenguo: yes09:23
zhenguoliusheng: does neutron only send notifications to nova, or can we also catch them?09:24
liushengzhenguo: not sure, seems it depends on how nova consumes the notification message, there is a "requeue" parameter when consuming a notification message09:25
liushengzhenguo: seems neutron only considers Nova as the notification consumer :(09:26
zhenguoliusheng: sigh09:27
wanghaozhenguo: sure09:33
wanghaozhenguo: I will fix it tonight09:35
zhenguoliusheng: or we just use the network id?09:35
zhenguowanghao: thanks09:35
zhenguowanghao: liusheng and I will still be in the office tomorrow :D09:36
liushengzhenguo: hah09:36
liushengzhenguo: small weekend09:36
wanghaozhenguo: liusheng:  I see,  hahahahah09:36
zhenguoliusheng: but I didn't get the email09:36
wanghaoalmost forgot it.09:37
zhenguolol09:37
liushengzhenguo: now the company policy is to reduce group emails, hah09:37
zhenguoliusheng: but how can I know which is the small weekend09:38
zhenguoliusheng: I don't know ,lol09:38
liushengzhenguo: usually, last weekend of a month09:38
zhenguoliusheng: I may assume next week is the last week...09:38
liushengzhenguo: hah09:39
openstackgerritMerged openstack/mogan master: Add some missing logs in Mogan  https://review.openstack.org/49291709:39
*** wanghao has quit IRC09:40
openstackgerritMerged openstack/mogan master: Updated from global requirements  https://review.openstack.org/49703909:40
openstackgerritZhenguo Niu proposed openstack/mogan master: Leverage _detach_interface for destroying networks  https://review.openstack.org/49656909:48
zhenguoliusheng: do you have suggestions for the addresses patch?09:49
liushengzhenguo: maybe using the network id is a trade-off09:50
zhenguoliusheng: yes, the dashboard can get the network name09:51
liushengzhenguo: yes09:52
zhenguoliusheng: ok09:52
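Given the trade-off settled on above (store only the network id with the server), a client such as the dashboard plugin can resolve the display name from Neutron on demand. The wrapper function is illustrative; neutronclient's show_network call is real:

    from neutronclient.v2_0 import client as neutron_client

    def network_name_for(session, network_id):
        neutron = neutron_client.Client(session=session)
        return neutron.show_network(network_id)['network']['name']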
openstackgerritliusheng proposed openstack/python-moganclient master: Unify the commands for server updating  https://review.openstack.org/49781509:59
liushengzhenguo: just submitted a patch to change the "server update" command ^10:01
zhenguoliusheng: thanks10:01
zhenguoliusheng: I'd like to call it a day and head home, bye!10:02
*** openstackgerrit has quit IRC10:02
* zhenguo away10:03
liushengzhenguo: hah, bye10:03
*** wanghao has joined #openstack-mogan10:56
*** wanghao has quit IRC11:11
*** wanghao has joined #openstack-mogan11:17
*** liusheng has quit IRC11:41
*** ChanServ has quit IRC11:41
*** RuiChen has quit IRC11:41
*** wanghao has quit IRC11:41
*** luyao has quit IRC11:41
*** litao__ has quit IRC11:41
*** lin_yang has quit IRC11:41
*** Jeffrey4l has quit IRC11:41
*** dims has quit IRC11:41
*** zhangyang has quit IRC11:41
*** zhenguo has quit IRC11:41
*** zhuli has quit IRC11:41
*** harlowja has quit IRC11:41
*** kong has quit IRC11:41
*** Kevin_Zheng has quit IRC11:41
*** Xinran has quit IRC11:41
*** wxy has quit IRC11:41
*** shaohe_feng has quit IRC11:41
*** openstack has joined #openstack-mogan12:59
