Friday, 2017-06-09

00:57 *** wanghao has joined #openstack-mogan
01:08 *** wanghao has quit IRC
01:09 *** wanghao has joined #openstack-mogan
01:14 <wanghao> zhenguo: ping
01:14 <wanghao> I sent the tasks update email to you
01:14 <wanghao> pls have a look
01:15 *** wanghao_ has joined #openstack-mogan
01:18 *** wanghao has quit IRC
01:25 *** litao__ has joined #openstack-mogan
01:30 <zhenguo> wanghao: thanks for the proposal, how about also including 'no updates' tasks?
<zhenguo> hi all, I changed to use the PATCH method for flavor update and got rid of the extraspecs endpoint
02:10 <zhenguo> liusheng: how about renaming server.extra to server.metadata
02:14 <liusheng> zhenguo: yes, extra looks weird, and the extra field is now in the servers table, but many of the flavor's fields like flavor extra are in separate tables, what are your thoughts?
02:15 <zhenguo> liusheng: yes, I think we should refactor it
02:15 <liusheng> zhenguo: which way do you prefer?
02:15 <zhenguo> liusheng: separate the extra table out and change to use metadata in the api object :D
02:18 <liusheng> zhenguo: too many tables to store the properties of an object may make the combination logic and handling process more complex, and may easily cause issues like db deadlock
02:19 <liusheng> zhenguo: just a personal concern :)
02:21 <zhenguo> liusheng: not sure, but it seems nova split all such tables out
02:23 <zhenguo> liusheng: but if we just store data in the table maybe it's not needed; I think it's beneficial for indexing, right?
02:24 <liusheng> zhenguo: maybe it is better to only separate the relatively independent parts like server_fault and server_nics
02:25 <liusheng> zhenguo: not sure if that is a common use case
02:25 <zhenguo> liusheng: why are those independent? what's the difference between them and metadata?
02:26 <liusheng> zhenguo: I just found that the flavor model has been separated into many tables :D
02:27 <zhenguo> liusheng: hah, yes, I don't see the difference between server_nics for servers and nics for flavors :D
02:27 <liusheng> zhenguo: the fault and nic themselves include many fields and look like independent objects
02:29 <liusheng> zhenguo: like flavor_cpu, flavor_memory, flavor_disks, flavor_project, they are all in separate tables
02:30 <zhenguo> liusheng: yes, as I think they are just like flavor extra specs, but not sure
02:33 <liusheng> zhenguo: not very sure about the advantages and disadvantages, but I personally think it's better to keep things simple to avoid potential issues, since we often meet problems like db deadlock in production environments
02:34 <zhenguo> liusheng: you mean just save a complex dict into one column?
02:36 <zhangyang> zhenguo: i noticed we now have an availability zone property in the compute node object. if we manage it via aggregates, shall we change it in the compute node table when we add a node to an aggregate, or shall we just get the availability zone info from the aggregate table?
02:37 <liusheng> zhenguo: not only about this, but metadata looks better in a separate table :D
02:39 <zhenguo> zhangyang: not quite sure, which do you prefer?
02:39 <zhenguo> zhangyang: or maybe just get rid of it from compute node, so we don't need the backend driver to provide such a property
02:40 <zhenguo> liusheng: really not quite sure about this
02:40 <zhenguo> liusheng: but the flavor things now seem really weird
02:41 <liusheng> zhenguo: haha
02:46 <liusheng> zhenguo: does the ubuntu image in our env have a default password?
02:46 <zhenguo> liusheng: no, you can add a keypair
02:46 <liusheng> zhenguo: lol
02:47 <zhenguo> liusheng: I already made it possible to connect to the private network through that mogan server
02:49 <liusheng> zhenguo: cool
02:52 <wanghao_> zhenguo: sure
02:52 <zhenguo> wanghao_: thanks!
02:52 <wanghao_> zhenguo: will send the email ASAP
02:52 <zhenguo> wanghao_: ok
02:55 <zhangyang> zhenguo: we get availability zone from the ironic driver? I thought we were just using the default availability zone...
02:55 <zhenguo> zhangyang: hah, yes, we indeed get the az from the driver
02:56 <zhenguo> zhangyang: I need to be away for a few mins, ttyl
02:56 * zhenguo brb
<openstackgerrit> Merged openstack/mogan master: [DOC] Add sample config and policy files to mogan docs
06:27 *** wanghao_ has quit IRC
06:27 *** wanghao has joined #openstack-mogan
06:29 *** wanghao has quit IRC
06:29 *** wanghao has joined #openstack-mogan
06:39 *** wanghao_ has joined #openstack-mogan
<zhenguo> liusheng: maybe we just need a flavor with a definition like this
06:42 *** wanghao has quit IRC
06:44 <zhenguo> liusheng: add a resources property including the node resource class to match
06:47 <liusheng> zhenguo: for now, it is the name of the flavor?
06:47 <zhenguo> liusheng: yes
06:47 <zhenguo> liusheng: a new resources can include more than one resource class
06:48 <zhenguo> liusheng: and maybe just get rid of the current ram, cpu, disks, nics properties, only leave a description like before, lol
06:48 <liusheng> zhenguo: hah
06:48 <zhenguo> liusheng: so a simple flavor with name, description, resources, extra_specs
06:50 <liusheng> zhenguo: it seems the ram, cpu, disks are intuitive for end users, especially those using horizon
06:51 <zhenguo> liusheng: we can add that information to the description
06:51 <zhenguo> liusheng: like the hardware specs of a server
06:51 <zhenguo> liusheng: there's really no need to add so many tables....
06:52 <zhenguo> sorry to refactor this over and over again
06:52 <zhenguo> and I don't want to take on the refactor work again, lol
06:53 <zhenguo> liusheng: oh, it seems valence needs such things
06:54 <liusheng> zhenguo: if we put them into the description, users may need to filter query flavors that meet their required cpus, ram, disks
06:54 <liusheng> zhenguo: ok, maybe just leave it as is for now; if we have different requirements, we can refactor it in the future
06:55 <zhenguo> liusheng: as I understand, users just want to see the details before claiming a server
06:55 <zhenguo> liusheng: but I don't really like it
06:55 <liusheng> zhenguo: yes
06:55 <zhenguo> liusheng: the current implementation is too complex for just a flavor
06:56 <zhenguo> liusheng: but we can make the resources field more nimble
06:57 <zhenguo> liusheng: for valence we can define a flavor.resources = {rsd: true, ...}
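The simplified flavor zhenguo is describing (just name, description, resources, extra_specs, with resources mapping resource classes to match) might look like the sketch below. The field values and the CUSTOM_* class name are illustrative assumptions, not Mogan's actual schema.

```python
# A sketch of the simplified flavor discussed above: only name, description,
# resources and extra_specs. The values and resource-class name are
# hypothetical, for illustration only.
baremetal_gold = {
    "name": "baremetal-gold",
    # hardware specs live in the description as plain text, not as fields
    "description": "32 CPUs, 128GB RAM, 2x1TB SSD",
    # resources: node resource class(es) this flavor should match; it could
    # include more than one class, or backend-specific flags for valence
    "resources": {"CUSTOM_BAREMETAL_GOLD": 1},
    "extra_specs": {},
}
print(sorted(baremetal_gold.keys()))
```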
06:59 <liusheng> zhenguo: how about not using resource class to match nodes when scheduling, like nova does?
07:00 <liusheng> zhenguo: just using it like Nova's flavor, comparing the cpus, ram, disks, and other properties in the request with the node's
07:00 <zhenguo> liusheng: check that they are all equal to the node's properties
07:01 <liusheng> zhenguo: yes
07:01 <zhenguo> liusheng: if so, why not check the resource class
07:01 <liusheng> zhenguo: users won't care about the resource class
07:02 <liusheng> zhenguo: the scheduler will choose a node for the create request according to the required size
07:03 *** liujiong has joined #openstack-mogan
07:03 <zhenguo> liusheng: baremetal is not like vms
07:04 <zhenguo> liusheng: a node is a resource provider, and users can see what resources it can provide, then just choosing one flavor is ok
07:05 <zhenguo> liusheng: checking every detailed property seems a bit useless
07:06 <zhenguo> liusheng: just like that ironic spec said
07:07 <liusheng> zhenguo: for baremetal, we can imagine there is a pool of nodes with different capacities; the user chooses a flavor with the required capacity, and the scheduler selects a proper node
07:07 <liusheng> zhenguo: users don't need to care about resource class
07:07 <liusheng> zhenguo: and we don't need to set the resource_class/node_type when adding new nodes
07:08 <zhenguo> liusheng: hah, but for hardware, you can't just check the quantities, the type is also important
07:09 <zhenguo> liusheng: how can you filter every hardware spec, it's not easy to control
07:10 <liusheng> zhenguo: just similar to how Nova does it now. lol
07:10 <zhenguo> liusheng: nova only cares about quantity
07:11 <zhenguo> liusheng: and the current community effort is also to use a resource class and a single flavor matched with it
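The two scheduling approaches being compared can be contrasted in a few lines. This is only a sketch: the node and request shapes, the property names, and the CUSTOM_* class names are assumptions for illustration, not Mogan's or Nova's actual data model.

```python
# Contrast of the two node-matching strategies discussed: comparing every
# hardware quantity (Nova-flavor style) vs. matching a single resource class.
# Node shapes and class names are hypothetical.
nodes = [
    {"name": "node-1", "resource_class": "CUSTOM_BAREMETAL_GOLD",
     "cpus": 32, "ram_mb": 131072, "disk_gb": 2000},
    {"name": "node-2", "resource_class": "CUSTOM_BAREMETAL_SILVER",
     "cpus": 16, "ram_mb": 65536, "disk_gb": 1000},
]

def match_by_properties(nodes, cpus, ram_mb, disk_gb):
    # Check each requested quantity for equality with the node's properties;
    # every property the flavor carries must be compared.
    return [n["name"] for n in nodes
            if n["cpus"] == cpus and n["ram_mb"] == ram_mb
            and n["disk_gb"] == disk_gb]

def match_by_resource_class(nodes, resource_class):
    # Operators pre-group nodes by resource class; a single check suffices,
    # and hardware type differences are captured by the class itself.
    return [n["name"] for n in nodes if n["resource_class"] == resource_class]

print(match_by_properties(nodes, 32, 131072, 2000))
print(match_by_resource_class(nodes, "CUSTOM_BAREMETAL_GOLD"))
```

Both calls select node-1 here; the difference is that the resource-class check stays a one-line match no matter how many hardware details distinguish the node types.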
07:11 <liusheng> zhenguo: maybe flags in the flavor's metadata to indicate the hardware's type?
07:11 <zhenguo> liusheng: so many types will make it more complex
07:14 <liusheng> zhenguo: the placement service is designed to accept and handle a request with filters which include the user's required capacity
07:15 <zhenguo> liusheng: placement provides inventory management, we can define which things to store in it and the way we get them
07:15 <liusheng> zhenguo: I am not sure what the api would look like if ironic wants to add node resources into placement in the future
07:15 <zhenguo> liusheng: ironic resources are already saved to placement now
07:16 <liusheng> zhenguo: hah, does ironic only provide filter support for "resource class" in placement?
07:16 <zhenguo> liusheng: but it's not ironic that calls the placement api to save the resources
07:16 <liusheng> zhenguo: or support query by filtering cpus, ram, disks?
07:17 <zhenguo> liusheng: no, nova fetches ironic resources then saves them to placement
07:17 <zhenguo> liusheng: nova's flavor must provide those things and it doesn't include a property like resources, so for now, it only filters like before
07:18 <liusheng> zhenguo: got it
07:19 <zhenguo> liusheng: Placement is a separate service, independent of Nova. It tracks Ironic nodes as individual resources, not as a "pretend" VM. The Nova integration for selecting an Ironic node as a resource is still being developed, as we need to update our view of the mess that is "flavors", but the goal is to have a single flavor for each Ironic machine type, rather than the current state of flavors pretending that an Ironic
07:19 <zhenguo> node is a VM with certain RAM/CPU/disk quantities.
07:19 <liusheng> zhenguo: yes
<zhenguo> liusheng: copied from here
07:23 <liusheng> zhenguo: maybe the ram, cpu, disk of Mogan's flavor should be a range rather than an exact number, wdyt
07:24 <zhenguo> liusheng: I prefer to keep it simple, lol
07:25 <liusheng> zhenguo: e.g. we provide gold, silver, copper servers, but users may want to know the capacity of each class of servers
07:25 <zhenguo> liusheng: we can have flavors baremetal-gold, baremetal-silver; operators have to divide the backend nodes into such groups by resource_class, and of course every property can be a range
07:25 <zhenguo> liusheng: that can be controlled by operators through how they define a resource class
07:26 <liusheng> zhenguo: that was what I wanted to say :D
07:26 <zhenguo> liusheng: hah, so operators need to do much more work
07:26 <liusheng> zhenguo: so the ram/cpu/disk is more like description info of the flavor
07:26 <liusheng> zhenguo: hah
07:27 <zhenguo> liusheng: yes, they will be included in the flavor description
07:27 <liusheng> zhenguo: but putting them all into the description may make it hard to filter query by one of them
07:28 <liusheng> zhenguo: not sure if we have that requirement
07:28 <zhenguo> liusheng: seems we don't need to filter them, just a description to show what the resource is
07:29 <liusheng> zhenguo: users may need to filter out which flavors meet their requirements
07:30 <zhenguo> liusheng: we need to ask users to change their mindset, hah, baremetals are not like vms
07:31 <liusheng> zhenguo: hah
07:32 <zhenguo> liusheng: maybe we can just turn to placement, I think the only thing blocking us is the aggregates
07:32 <liusheng> zhenguo: hah, sure
07:32 <liusheng> zhenguo: we can try
07:32 <zhenguo> liusheng: if we use placement for resource tracking, we don't save the nodes info
07:33 <zhenguo> liusheng: but when users list nodes, I think we can call the placement api to show it
07:33 <zhenguo> liusheng: and when they create a node aggregate, we can call placement to do that
07:33 <zhenguo> liusheng: we can pass aggregates info to placement to choose a satisfying node
07:34 <liusheng> zhenguo: that's what I want to ask, can the placement api be exposed to end users?
07:34 <liusheng> zhenguo: or is it only used by the scheduler
07:34 <zhenguo> liusheng: mogan will expose the api and then call the placement api, we don't need users to call placement directly
07:34 <liusheng> zhenguo: lol
07:35 <zhenguo> liusheng: I also added placement as an alternative on the node aggregates spec.
07:36 <liusheng> zhenguo: just confirmed with zhenyu, the placement api can be exposed to end users
07:37 <zhenguo> liusheng: hah
07:37 <zhenguo> liusheng: will be away for a while, ttyl, you can ask zhenyu to help confirm whether node aggregates can leverage placement aggregates
07:37 * zhenguo brb
07:38 <liusheng> zhenguo: ok
07:38 <shaohe_feng> zhenguo: OK. it seems writing a spec is more difficult for a Chinese speaker.
07:39 *** wanghao_ has quit IRC
07:39 *** wanghao has joined #openstack-mogan
07:43 *** wanghao_ has joined #openstack-mogan
07:47 *** wanghao has quit IRC
07:52 <zhenguo> shaohe_feng: hah
07:53 <zhenguo> wanghao: thanks for sending out the weekly report
07:55 <zhenguo> liusheng, shaohe_feng, wanghao: please have a look at the flavor PATCH method patch when you get time, thanks!
08:00 <shaohe_feng> zhenguo: I'm looking at it. :)
08:00 <zhenguo> shaohe_feng: big thanks
08:06 <shaohe_feng> zhenguo: JSON Patch became a standard in 2013.
08:07 <zhenguo> shaohe_feng: yeah, I added that rfc to the api docs
08:07 <shaohe_feng> zhenguo: yes, I see the link from your patch. :)
08:08 <zhenguo> shaohe_feng: hah
08:08 <shaohe_feng> zhenguo: So I know why many old projects do not use PATCH.
08:09 <zhenguo> shaohe_feng: for flavor, I think we don't need a separate extraspecs endpoint with POST/PUT/DELETE, just using PATCH to update the flavor is enough
08:33 <shaohe_feng> zhenguo: yes. one PATCH is enough.
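The JSON Patch standard being referenced is RFC 6902: a PATCH body is a list of operations (add/replace/remove/...) applied against document paths, which is why one PATCH endpoint can replace the separate extraspecs POST/PUT/DELETE endpoints. Below is a minimal sketch of how such a patch acts on a flavor; real services would use a library such as python-jsonpatch, and the flavor fields and paths here are illustrative, not Mogan's actual API.

```python
# Minimal RFC 6902-style patch application, for illustration only; it handles
# just add/replace/remove on object members. Flavor fields are hypothetical.
def apply_patch(doc, patch):
    for op in patch:
        parts = op["path"].lstrip("/").split("/")
        target = doc
        for key in parts[:-1]:          # walk to the parent container
            target = target[key]
        leaf = parts[-1]
        if op["op"] in ("add", "replace"):
            target[leaf] = op["value"]
        elif op["op"] == "remove":
            del target[leaf]
    return doc

flavor = {"name": "baremetal-gold",
          "description": "gold nodes",
          "extra_specs": {"hw:cpu_policy": "dedicated"}}

# One PATCH request body can touch top-level fields and extra specs alike,
# so no separate extraspecs endpoint is needed.
patch = [
    {"op": "replace", "path": "/description", "value": "gold baremetal nodes"},
    {"op": "add", "path": "/extra_specs/capabilities", "value": "raid"},
    {"op": "remove", "path": "/extra_specs/hw:cpu_policy"},
]
apply_patch(flavor, patch)
print(flavor)
```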
08:50 <zhenguo> liusheng: hey, what's the benefit of splitting metadata into a separate table?
08:55 <liusheng> zhenguo: maybe just because metadata can include many items, and in a separate table every item can be one key-value pair; it is efficient to index if we want to query something filtered by metadata
08:55 <zhenguo> liusheng: it seems if we use PATCH on the server to manage its metadata, there's no need to separate it
08:56 <zhenguo> liusheng: can we support metadata filters?
08:56 <liusheng> zhenguo: it seems Nova can support that
08:56 <zhenguo> liusheng: really?
08:56 <zhenguo> liusheng: like tags?
08:56 <liusheng> zhenguo: let me confirm
08:56 <liusheng> zhenguo: :D
08:57 <liusheng> zhenguo: nova doesn't support that
08:58 <zhenguo> liusheng: hah, before we confirm why a separate table is needed, I prefer to keep it with the server and just change the api object field to metadata
08:59 <liusheng> zhenguo: sure, I am ok with that
08:59 <zhenguo> liusheng: ok, so many unsure things...
08:59 <liusheng> zhenguo: lol
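The trade-off debated above, keeping metadata as one column on the servers table versus splitting it into a key-value table, can be sketched with in-memory SQLite. The schemas below are hypothetical illustrations, not Mogan's actual models: the point is that only the key-value layout lets a plain indexed SQL query filter servers by a metadata item.

```python
# Sketch of the two metadata storage options discussed. Schema and column
# names are hypothetical, for illustration only.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Option 1: metadata as one JSON text column on the servers table. Simple to
# read/write as a whole (e.g. via a PATCH on the server), but filtering by a
# single item requires decoding the JSON application-side.
cur.execute("CREATE TABLE servers (id INTEGER PRIMARY KEY, metadata TEXT)")
cur.execute("INSERT INTO servers VALUES (1, ?)",
            (json.dumps({"role": "db", "rack": "r1"}),))

# Option 2: a separate key-value table. Each item is a row, so an ordinary
# index makes "filter servers by metadata" an efficient equality query.
cur.execute("CREATE TABLE server_metadata "
            "(server_id INTEGER, key TEXT, value TEXT)")
cur.execute("CREATE INDEX idx_meta ON server_metadata (key, value)")
cur.executemany("INSERT INTO server_metadata VALUES (?, ?, ?)",
                [(1, "role", "db"), (1, "rack", "r1"), (2, "role", "web")])

cur.execute("SELECT server_id FROM server_metadata "
            "WHERE key = 'role' AND value = 'db'")
matches = [row[0] for row in cur.fetchall()]
print(matches)
```

Since the channel concluded that metadata filtering isn't needed yet (Nova doesn't support it either), option 1 with a PATCH-based update was the simpler choice.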
09:45 *** wanghao_ has quit IRC
09:52 *** liujiong has quit IRC
10:01 * zhenguo away
14:29 *** liusheng has quit IRC
14:30 *** liusheng has joined #openstack-mogan
14:57 -openstackstatus- NOTICE: The Gerrit service on is being restarted now to clear an issue arising from an unanticipated SSH API connection flood
15:46 *** litao__ has quit IRC
17:50 *** liusheng has quit IRC
17:51 *** liusheng has joined #openstack-mogan

Generated by 2.15.3 by Marius Gedminas - find it at!