02:00:25 #startmeeting mogan
02:00:26 Meeting started Thu Mar 16 02:00:25 2017 UTC and is due to finish in 60 minutes. The chair is zhenguo. Information about MeetBot at http://wiki.debian.org/MeetBot.
02:00:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
02:00:30 The meeting name has been set to 'mogan'
02:00:38 o/
02:02:45 zhangjl, litao, luyao, wanghao_, liusheng, zhangyang: hi
02:02:59 zhenguo: hi, I'm here
02:03:01 okay, let's roll
02:03:05 hi
02:03:13 here
02:03:27 as always, the agenda:
02:03:28 #link https://wiki.openstack.org/wiki/Meetings/Mogan#Agenda_for_next_meeting
02:03:31 o/
02:03:41 let's jump in
02:03:46 #topic announcements and reminders
02:04:01 Our topic was not selected for the Boston summit :(
02:04:25 :(
02:04:33 seems some people don't like our way of managing baremetals, we should prove the benefits.
02:04:36 :-(
02:04:39 what a pity
02:04:42 hello mogan
02:05:04 yuntongjin: o/
02:05:30 and some people agree
02:05:49 yes
02:06:08 there's a discussion about adding support for RAID configuration for Ironic instances in Nova
02:06:18 #link https://review.openstack.org/#/c/408151/
02:07:01 Please keep an eye on that, and we may also need to prepare a spec after the refactoring of flavor/scheduler
02:07:13 * zhenguo gives folks a few minutes to review
02:07:35 China Mobile is looking for host aggregates in their baremetal cloud
02:08:13 zhenguo: is Roman an ironic guy?
02:08:15 yuntongjin: seems they have many downstream hacks now
02:08:20 https://review.openstack.org/#/c/441924/
02:08:38 liusheng: no, a Mirantis guy working on nova
02:08:39 zhenguo: they are trying to upstream this
02:09:30 yuntongjin: that won't be easy
02:10:43 zhenguo: that's true, not an easy job.
02:11:03 ok, let's move on
02:11:11 Mogan can really be helpful here to address this requirement
02:11:31 yuntongjin: sure, but it seems they can't wait for us
02:11:46 yuntongjin: we can discuss this in the open discussion topic
02:11:49 they just want to use nova
02:11:56 maybe because people just know little about Mogan
02:12:13 yes, and we are not that mature now
02:13:41 OK, there were really many refactors during the last week and so many conflicts on the in-progress patches, sorry for the inconvenience.
02:14:53 As the Placement service is not ready now, we will save node resources to the db instead of the cache, and split out the scheduler later
02:15:06 wdyt?
02:15:44 in fact, we have already saved them to the db now :D
02:16:26 I found the placement service in the N version
02:16:51 yes, but there's still a long way to go before it's mature
02:17:52 so maybe we can use it in the next version
02:18:18 hope so, hah
02:18:34 it is better to make our scheduler easy to migrate to the future placement service
02:19:17 yes, with placement we still have a scheduler, we just need to save resources there and use it for scheduling
02:19:20 closer to the placement service implementation
02:19:30 sure
02:20:15 okay, the last thing is about Valence
02:20:28 the Valence integration spec may need an update after the new flavor lands.
02:20:28 #link https://review.openstack.org/#/c/441790/
02:21:19 aha, I have changed to use node_type instead of instance_type in the ironic node's properties
02:21:56 is it still the instance type name?
02:22:03 yes
02:22:07 ok
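For context on the node_type exchange above: the key lives in the Ironic node's properties dict and holds the instance type name Mogan matches against. A minimal sketch of setting it with python-ironicclient, assuming keystone auth kwargs; the node UUID, the 'gold' value, and the credentials are placeholders for illustration, not anything decided in the meeting:

    # Sketch: tag an Ironic node with the instance type name Mogan matches on.
    # All values below are placeholders.
    from ironicclient import client

    ironic = client.get_client(
        1,  # Ironic API major version
        os_auth_url='http://controller:5000/v3',
        os_username='admin',
        os_password='secret',
        os_project_name='admin')

    # properties/node_type replaces the earlier properties/instance_type key
    ironic.node.update(
        'NODE-UUID',
        [{'op': 'add', 'path': '/properties/node_type', 'value': 'gold'}])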
02:22:43 but I'm still thinking about the flavor pattern, will draft a spec later
02:23:19 you mean refactor the flavor model?
02:23:25 yes
02:23:41 we need a new flavor
02:23:46 hah
02:24:39 the flavor should not only give information about size, but also about type
02:24:58 that is necessary to support a variety of drivers
02:25:24 will it replace the current flavor?
02:25:46 yes, do you like the current pattern?
02:26:22 I think it's a little simple
02:26:30 hah
02:26:34 agree
02:27:16 we may need to prove cpu, ram ..., as we can also compose nodes from Valence
02:27:26 s/prove/provide
02:27:49 I see...
02:27:52 do we have a spec about this refactoring?
02:28:01 I will do that
02:28:06 great
02:28:21 but I will split the scheduler out first
02:28:41 sorry, a foolish question: with Valence, we can provision an instance with exact cpu, ram .. right?
02:29:02 seems yes
02:29:08 cool
02:29:13 Valence is not a driver for us
02:29:16 maybe like nova?
02:29:30 yes
02:29:32 like vm
02:29:59 we compose the hardware first, then enroll it to our driver (ironic), then provision it
02:30:19 so Valence is in the layer above the driver
02:30:54 and we don't need to schedule a node if we go to Valence to compose nodes
02:31:43 sounds like a magical Valence, lol
02:31:51 hah, yes
02:31:59 is Valence a pool that contains storage and compute?
02:32:13 litao: yes, specific hardware
02:32:37 sounds interesting
02:32:47 hah
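The compose-first flow described above, as a rough Python sketch. The valence client and its compose_node call are hypothetical stand-ins, and while the ironic calls follow python-ironicclient, the whole thing only illustrates the layering (Valence above the driver, no scheduling step), not a working integration:

    # Rough sketch of the compose -> enroll -> provision ordering.
    # valence and its compose_node() are hypothetical stand-ins.
    def provision_composed_instance(valence, ironic, flavor):
        # 1. Compose hardware in Valence to the exact size the flavor
        #    asks for (hypothetical client and fields).
        composed = valence.compose_node(cpus=flavor['cpus'],
                                        ram_mb=flavor['ram_mb'])

        # 2. Enroll the composed hardware with the driver (ironic); from
        #    here on it looks like any other baremetal node.
        node = ironic.node.create(
            driver='ipmi',                        # placeholder driver name
            driver_info=composed['driver_info'],  # hypothetical field
            properties={'node_type': flavor['name']})

        # 3. Provision through the normal driver path. No scheduling is
        #    needed here, since Valence already built the hardware to order.
        ironic.node.set_provision_state(node.uuid, 'active')
        return node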
02:33:02 anything else here?
02:33:24 ok, let's move on
02:33:31 #topic task tracking
02:34:00 hi guys, how's everything going with your tasks?
02:35:10 The quota patch has some issues, I'm figuring them out.
02:35:39 wanghao_: got it, thanks
02:35:43 Not a big deal.
02:35:52 My multi-instance support work is done
02:36:03 the console patches are ready for review
02:36:09 litao: hah, do you want to take on more work :P
02:36:19 liusheng: ok, thanks
02:36:20 zhenguo:
02:36:30 zhenguo: yes
02:36:48 hah
02:37:19 I need to take on some other work
02:37:33 litao: thanks very much
02:38:34 as there are many refactors coming, all tasks seem blocked, sorry
02:38:45 I will try to finish that as soon as possible
02:39:24 but it's almost there; next week we can focus on the flavor design
02:39:42 awesome work
02:39:59 zhenguo, I was doing research on my thesis over the past several days, so I didn't carry my work forward. I'll finish the patch about instance_fault next
02:40:02 but I'm pretty sure there will be many small issues
02:40:16 luyao: oh, thanks
02:41:05 zhenguo: yes, I have found several. lol
02:41:24 liusheng: please feel free to fix them, thanks
02:41:30 zhenguo: sure
02:42:15 zhangyang: how's everything going with the cloudboot driver?
02:42:57 zhenguo: need to push a change to cloudboot to fit glance, still working :D
02:42:59 zhangyang: I have made the driver interfaces more general, hope it will help you.
02:43:42 zhenguo: that's great
02:44:31 zhangyang: ok, please ping me if something still needs to be changed
02:44:58 zhenguo: ok, thanks
02:45:04 np
02:45:17 ok, thanks everyone for the hard work on your tasks this week
02:45:22 let's move on
02:45:32 #topic open discussion
02:45:44 who's got a thing here :)
02:46:24 a little idea, I'd like to hear your opinions
02:46:49 how about more our meeting to a specific openstack meeting channel?
02:46:56 s/more/move
02:47:24 is being an official project a requirement?
02:47:37 not sure
02:48:15 maybe we can ask someone from a non-official project that has done this before
02:48:23 I would like to, but I'm not sure there's still a suitable time slot for us.
02:48:56 you mean to invite more people to attend the meeting, right?
02:49:34 at least to make our voice heard in public
02:49:54 * litao maybe foreigners
02:50:10 I think our project is too closed currently, so many people don't know about us
02:50:29 liusheng: yes, that's the problem
02:50:49 liusheng: we don't even send out emails to the public
02:50:58 that may be a more effective way
02:51:04 zhenguo: yes :-(
02:51:27 let's try to change
02:51:40 zhenguo: thanks
02:51:54 liusheng: thanks for pointing that out
02:52:33 liusheng: and we may need some blogs to introduce Mogan
02:53:00 I agree
02:53:17 but of course not only Chinese internal blogs
02:53:56 zhenguo: yes, I think that is even more important than coding work for the current period.
02:54:21 It's best to introduce Mogan to more foreigners
02:54:24 liusheng: hah
02:54:50 yes, but we still don't have enough advanced things to say :(
02:55:47 zhenguo: hah, yes, we can just draw up a blueprint first, even if it's just personal thoughts
02:56:15 liusheng: yes
02:56:55 maybe RAID is a good point for us
02:57:12 zhenguo: good idea
02:58:25 and another thing is docs
02:58:54 I think it's hard for users to deploy or use Mogan with the current sparse docs
02:59:08 hah, yes
02:59:10 agree
02:59:29 there's not even anything about the CLI :(
02:59:43 nothing at all
03:01:02 now the wiki is the only place to find out what Mogan is #link https://wiki.openstack.org/wiki/Mogan
03:01:12 hah
03:01:15 oh, sorry, we're over time
03:01:33 thanks everyone, we can continue the discussion here later
03:01:41 thanks, bye
03:01:45 bye
03:01:48 #endmeeting