08:00:11 #startmeeting daisycloud
08:00:12 Meeting started Fri Mar 17 08:00:11 2017 UTC and is due to finish in 60 minutes. The chair is huzhj. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:00:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
08:00:15 The meeting name has been set to 'daisycloud'
08:00:24 #topic Roll Call
08:00:38 #info Zhijiang
08:01:31 #zhuzeyu
08:02:18 @zhuzeyu, please use the info command to ack the roll call
08:02:43 info luyao
08:03:02 info zhuzeyu
08:03:26 like this: #info Zhijiang
08:03:42 #info luyao
08:03:43 #info zhuzeyu
08:04:04 #info zhouya
08:04:11 #topic Cinder/Ceph support
08:06:10 Since Kolla already supports Cinder/Ceph, all Daisy needs to do is store the user config in the DB and use that info to generate globals.yml. Daisy also needs to do some work after the OS installation and before the Kolla deploy.
08:06:53 The question is, do we need to create a new DB table to store that info, or use an existing table?
08:07:22 And what fields do we need in that DB table?
08:07:27 We need to add a new field in the service_disk table to store the disk name, e.g. sda.
08:08:48 Please use the info command whenever you think what you said needs to be recorded in the meeting minutes.
08:09:26 #info zhouya said we need to add a new field in the service_disk table to store the name of sda
08:11:36 The service_disk table was used to describe partitions used by any service that needs storage, such as nova and glance.
08:12:01 Do you think it is suitable for cinder/ceph to use it?
08:12:58 In my opinion, it is appropriate to use it to store the ceph config.
08:14:33 Good. Note that although the service_disk table can be used by open source Daisy to describe partitions used by services, the OS installation layer script does not use it when installing a non-TFG OS.
08:15:02 absolutely
08:17:00 When it comes to CentOS installation, currently we only support doing partitioning on the first hard drive.
So the second/third disk is free to be used as a Ceph disk.
08:17:32 We also need to execute 'parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1' to label the disk for the Kolla Ceph backend.
08:18:06 Yes
08:18:18 What about the LVM backend?
08:18:51 The LVM backend is much more complicated.
08:19:20 Why? Does it need more table fields to describe it?
08:19:33 We need to execute many more commands than for the Ceph backend.
08:19:54 That is not really a big deal.
08:20:47 The deal is that we'd better use a single DB design to support both the LVM and Ceph backends. Can we achieve that?
08:20:48 No extra design needed; like the Ceph backend, we just have to add a new field to store the disk name.
08:21:36 Yes, we can achieve that.
08:21:39 Good.
08:21:58 #info like the ceph backend, we just have to add a new field to store the disk name for LVM
08:22:18 Luckily, the daisycloud-core code already makes all controller nodes storage nodes.
08:22:33 Can we use the existing field "LUN"?
08:22:45 Maybe not.
08:23:01 OK
08:23:06 Because the LUN field is an integer.
08:23:11 Got it.
08:23:33 We need to add an extra field.
08:23:47 OK, any other updates/risks about this topic?
08:24:36 Let's move on.
08:24:49 #topic Daisy as Kolla image version manager
08:25:17 I think luyao1 is working on this topic.
08:25:27 yes
08:25:41 https://review.openstack.org/#/c/446820/
08:25:54 Kolla is good at upgrading between versions, but lacks a version manager concept.
08:26:22 #info luyao1 is working on Daisy as Kolla image version manager
08:26:27 #link https://review.openstack.org/#/c/446820/
08:27:12 Yes, I saw this PS, but it needs more review.
08:27:40 Yes, maybe there is something I have not considered.
08:27:44 great
08:27:46 luyao1, can you please show us the design/spec for this work?
08:27:53 yes
08:28:31 Do you prefer to write it right now, or to write a spec file later offline and send it to us by email?
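A minimal sketch of the disk-labeling step quoted in the Cinder/Ceph topic above, wrapping the exact parted command from the log. This is illustrative only, not Daisy's actual code: the helper names are assumptions, and the device argument (e.g. /dev/sdb) depends on which spare disk is free after OS installation.

```python
import subprocess


def ceph_osd_label_cmd(device):
    """Build the parted command from the log: create a GPT label and one
    full-disk partition named KOLLA_CEPH_OSD_BOOTSTRAP, which Kolla's
    Ceph bootstrap uses to locate OSD disks."""
    return ["parted", device, "-s", "--",
            "mklabel", "gpt",
            "mkpart", "KOLLA_CEPH_OSD_BOOTSTRAP", "1", "-1"]


def label_ceph_osd_disk(device):
    # Would run on the target host after OS install and before the
    # Kolla deploy, as discussed above. Destructive: wipes the disk label.
    subprocess.run(ceph_osd_label_cmd(device), check=True)
```

Splitting command construction from execution keeps the destructive part out of unit tests; only `label_ceph_osd_disk("/dev/sdb")` actually touches the disk.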
08:28:32 Firstly, version management such as version add/list/... is already supported.
08:28:41 ok
08:28:42 OK
08:28:58 I can write a spec file later offline.
08:29:40 Is the preloaded version treated as the default version by Daisy?
08:30:27 I think we still need the preloaded version after supporting version management.
08:30:34 What do you think?
08:31:03 What's the meaning of "preloaded version"?
08:31:21 We need to preload a version before deploy.
08:31:36 luyao1, I mean the Kolla image that is loaded during Daisy install.
08:31:54 #info luyao1 will write a spec to describe the version manager
08:31:59 #undo
08:32:00 Removing item from minutes: #info luyao1 will write a spec to describe the version manager
08:32:00 But it is not necessary.
08:32:05 #action luyao1 will write a spec to describe the version manager
08:32:59 So we will not load any Kolla image during Daisy installation any more after your work?
08:33:26 Yes, I moved the version-loading code out of the Daisy installation.
08:33:57 When using the daisy install cluster command, it can get the version by path or by version_id,
08:34:06 then load it.
08:34:12 OK, got it.
08:34:24 So if we do not have a version,
08:34:32 do we get an error log?
08:34:41 So now we need to pass an additional argument to the daisy install API to get the old job done, right?
08:35:37 This seems to be a backward compatibility problem, so make sure to also modify the downstream code, such as OPNFV, to adapt to your change...
08:35:47 When daisy install finishes, we update tecs_version_id in the database to show that the version install succeeded.
08:36:08 Yes, the OPNFV code needs to change.
08:37:11 #info after luyao1's work, no more preloaded kolla image. the user needs to provide a version_id to call the daisy install API. This seems to be a backward compatibility problem, so make sure to also modify the downstream code, such as OPNFV, to adapt to the change...
08:37:35 luyao1, but how can a user know what the value of version_id is?
08:38:27 When creating a cluster, we choose a version_id and update it in the cluster database.
08:39:09 I mean, if there is more than one version, how can a user tell one from another when creating a cluster?
08:40:18 Every version has a version_id, and the version_name is unique.
08:40:36 In the dashboard we choose the version name to update the version
08:40:47 in the cluster database.
08:41:09 So what about the command line user, such as an automatic CI script?
08:42:13 First we get the version_id by version name with the command "daisy version-list".
08:42:27 Great
08:42:36 So "daisy version-list" is the KEY.
08:42:52 Then client.cluster.add/update("tecs_version_id": "xxxx")
08:43:03 OK
08:43:11 yes
08:43:28 Can we rename tecs_version_id to openstack_version_id?
08:43:29 OK, anything else for this topic?
08:43:50 Oh, the first step is "daisy version-add version_name backend" to generate the version_id.
08:44:23 So the user should manually add the version info into Daisy, right?
08:44:27 Then use "daisy version-list" to get the needed version_id.
08:44:40 yes
08:44:58 Yes, the user has to manually add the version info.
08:45:12 OK, just a little bit complex.
08:45:34 Sorry, my description was fuzzy.
08:45:56 Maybe a default, pre-existing (not loaded) version could solve this.
08:46:13 OK, let's move on
08:46:17 to the next topic.
08:46:18 ok
08:46:31 #topic OPNFV: VM/BM Deployment & Functest Integration
08:47:55 Do we have Julien/Serena online? They have done a lot of great work these days.
08:48:06 good
08:48:10 also zhouya
08:49:04 So zhouya, is there any status update from your side?
08:49:06 We have some problems while querying the progress of the OpenStack installation,
08:49:45 and I am working on finding the cause of this problem.
08:50:23 The 2.28 version is good now.
08:50:34 Maybe some change caused it?
08:50:46 So it is our upstream code that caused the problem?
08:50:50 #info we have some problems while querying the progress of the openstack installation
08:51:18 Maybe the commit https://review.openstack.org/#/c/441791/ caused the problem.
08:51:20 But we are tagged, right? Why still use new code?
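A sketch of the CI-script workflow agreed in the version-manager topic above: add a version, look up its version_id by the (unique) version name, then pass that id to the cluster update call. The record keys and the helper are illustrative assumptions, not the exact daisyclient API; only the command names and the tecs_version_id field come from the log.

```python
def find_version_id(versions, name):
    """Pick the version_id whose version name matches `name`.

    `versions` stands in for the parsed output of "daisy version-list";
    version names are unique per the discussion, so at most one record
    can match. The "name"/"id" keys are assumed record fields.
    """
    for v in versions:
        if v["name"] == name:
            return v["id"]
    raise ValueError("no such version: %s" % name)


# Workflow from the log (client call is the one quoted at 08:42:52):
#   1. daisy version-add <version_name> <backend>   -> creates a version
#   2. versions = parse("daisy version-list")
#   3. vid = find_version_id(versions, some_version_name)
#   4. client.cluster.update(..., tecs_version_id=vid)
```

The lookup-by-name step is what makes the flow scriptable: a CI job can hard-code a stable version name while the generated version_id changes on every version-add.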
08:52:23 Just this patch was downloaded additionally,
08:52:47 and the rest of the code is at the tag.
08:53:18 zhouya, if there is a doubt, just revert it and see whether the bug is solved.
08:53:48 OK, I will revert the code and test the reverted version.
08:54:14 Do not revert all changes, just the changes about threading prepare.sh.
08:54:27 ok
08:54:46 Just this commit: https://review.openstack.org/#/c/441791/
08:54:48 Does prepare.sh report progress to the DB?
08:55:04 yes
08:55:28 If prepare.sh finishes successfully, 10% is written to the DB.
08:55:29 Try not to let it report,
08:55:42 but let the main thread report.
08:56:16 Oh, forget it.
08:56:45 Maybe we should change the code to update the progress in the DB only after all threads have executed successfully.
08:56:56 I thought there might be a race condition, but there should not be.
08:57:01 Sorry, I just mean it deploys successfully on 10.62.105.18 in every day's Jenkins trigger.
08:57:17 So it is not on the 114 node?
08:58:15 zhouya, if each prepare.sh just updates its own progress, then there should be no race condition.
08:58:19 I think the 114 node has been failing to deploy all the time.
08:58:22 The node I see is 10.62.105.18, and it is deployed every day.
08:59:29 Each prepare.sh just updates the progress of the node it is executed on.
08:59:34 Let's double check this offline.
09:00:47 #info we need to dive deep into the progress update/read mechanism
09:01:57 #info passing functest depends on Ceph/Cinder support
09:02:12 Anything else?
09:02:37 We are running out of time.
09:02:49 Let's wrap this up.
09:03:06 Thank you all.
09:03:22 no
09:03:24 thanks
09:03:31 Have a good weekend.
09:03:43 bye
09:03:44 #endmeeting
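A minimal sketch of the per-node progress scheme discussed under the OPNFV topic above: each prepare.sh thread writes only its own node's progress, so writers never contend on a shared value, and any cluster-wide figure is derived on read. The data structures are illustrative assumptions; Daisy stores progress in its DB, not in an in-memory dict.

```python
import threading

# node -> percent; exactly one writer thread per key, as each
# prepare.sh updates only the node it runs on (per the log).
progress = {}
progress_lock = threading.Lock()


def report_node_progress(node, percent):
    """Called from a node's worker thread after a step finishes,
    e.g. percent=10 once that node's prepare.sh has succeeded."""
    with progress_lock:
        progress[node] = percent


def overall_progress(nodes):
    """Derive cluster-wide progress as the minimum across nodes, so
    10% is reported only after every node has passed prepare.sh
    (nodes that have not reported yet count as 0)."""
    with progress_lock:
        return min(progress.get(n, 0) for n in nodes)
```

Keeping the aggregate on the read side matches the point made at 08:58:15: with one writer per key there is no write/write race, and the reader gets a consistent view under the lock.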