08:00:11 <huzhj> #startmeeting daisycloud
08:00:12 <openstack> Meeting started Fri Mar 17 08:00:11 2017 UTC and is due to finish in 60 minutes.  The chair is huzhj. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:00:13 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
08:00:15 <openstack> The meeting name has been set to 'daisycloud'
08:00:24 <huzhj> #topic Roll Call
08:00:38 <huzhj> #info Zhijiang
08:01:31 <zhuzeyu> #zhuzeyu
08:02:18 <huzhj> @zhuzeyu, please use info command to ack the roll call
08:02:43 <luyao1> info luyao
08:03:02 <zhuzeyu> info zhuzeyu
08:03:26 <huzhj> like this #info Zhijiang
08:03:42 <luyao1> #info luyao
08:03:43 <zhuzeyu> #info zhuzeyu
08:04:04 <zhouya> #info zhouya
08:04:11 <huzhj> #topic Cinder/Ceph support
08:06:10 <huzhj> since Kolla already supported Cinder/Ceph , all Daisy need to do is store user config into DB and use this info to generate global.yml. also ,daisy need to do some work after OS installation and before Kolla deploy
08:06:53 <huzhj> question is , do we need to create new DB table to store those info or use the existing table
08:07:22 <huzhj> and what fields we need to use in that DB table
08:07:27 <zhouya> we need to add a new segement in service_disk table to store the name of sda
08:08:48 <huzhj> please use info command when ever you think what you said is need to be recorded into meeting minutes.please
08:09:26 <huzhj> #info zhouya said we need to add a new segement in service_disk table to store the name of sda
08:11:36 <huzhj> service_disk table was used to describe partitions used for any service that need a storage. such as nova, glance
08:12:01 <huzhj> do you think it is suitable for cinder/ceph to use it?
08:12:58 <zhouya> In my opinion, I think it is properly to use this to store cepj
08:13:01 <zhouya> ceph
08:14:33 <huzhj> Good. noticed that, although service_disk table can be used by opensource daisy to describe partitions used for services, the os intallation layer script does not use it , when install  a non-tfg OS
08:15:02 <zhouya> absolutely
08:17:00 <huzhj> when it comes to centos installation, currently we only support doing partitioning  on the first hard drive. So the second /third disk can be free to be used as Ceph disk
08:17:32 <zhouya> also we need to execute 'parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1' to label the tag of disk for kolla ceph backend
08:18:06 <huzhj> Yes
08:18:18 <huzhj> what about the LVM backend?
08:18:51 <zhouya> LVM backend is much more complected
08:19:20 <huzhj> why? does it need more table filed to describe?
08:19:33 <zhouya> we need to execute much more commend then CEPH backend
08:19:54 <huzhj> that is not really a big deal
08:20:47 <huzhj> the deal is we'd better to use a single db design to support both LVM and Ceph backend. can we achieve that?
08:20:48 <zhouya> no,like cetph backend ,we just have to add new segement to store sda name
08:21:36 <zhouya> Yes,we can achieve that
08:21:39 <huzhj> Good.
08:21:58 <huzhj> #info like cetph backend, we just have to add new segement to store disk name for LVM
08:22:18 <zhouya> luckly,the code of daisycloud-core has made all controller nodes as the storage node.
08:22:33 <huzhj> can we use the existing field (segment)  "LUN" ?
08:22:45 <zhouya> Maybe not.
08:23:01 <huzhj> OK
08:23:06 <zhouya> because the LUN segment is inter
08:23:11 <huzhj> got it
08:23:18 <zhouya> integer
08:23:33 <zhouya> we need to add an extra segment.
08:23:47 <huzhj> OK, any other updates/risks about this topic?
08:24:36 <huzhj> lets move on
08:24:49 <huzhj> #topic Daisy as Kolla image version manager
08:25:17 <huzhj> I think luyao1 is working on this topic
08:25:27 <luyao1> yes
08:25:41 <luyao1> https://review.openstack.org/#/c/446820/
08:25:54 <huzhj> Kolla is good at do upgrading between versions, but lacks of a version manager concept
08:26:22 <huzhj> #info luyao1 is working on Daisy as Kolla image version manager
08:26:27 <huzhj> #link https://review.openstack.org/#/c/446820/
08:27:12 <huzhj> Yes, I saw this PS, but it needs more review
08:27:40 <luyao1> yes,maybe something I do not consider of
08:27:44 <zhouya> great
08:27:46 <huzhj> luyao1 can you please show us the design/spec  about this work?
08:27:53 <luyao1> yes
08:28:31 <huzhj> do you prefer write it right now or write a spec file later offline and send us through email?
08:28:32 <luyao1> firstly,version manage like version add/list/..is already suppout
08:28:41 <luyao1> ok
08:28:42 <huzhj> OK
08:28:58 <luyao1> I can write  a spec file later offline
08:29:40 <huzhj> Is the preloaded version is treated as the default version by Daisy?
08:30:27 <huzhj> I think we still need the preloaded version after support version manage
08:30:34 <huzhj> what do you think?
08:31:03 <luyao1> what's the meaing of preload version
08:31:21 <luyao1> we need proload version before deploy
08:31:36 <huzhj> luyao1 I mean the kolla image that loaded during daisy install
08:31:54 <huzhj> #info luyao1 will write a spec do describe the version manager
08:31:59 <huzhj> #undo
08:32:00 <openstack> Removing item from minutes: #info luyao1 will write a spec do describe the version manager
08:32:00 <luyao1> but not necessary.
08:32:05 <huzhj> #action luyao1 will write a spec do describe the version manager
08:32:59 <huzhj> So we don not load any kolla image during daisy installation any more after your work?
08:33:26 <luyao1> yes,I move code in daisy version install of load version
08:33:57 <luyao1> when use daisy install cluster command,it can get version by path or by version_id
08:34:06 <luyao1> then load it
08:34:12 <huzhj> OK, got it.
08:34:24 <zhouya> SO if we do not have a version.
08:34:32 <zhouya> do we have a error log ?
08:34:41 <huzhj> so no we need to pass an additional argument to daisy install API to get the old job done, right?
08:35:37 <huzhj> This seems to be a backward compatible problem, so make sure to also modify the down stream code ,such as OPNFV to adapt your change...
08:35:47 <luyao1> when daisy install finish ,we update tecs_version_id to database to show version install success
08:36:08 <luyao1> yes,opnfv code need change
08:37:11 <huzhj> #info after luyao1's work, no more preloaded kolla image. user need to provide a version _id to call into daisy install API. This seems to be a backward compatible problem, so make sure to also modify the down stream code ,such as OPNFV to adapt your change...
08:37:35 <huzhj> luyao1, but how a user can know what the value of version_id is?
08:38:27 <luyao1> when creat cluster ,we choose version_id and update to cluster database
08:39:09 <huzhj> I mean if there is more that one version, how a user can tell one from another, when creat cluster
08:40:18 <luyao1> every version have a version_id and version_name is unique
08:40:36 <luyao1> in dashborad we choose version name to update version
08:40:47 <luyao1> to cluster database
08:41:09 <huzhj> so what about the command line user? such as automatic CI script?
08:42:13 <luyao1> first we get version_id by version-name with command "daisy version list"
08:42:27 <huzhj> Great
08:42:36 <huzhj> So daisy version list is the KEY
08:42:52 <luyao1> then client.cluster.add/update("tecs_version_id": "xxxx")
08:43:03 <huzhj> OK
08:43:11 <luyao1> yes
08:43:28 <zhouya> can we modify tecs_version_id to openstack_version_id?
08:43:29 <huzhj> OK, anything else for this topic?
08:43:50 <luyao1> oh firstly is daisy version-add version_name backend to generate version-id
08:44:23 <huzhj> So user should manually add version info into daisy right?
08:44:27 <luyao1> then use "daisy version-list" to get needed version_id
08:44:40 <luyao1> yes
08:44:58 <luyao1> it should  manually add version info
08:45:12 <huzhj> OK, just a little bit complex
08:45:34 <luyao1> sorry ,I describe fuzzy
08:45:56 <huzhj> may be a default, pre-existed(not loaded) version can solve this
08:46:13 <huzhj> OK, let's move on
08:46:17 <huzhj> to next topic
08:46:18 <luyao1> ok
08:46:31 <huzhj> #topic OPNFV: VM/BM Deployment & Functest Integration
08:47:55 <huzhj> Do we have Julien/Serena online? They have done a lot of Great Jobs these days
08:48:06 <luyao1> good
08:48:10 <huzhj> also zhouya
08:49:04 <huzhj> So zhouya, is there any status update from your side?
08:49:06 <zhouya> we have some problem whild query the progress of openstack installation
08:49:45 <zhouya> and I am work on finding the caution of this question.
08:50:23 <luyao1> 2.28's version is good now
08:50:34 <luyao1> maybe some change cause it?
08:50:46 <zhouya> so it is our upstream code caused the problem?
08:50:50 <huzhj> #info we have some problem whild query the progress of openstack installation
08:51:18 <zhouya> maybe the commit https://review.openstack.org/#/c/441791/ caused the problem
08:51:20 <huzhj> But we are tagged right? why still use new code?
08:52:23 <zhouya> just this patch is downloaded additional
08:52:47 <zhouya> and the other code is tagged
08:53:18 <huzhj> zhouya, if there is a doubt , just revert it and to see if bug can be solved
08:53:48 <zhouya> ok,I will revert the code and have a test of the reverted version.
08:54:14 <huzhj> not revert all changes, just the changes about the threading prepare.sh
08:54:27 <zhouya> ok
08:54:46 <zhouya> just this commit  https://review.openstack.org/#/c/441791/
08:54:48 <huzhj> does prepare.sh report progress to DB?
08:55:04 <zhouya> yes
08:55:28 <zhouya> if prepare.sh have done successfully,10% will write to DB
08:55:29 <huzhj> try not let it to report
08:55:42 <huzhj> but let the main thread to report
08:56:16 <huzhj> Oh forget it
08:56:45 <zhouya> maybe we should change the code to let the progress of DB update after all thread have execute successfully.
08:56:56 <huzhj> I thought there may be race condition, but it should be not
08:57:01 <luyao1> sorry ,I just say deploy successfully in 10.62.105.18 every day'jenkins triger
08:57:17 <zhouya> so ,it is not on 114 node?
08:58:15 <huzhj> zhouya, if each prepare.sh just update its own progress, then there shoud be no race condition
08:58:19 <zhouya> I think 114 node is deployed failed all the time.
08:58:22 <luyao1> the node I see is 10.62.105.18,and it be deploy every day
08:59:29 <zhouya> each prepare.sh just update the progress of their own node which they executed on.
08:59:34 <huzhj> let's double check them offline
09:00:47 <huzhj> #info we need to dive deep into the progress update/read mechanizm
09:01:57 <huzhj> #info, pass functest depends on support Ceph/Cinder
09:02:12 <huzhj> anything else?
09:02:37 <huzhj> We are running out of time
09:02:49 <huzhj> let's wrap this up
09:03:06 <huzhj> thank you all
09:03:22 <luyao1> no
09:03:24 <luyao1> thanks
09:03:31 <huzhj> have a good weekend
09:03:43 <huzhj> bye
09:03:44 <huzhj> #endmeeting