13:00:32 #startmeeting senlin
13:00:33 Meeting started Tue Jun 21 13:00:32 2016 UTC and is due to finish in 60 minutes. The chair is yanyanhu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:34 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:37 The meeting name has been set to 'senlin'
13:00:48 hi
13:00:53 hi
13:00:54 hi~~
13:01:04 qiming is not here for family reason, so I will hold this meeting today
13:01:05 :)
13:01:11 hi, haiwei , elynn
13:01:25 will wait for a minute for other attender
13:01:27 ok
13:02:03 ok, lets start
13:02:14 please check the agenda and item you want to discuss :)
13:02:22 https://wiki.openstack.org/wiki/Meetings/SenlinAgenda#Weekly_Senlin_.28Clustering.29_meeting
13:02:34 #topic newton workitems
13:02:47 lets quick go through the etherpad
13:02:52 hi :)
13:02:55 okay
13:03:00 https://etherpad.openstack.org/p/senlin-newton-workitems
13:03:09 welcome rossella_s
13:03:12 hi, rossella_s
13:03:29 the first item is tempest test
13:03:41 tempest API test is almost done I think
13:03:50 elynn, I think it is?
13:03:57 yes
13:04:00 for both positive and negative cases
13:04:04 nice :)
13:04:09 Saw you add release note
13:04:25 We can move on to functional test and integration test
13:04:35 yes. If there is any case missing, plz feel free to propose patch for it :)
13:04:37 yes
13:04:49 the next step is migrating existing functional test to tempest
13:05:11 I think some existing functional test cases will be removed since they are duplicated with tempest API tests
13:05:23 will start to work on it
13:05:33 yes, a lot of them.
13:05:37 yea
13:05:50 since our tempest API test is very complete I think :)
13:06:07 any thing else about tempest test?
13:06:18 ok
13:06:23 Performance Test
13:06:30 actually no much progress this week...
13:06:42 just merged the rally plugin for cluster-resize
13:06:56 so now we have rally plugin support for cluster create/delete/resize
13:07:06 we can do some basic benchmarking now
13:07:15 I think eldon is not here
13:07:30 hope these plugins can provide some help for their test work
13:07:51 BTW, these plugins still stay in senlin repo
13:08:05 contribute them to rally repo is the plan
13:08:20 ok, HA topic
13:08:26 umm, Qiming is not here
13:08:34 xinhui, are you around?
13:09:04 guess no...
13:09:26 elynn, haiwei , do u guys have another thing want to discuss about this topic?
13:09:39 no
13:09:43 no
13:09:46 ok, lets skip it
13:09:56 profile
13:10:04 for container support
13:10:12 haiwei, any thing new :)
13:10:23 I noticed you proposed some patches to add docker driver
13:10:30 and the profile support as well
13:11:11 https://review.openstack.org/331947
13:11:18 this is the latest one
13:11:36 and this one has been merged
13:11:36 https://review.openstack.org/322692
13:12:19 haiwei, there?
13:12:56 ok, lets move on and maybe go back later
13:13:01 okay
13:13:09 ah, zaqar support
13:13:15 hasn't started I guess
13:13:26 not sure anyone has plan to work on it
13:16:02 welcome back haiwei :)
13:16:02 dropped just now
13:16:08 yes
13:16:14 :)
13:16:30 yanyanhu, just ask you about container support progress :)
13:16:54 I added container create/delete driver this week
13:17:04 if not, will try to spend some time on it after functional test migration is done
13:17:04 it's on our roadmap of newton release
13:17:05 should finish it
13:17:05 hope you can review it
13:17:06 next one is notification
13:17:07 skip it as well since Qiming is not here.
13:17:08 ok, these are all items in etherpad
13:17:10 the next topic in agenda is deploying senlin-engine and senlin-api in different hosts
13:17:10 this is a problem asked by eldon zhao from cmcc
13:17:11 guess he hasn't join meeting channel
13:17:23 haiwei, sure
13:17:29 will help to check it :)
13:17:29 wow, yanyanhu just said lots of words.
13:17:32 thanks
13:17:36 I thought my network is broken :)
13:17:47 it is real...
13:17:47 me too, yanyanhu
13:17:54 my network was broken....
13:18:07 I'm using proxy at home
13:18:11 haha, I thought it was my network's problem
13:18:14 the proxy is setup in softlayer
13:18:18 eldon_, was joined.
13:18:33 and I have to use vpn to connect to IBM network before I can access that softlayer machine...
13:18:43 hi, eldon_ , welcome :)
13:18:49 Is there any bug when deploy api and engine on different hosts?
13:19:20 eldon_, I just made a test this afternoon and found it worked in my local env :)
13:19:21 yes
13:19:21 I am curious about deploying api and engine on different hosts
13:19:34 what does it mean?
13:19:40 just need to be careful about two things
13:19:47 maybe, you have configured host in senlin.conf @yanyanhu
13:20:03 haiwei, that means senlin-engine is running in a host while senlin-api is running in another host
13:20:09 eldon_, yes
13:20:29 the "host" option in both machines must be the same
13:20:42 why do we need this feature?
13:20:50 but, i have one api and two engine.
13:20:53 to let senlin-engine and senlin-api talk using the same rpc server
13:20:54 I thought rabbitmq is the only tool to connect them together.
13:21:02 how can I configure host~
13:21:19 let me see
13:21:38 why you want to setup two senlin-engine services
13:21:43 and only one senlin-api services
13:21:47 service
13:22:03 if you want to add engine worker, you can configure it in senlin.conf
13:22:22 I got it, that is for some cases when nodes are too many
13:22:31 the structure of senlin is different from nova :)
13:22:39 who has one nova-api with multiple nova-compute
13:22:51 I use two engines for one api, I just want to make senlin more stronger
13:23:02 nova-compute is actually an agent which map request from API to VM creation
13:23:07 if one senlin-engine down, we have another engine
13:23:19 eldon_, nice try :)
13:23:21 it's not like nova-compute
13:23:29 just that could not match the design :)
13:23:35 eldon_, yes
13:23:40 that's the point
13:23:47 what is the difference?
13:23:54 nova and senlin
13:23:56 actually I guess you can setup two engines with one api
13:24:08 but the behavior could be unpredictable
13:24:20 senlin engine is not a proxy but some workers?
13:24:49 haiwei, I think senlin-engine is not agent like nova-compute
13:24:49 yes, I think so
13:24:55 eldon dropped...
13:25:13 hi, eldon_
13:25:14 sorry to logout
13:25:20 no problem
13:25:22 yes, i am in~
13:25:38 I think you can try to configure the 'host' option to let one senlin-api talk with two senlin-engines
13:25:40 can we setup two engines with one api?
13:25:45 I think so
13:25:50 how?
13:25:54 just I'm not 100% sure about the result
13:26:07 this afternoon, I changed code for this.
13:26:14 what is the 'host' option?
13:26:21 just set 'host' in senlin.conf in all three machines to the same value
13:26:28 senlin.conf, haiwei
13:26:40 and also set rabbit_host to the same one
13:26:43 yanyanhu, Why do we do that?
13:27:02 a IP address?
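[Editor's note: a minimal sketch of the senlin.conf setup being described, for readers following along. The host name and IP below are made-up examples, and the section holding rabbit_host may vary by release; this is not taken from an actual deployment.]

```ini
# senlin.conf on the api host and on BOTH engine hosts (illustrative values)
[DEFAULT]
# Must be identical on all three machines, so api and engines share
# one rpc server name under the senlin-engine topic.
host = senlin-cluster

# Every service must point at the same rabbit server (example IP).
rabbit_host = 192.168.1.10
```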
13:27:07 elynn, eldon_ want to make a try :)
13:27:14 to see whether that works
13:27:18 haiwei, yes
13:27:29 https://github.com/EldonZhao/senlin/blob/eldon/senlin/rpc/client.py
13:27:30 rabbit_host is a ip address where rabbit server is located
13:27:36 def __init__(self):
13:27:36     self._client = messaging.get_rpc_client(
13:27:36         topic=consts.ENGINE_TOPIC,
13:27:36         version=self.BASE_RPC_API_VERSION)
13:27:42 It seems we need a scheduler
13:27:49 I removed server param, and it works
13:27:55 haiwei, scheduler for?
13:28:02 engines
13:28:17 eldon_, if there is only one api and one engine, you don't need to remove server param
13:28:23 it worked in my local env
13:28:26 yanyanhu, I mean why do we need to set host the same in 3 machines?
13:28:31 it can be random for rabbitmq to choose which engine to send request?
13:28:38 haiwei, it can be, just no necessary I guess
13:28:47 yes
13:29:01 elynn, since eldon wants one senlin-api to talk with two senlin-engines at the same time
13:29:21 so we need to ensure all three of them interact with each other using the same rpc server.topic
13:29:27 which created in rabbit_host
13:29:32 our project wants senlin more stronger.
13:29:37 haiwei, it's broadcast
13:29:41 So we need this feature.
13:29:50 but only one engine can receive and handle the request from api
13:30:03 I think the sever must be different?
13:30:05 eldon_, I guess we don't need to revise existing code
13:30:08 really?
13:30:10 to achieve your goal :)
13:30:18 multi engines is the same as multi workers
13:30:25 let me see, elynn
13:30:26 I thought.
13:30:29 https://github.com/openstack/heat/blob/8b02644d631ea2d28b27fcc30d7b8091b002a9db/heat/rpc/listener_client.py#L37-L40
13:30:41 oh, right, if the 'host' is the same for two senlin-engines
13:30:49 I guess one of them can't start correctly
13:30:52 Here is the codes from heat, it just simply set server to different engine_id
13:31:02 elynn, a little different
13:31:31 It's just a mark for each engine I think?
13:31:32 elynn, we don't do this
13:31:54 the server for engine service is configured using 'host' option
13:31:59 yes, not exactly the same.
13:32:07 server of each dispatcher is the engine_id
13:32:27 I think that's fine.
13:32:53 so for senlin-engine service, all engine instances have the same rpc "server"
13:33:13 I am a little interested about multi-workers.
13:33:18 that's why rpc request from senlin-api can broadcast to all of them directly
13:33:29 sorry, broadcast is not exact word
13:33:40 I got what you mean.
13:33:45 so eldon_'s problem is he can't get it work when one api and 2 engines on different hosts?
13:33:56 elynn, yes, looks so
13:34:12 Unless delete host parameter as he said.
13:34:18 hi, eldon_ , I will make further test tomorrow to test that case
13:34:29 right, eldon_ ?
13:34:34 I guess it should work directly
13:34:46 but I'm not sure about it :)
13:34:51 We should make some unit tests for it after investigation...
13:35:08 now I can explain again:)
13:35:19 based on my understanding of oslo.messaging, I guess either it works directly or one senlin-engine can't start correctly
13:35:24 we have one api + two engines.
13:35:44 eldon_, yes
13:35:56 one engine is deployed in api's server, and other one in other server.
13:35:58 sorry let you explain again, I'm listening
13:36:18 ok
13:36:38 I find that, if we don't configure host in senlin.conf, api will always call engine in its server
13:37:01 eldon_, that's true since default 'host' is the host name of your machine :)
13:37:21 because, if host isn't configured, it will use hostname in get_rpc_client function.
13:37:33 yes
13:37:35 exactly
13:37:53 if i stop engine in api's server, it can't call engine in other server automatically.
13:38:16 eldon_, yes
13:38:22 because api can't resolve the hostname?
13:38:32 since by default, rpc call will be send to 'server1.topic'
13:38:39 if I remove server param in get_rpc_client, it can automatically call engine in next engine.
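[Editor's note: the routing behavior debated above can be sketched with a stdlib-only toy model of how an AMQP-backed RPC layer names its queues. This illustrates the "server.topic" idea from the discussion only; the ToyBroker class and all engine/queue names are invented, not real oslo.messaging or senlin code.]

```python
from collections import defaultdict


class ToyBroker:
    """Toy model of AMQP queue routing: queue name -> consuming engines."""

    def __init__(self):
        self.queues = defaultdict(list)

    def listen(self, engine, topic, server):
        # An engine-side RPC server consumes from the shared topic queue
        # and from a server-specific "topic.server" queue.
        self.queues[topic].append(engine)
        self.queues["%s.%s" % (topic, server)].append(engine)

    def call(self, topic, server=None):
        # A client built with a server targets "topic.server"; without
        # one, any engine on the shared topic queue may take the call.
        queue = topic if server is None else "%s.%s" % (topic, server)
        consumers = self.queues.get(queue, [])
        # The first competing consumer "wins" in this deterministic toy.
        return consumers[0] if consumers else None


broker = ToyBroker()
# Both engines configured with the same 'host' value, as eldon_ wants:
broker.listen("engine-1", "senlin-engine", "shared-host")
broker.listen("engine-2", "senlin-engine", "shared-host")

print(broker.call("senlin-engine", server="shared-host"))  # engine-1
print(broker.call("senlin-engine", server="other-host"))   # None
```

In this model the api reaches some engine as long as at least one engine listens under the shared "host" value, which is the high-availability effect being asked for; an api configured with a "host" no engine uses finds no consumer at all, which is eldon_'s original failure.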
13:39:13 https://github.com/openstack/senlin/blob/master/senlin/common/config.py#L92 it does retrieve the hostname and set it as default.
13:39:20 because api still call engine in its host, but it has already stopped.
13:39:21 eldon_, that is because after you remove 'host', rpc request will not target to all rpc server
13:39:25 with the given topic
13:39:31 yes.
13:39:46 so the correct way is configuring two senlin-engine to listen to the same rpc server :0
13:40:24 rather than removing rpc server parameter from rpc client of senlin-api
13:40:26 if it needs configuration, we can't add senlin-engine automatically.
13:40:37 eldon_, so in api host, can it ping the other server by hostname?
13:40:41 I find heat solved in the same way.
13:40:42 eldon_, but that is the correct way :0
13:40:43 :)
13:40:54 you can't ensure the rpc topic is unique :)
13:41:12 topic is unique, i am sure
13:41:15 we can't ensure the rpc topic senlin engine used is unique in the rabbit_host
13:41:17 topic is senlin
13:41:42 it's senlin-engine I guess
13:42:03 https://github.com/EldonZhao/heat/blob/master/heat/rpc/client.py
13:42:41 so the 'host' here actually means api host?
13:43:07 elynn, it's rpc server actually
13:43:15 let me find the code
13:43:27 http://git.openstack.org/cgit/openstack/senlin/tree/senlin/engine/service.py#n87
13:43:36 http://git.openstack.org/cgit/openstack/senlin/tree/senlin/engine/service.py#n124
13:44:13 http://git.openstack.org/cgit/openstack/senlin/tree/senlin/cmd/engine.py#n42
13:44:40 and this
13:44:40 http://git.openstack.org/cgit/openstack/senlin/tree/senlin/common/consts.py#n16
13:45:23 so we use both 'host' and 'topic' together to uniquely identify the rpc topic that a senlin-engine(may include multiple workers) listens to
13:45:35 yes.
13:45:44 and by default, the rpc request of a senlin-api will be sent to this place
13:46:05 okay, if it's rpc server here, then we should use rabbitmq_host option?
13:46:24 so I mean if you want two senlin-engines receive rpc request from one api, you can configure them listen to the same rpc server.topic
13:46:33 if it is allowed by oslo.messaging :)
13:46:47 So if rabbitmq and api doesn't place on one host, we must set host to rabbitmq host, right?
13:47:15 we should set "rabbit_host" to the ip of server where rabbit server is located
13:47:21 yayanhu, our cinder project uses your way to solve the problem.
13:47:37 host and rabbit_host are two different config options :)
13:47:54 they configure the same host in three cinder-volume servers.
13:48:00 eldon_, nice, that means oslo.messaging allows this kind of configure :)
13:48:35 yes. that's just like remove key-server.HAHA
13:48:48 eldon_, er, you can say that :)
13:49:01 since if we remove key of rpc server
13:49:30 there is risk for people who just deploy one senlin-api and one senlin-engine :)
13:49:34 But compared with nova-compute, it's not the exact way to solve it.
13:49:40 they can't get any benefit
13:49:47 eldon_, exactly
13:49:52 the design is different
13:50:02 So if we have 2 api and one engine, what will it be...
13:50:11 nova-api know which nova-compute it is talking with always
13:50:27 nova-api sends requests to specific nova-compute based on scheduling result
13:50:39 not broadcast to let them compete :)
13:50:54 if we configure host of nova-compute same, vms can't build.
13:50:59 but in senlin's design, those engines will compete to get the task
13:51:06 eldon_, yes
13:51:11 I guess so
13:51:25 that doesn't make sense for nova :)
13:51:33 since nova has scheduler
13:51:39 but we don't need :)
13:51:48 So, if our feature becomes complex, we can't set engine host the same with each other.
13:52:02 yes.
13:52:24 by "feature becomes complex" you mean?
13:52:43 oh, you mean send rpc request to specified senlin-engine?
13:52:49 if so, yes
13:52:58 if so, we need scheduler kind of thing
13:53:11 to make the decision before senlin-api sends rpc request out to engine(s)
13:53:14 if we need schedule, e, we don't need~
13:53:52 So, there is no problem now~
13:54:04 but currently, I guess we don't need it for senlin-engine is not a worker doing local job in host it is staying in
13:54:21 yes. I see:)
13:54:33 eldon_, so basically, there are two different designs :)
13:54:45 for senlin-api/engine and nova-api/compute :)
13:55:00 yes.
13:55:20 eldon_, nice, plz make more tries and we can make further discussion about this topic
13:55:24 So we can make host the same to solve my problem:)
13:55:28 it's very interesting and also very important
13:55:41 eldon_, yes, it can be the current solution
13:56:14 ok, thank you so much for starting this topic, eldon_ :)
13:56:40 ok. last 4 minutes
13:56:45 for open discussion
13:56:49 Now, cmcc uses lb_policy and basic scaling policies in our project. And rally is also in use for testing.
13:56:52 #topic open discussion
13:56:59 eldon_, great
13:57:20 looking forward to your result and feed back :)
13:57:24 any question, just ping us
13:57:52 oh, for rally plugins, I think those ones in senlin repo works
13:58:03 you can have a try with latest code
13:58:19 ok
13:58:23 :)
13:58:35 last two minutes
13:58:50 I planned to make a discussion about summit proposal
13:58:58 ...
13:59:03 but I think we need to postpone it to next meeting :P
13:59:13 we can talk about it in senlin channel
13:59:18 we have one more week to think :)
13:59:18 or propose it
13:59:20 sure
13:59:43 ok, that's all. thank you guys for joining, elynn , haiwei , eldon_
13:59:51 ttyl
13:59:55 thanks
13:59:58 good night
13:59:58 thanks
14:00:02 have a nice day
14:00:05 good night
14:00:06 thanks a lot:)
14:00:11 night
14:00:16 my pleasure :)
14:00:19 #endmeeting