17:01:17 #startmeeting service_chaining
17:01:18 Meeting started Thu Nov 2 17:01:17 2017 UTC and is due to finish in 60 minutes. The chair is bcafarel. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:19 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:21 The meeting name has been set to 'service_chaining'
17:01:33 hi
17:01:40 hey vks11
17:01:44 :)
17:01:56 this should be a quiet meeting, but at least we can summarize the recent progress :)
17:02:05 go on
17:02:10 hi all
17:02:44 hello igordc, meeting-free this evening?
17:02:48 daylight saving time ended, so now I can attend the meeting again :)
17:03:04 nice
17:03:11 let's start then
17:03:18 #topic Agenda
17:03:25 #link https://wiki.openstack.org/wiki/Meetings/ServiceFunctionChainingMeeting#Agenda_for_the_Networking-SFC_Meeting_.2811.2F2.2F2017.29
17:03:53 the agenda will probably get shorter soon with the recent merges :)
17:04:03 #topic Zuul v3 migration
17:04:39 the patch to move Zuul jobs in-tree is merged: https://review.openstack.org/#/c/512340/
17:05:21 follow-up patches are in progress (one mistake in the jobs I moved, backporting to stable branches, and cleaning up the legacy configuration)
17:05:33 but so far it looks good!
17:06:38 #topic Tempest work
17:07:41 no feedback on the ML post http://lists.openstack.org/pipermail/openstack-dev/2017-October/123814.html but mlavalle commented on https://review.openstack.org/#/c/510552/
17:08:46 he's leaning towards a unified Neutron Stadium repo, waiting to see how it turns out
17:09:13 personally I'm quite neutral on the options (per repo, one for the stadium, or joined with neutron)
17:11:47 #topic Tap SF
17:12:08 bcafarel: there were a couple of neutronclient patches (mine, yours)
17:13:09 vks11: yes, I see some new +2s on them
17:13:49 with the driver merged, and this getting close, we can probably cross this one off the todo list soon
17:14:37 yeah
17:15:03 vks11: and luckily you did not create a merge conflict too big for igordc ;)
17:15:20 bcafarel: I was really afraid of that :)
17:16:08 vks11: bcafarel: it was OK :)
17:16:16 had much worse back in the service graphs era
17:16:29 even worse in the MPLS correlation one, I think
17:17:18 oh yes, I remember that one
17:17:21 and a good transition:
17:17:30 #topic SFC graphs
17:18:11 all done too, except the OSC plugin, which has one +2, right?
17:18:59 bcafarel: yeap
17:19:10 bcafarel: amotoki +2'd
17:20:17 igordc: and he pinged mlavalle to look at the list of "open, python-neutronclient and AM+2"
17:20:56 there may be some delay with the summit, but they should go into the upcoming python-neutronclient release
17:21:31 bcafarel: great
17:22:23 #topic NSH dataplane
17:23:04 https://review.openstack.org/#/c/373465/ got merged, thanks igordc for all the hard work on the topic (with the previous correlation work etc.)!
17:24:17 phew, NSH finally becomes possible in OpenStack
17:25:09 now we need some sample NSH functions to test everything
17:25:47 indeed, I'd love to get my hands on these too
17:26:57 and NSH in the kernel datapath (OVS) seems to be making good progress
17:27:41 igordc: is there a python-neutronclient related patch?
17:27:47 yes
17:28:30 https://review.openstack.org/#/c/487843/
17:28:48 I temporarily abandoned it, I will restore it now actually
17:29:08 will resubmit today or tomorrow
17:29:54 the next python-neutronclient release will be heavy on the SFC commits :)
17:30:50 #topic python-neutronclient updates
17:31:24 I guess we mentioned these enough already; the bugfixes are also in good shape (at least one +2, in the list to get merged)
17:31:59 #topic API reference
17:32:29 I don't think I saw any update here
17:32:39 vks11: did you get hold of tmorin?
17:32:53 not this week
17:33:03 I checked a couple of times last week
17:33:13 most of the time he was offline
17:33:29 I will try to reach him once more tomorrow
17:33:51 but I guess he is unavailable most of the time
17:34:04 maybe I should send him an email instead
17:35:04 vks11: thanks, he may be going to Sydney too
17:36:21 vks11: I forgot to add VM failure detection to the agenda
17:36:44 I was thinking to bring it up in the last section of this meeting
17:37:23 ok, time for the last section then :)
17:37:29 #topic VMs failure detection
17:37:32 hehehe
17:38:08 I looked briefly at ceilometer healthmon
17:39:00 a long time back I worked on ceilometer to listen to the events published on RabbitMQ
17:40:11 as a hypothesis, if we get that, we need some sort of listener or API client to invoke the suitable action against it
17:41:05 how this mechanism will fit in is something we need to work on
17:41:37 however, I haven't spent much time on this topic this week
17:41:57 do you have any thoughts on this?
17:42:53 I'm reading some ceilometer docs in another tab to try to get up to speed :)
17:43:37 vks11: is the listener for the service VM, or for the collecting service?
17:45:26 bcafarel: I think we should start with aliveness of the service VM
17:46:30 we had internally implemented monitoring based on various metrics, probably back in 2013-14
17:47:25 and made it take action based on a policy configured by the user
17:48:10 I think that was before ceilometer was in the picture in OpenStack
17:49:57 that reminds me of octavia (LBaaS) health monitoring
17:50:31 it may be worth checking how they monitor the amphorae (the service VMs for octavia)
17:51:49 vks11: btw did you start an etherpad?
17:52:10 no, not yet
17:52:45 vks11: no problem, we will link it in the next meeting :)
17:53:49 :)
17:55:31 ok, getting close to the hour
17:55:40 time to wrap up I guess :)
17:56:58 thanks igordc vks11, bye!
17:57:21 bcafarel: bye, will talk tomorrow
17:57:38 #endmeeting
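
Post-meeting sketch of the listener idea discussed under "VMs failure detection": a small Python consumer of the Nova notifications that land on the RabbitMQ bus, using oslo.messaging, which then invokes a recovery action when a service VM goes away or errors out. This is only an illustration of the mechanism vks11 described, not part of networking-sfc; the 'notifications' topic, the watched event types, the payload fields and the respawn_service_vm() hook are assumptions and placeholders.

    # Minimal sketch, assuming Nova publishes unversioned instance notifications
    # on the 'notifications' topic of the RabbitMQ bus.
    import oslo_messaging
    from oslo_config import cfg


    def respawn_service_vm(instance_id):
        # Hypothetical hook: the SFC-side recovery action would go here, e.g.
        # replace the failed port pair or boot a replacement service VM.
        print("service VM %s failed, taking recovery action" % instance_id)


    class InstanceEventHandler(object):
        # oslo.messaging dispatches notifications to methods named after the
        # priority: info(), error(), etc., each receiving
        # (ctxt, publisher_id, event_type, payload, metadata).
        WATCHED = ('compute.instance.delete.end', 'compute.instance.update')

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if (event_type in self.WATCHED
                    and payload.get('state') in ('deleted', 'error')):
                respawn_service_vm(payload.get('instance_id'))

        def error(self, ctxt, publisher_id, event_type, payload, metadata):
            respawn_service_vm(payload.get('instance_id'))


    # Transport URL is an assumption; point it at the cloud's RabbitMQ.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@controller:5672/')
    listener = oslo_messaging.get_notification_listener(
        transport,
        targets=[oslo_messaging.Target(topic='notifications')],
        endpoints=[InstanceEventHandler()],
        executor='threading')
    listener.start()
    listener.wait()

How such a listener would plug into networking-sfc (and whether ceilometer or octavia-style health monitoring is a better fit) is exactly the open question left for the etherpad and the next meeting.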