17:01:17 <bcafarel> #startmeeting service_chaining
17:01:18 <openstack> Meeting started Thu Nov  2 17:01:17 2017 UTC and is due to finish in 60 minutes.  The chair is bcafarel. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:19 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:21 <openstack> The meeting name has been set to 'service_chaining'
17:01:33 <vks11> hi
17:01:40 <bcafarel> hey vks11
17:01:44 <vks11> :)
17:01:56 <bcafarel> this should be a quiet meeting, but at least we can summarize the recent progress :)
17:02:05 <vks11> go on
17:02:10 <igordc> hi all
17:02:44 <bcafarel> hello igordc, meeting-free this evening?
17:02:48 <igordc> daylight saving time ended so now I can attend the meeting again :)
17:03:04 <bcafarel> nice
17:03:11 <bcafarel> let's start then
17:03:18 <bcafarel> #topic Agenda
17:03:25 <bcafarel> #link https://wiki.openstack.org/wiki/Meetings/ServiceFunctionChainingMeeting#Agenda_for_the_Networking-SFC_Meeting_.2811.2F2.2F2017.29
17:03:53 <bcafarel> the agenda will probably get shorter soon with recent merges :)
17:04:03 <bcafarel> #topic Zuul v3 migration
17:04:39 <bcafarel> patch to move zuul jobs in tree is merged: https://review.openstack.org/#/c/512340/
17:05:21 <bcafarel> follow up patches in progress (one mistake on the jobs I moved, backporting to stable branches, and cleaning the legacy configuration)
17:05:33 <bcafarel> but so far it looks good!
17:06:38 <bcafarel> #topic Tempest work
17:07:41 <bcafarel> no feedback on the ml post http://lists.openstack.org/pipermail/openstack-dev/2017-October/123814.html but mlavalle commented on https://review.openstack.org/#/c/510552/
17:08:46 <bcafarel> he's leaning towards a unified Neutron Stadium repo, waiting to see how it turns out
17:09:13 <bcafarel> personally I'm quite neutral on the options (per repo, one for stadium, or joined with neutron)
17:11:47 <bcafarel> #topic Tap SF
17:12:08 <vks11> bcafarel: there were a couple of neutronclient patches (mine, yours)
17:13:09 <bcafarel> vks11: yes, I see some new +2 on them
17:13:49 <bcafarel> with the driver merged, and this getting close, we can probably cross this one off the todo list soon
17:14:37 <vks11> yeah
17:15:03 <bcafarel> vks11: and luckily you did not create a merge conflict too big for igordc ;)
17:15:20 <vks11> bcafarel: I was really afraid of that :)
17:16:08 <igordc> vks11: bcafarel: it was OK :)
17:16:16 <igordc> had much worse back in the service graphs era
17:16:29 <igordc> even worse in the mpls correlation one I think
17:17:18 <bcafarel> oh yes I remember that one
17:17:21 <bcafarel> and good transition:
17:17:30 <bcafarel> #topic SFC graphs
17:18:11 <bcafarel> all done too, except the OSC plugin, which has one +2 right?
17:18:59 <igordc> bcafarel: yeap
17:19:10 <igordc> bcafarel: amotoki +2d
17:20:17 <bcafarel> igordc: and he pinged mlavalle on looking at the list of "open, python-neutronclient and AM+2"
17:20:56 <bcafarel> there may be some delay with the summit but they should go in the upcoming python-neutronclient release
17:21:31 <igordc> bcafarel: great
17:22:23 <bcafarel> #topic NSH dataplane
17:23:04 <bcafarel> https://review.openstack.org/#/c/373465/ got merged, thanks igordc for all the hard work on the topic (with previous correlation stuff etc)!
17:24:17 <igordc> phew, NSH finally becomes possible in OpenStack
17:25:09 <igordc> now we need some sample nsh functions to test everything
17:25:47 <bcafarel> indeed I'd love to get my hands on these too
17:26:57 <bcafarel> and nsh in kernel data path (ovs) seems to be making good progress
17:27:41 <bcafarel> igordc: is there a python-neutronclient related patch?
17:27:47 <igordc> yes
17:28:30 <igordc> https://review.openstack.org/#/c/487843/
17:28:48 <igordc> I temporarily abandoned it, I will restore now actually
17:29:08 <igordc> will resubmit today or tomorrow
17:29:54 <bcafarel> next python-neutronclient release will be heavy on the SFC commits :)
17:30:50 <bcafarel> #topic python-neutronclient updates
17:31:24 <bcafarel> I guess we mentioned these enough already, the bugfixes are also in good shape (at least one +2, in the list to get merged)
17:31:59 <bcafarel> #topic API reference
17:32:29 <bcafarel> I don't think I saw any update here
17:32:39 <bcafarel> vks11: did you get a hold on tmorin?
17:32:53 <vks11> not this week
17:33:03 <vks11> I checked a couple of times last week
17:33:13 <vks11> most of the time he was offline
17:33:29 <vks11> I will try to reach him tomorrow once more
17:33:51 <vks11> but I guess he is unavailable most of the time
17:34:04 <vks11> maybe I should send him an email instead
17:35:04 <bcafarel> vks11: thanks, he may be going to Sydney too
17:36:21 <bcafarel> vks11: I forgot to add VMs failure detection to the agenda
17:36:44 <vks11> I was thinking to bring it up in the last section of this meeting
17:37:23 <bcafarel> ok time for the last section then :)
17:37:29 <bcafarel> #topic VMs failure detection
17:37:32 <vks11> hehehe
17:38:08 <vks11> I looked briefly at ceilometer healthmon
17:39:00 <vks11> I worked on ceilometer a long time back, listening to the events published on rabbitmq
17:40:11 <vks11> just as a hypothesis: if we go that way, we need some sort of listener or API client to invoke the suitable action against it
17:41:05 <vks11> how this mechanism will fit in is something we need to work on
17:41:37 <vks11> however, I haven't spent much time on this topic this week
17:41:57 <vks11> do you have any thoughts on this?
17:42:53 <bcafarel> I'm reading some ceilometer docs in another tab to try to get up to speed :)
17:43:37 <bcafarel> vks11: the listener is for the service VM, or for the collecting service?
17:45:26 <vks11> bcafarel: I think we start with aliveness of the service VM
17:46:30 <vks11> internally, probably in 2013-14, we had implemented monitoring based on various metrics
17:47:25 <vks11> and based on a policy configured by the user, made it take action
17:48:10 <vks11> I think that was before ceilometer was in the picture in openstack
17:49:57 <bcafarel> that reminds me of octavia (lbaas) health monitoring
17:50:31 <bcafarel> it may be worth checking how they monitor the amphorae (service VMs for octavia)
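The listener idea discussed above (consume service-VM lifecycle notifications from the RabbitMQ notification bus and map event types to recovery actions) can be sketched roughly as below. This is a minimal illustration, not a proposed implementation: the `FailureDetector` class, the `respawn_service_vm` action, and the choice of `compute.instance.delete.end` as the trigger are all assumptions; a real consumer would sit behind oslo.messaging or a ceilometer pipeline rather than being invoked directly.

```python
# Hypothetical sketch of the failure-detection listener discussed in the
# meeting: dispatch incoming notification events to configured actions.
# Event type names and the respawn action are illustrative assumptions.
from typing import Callable, Dict, List


class FailureDetector:
    """Map notification event types to recovery actions."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[dict], None]]] = {}

    def on(self, event_type: str, action: Callable[[dict], None]) -> None:
        # Register an action for an event type (e.g. respawn the SF VM,
        # or repair the affected port chain).
        self._handlers.setdefault(event_type, []).append(action)

    def handle(self, event_type: str, payload: dict) -> List[str]:
        # Invoke every action registered for this event type and return
        # the names of the actions that ran (handy for logging/testing).
        ran = []
        for action in self._handlers.get(event_type, []):
            action(payload)
            ran.append(action.__name__)
        return ran


def respawn_service_vm(payload: dict) -> None:
    # Placeholder action: a real implementation would call the compute
    # API to relaunch the service function VM and fix up the chain.
    print("respawning service VM %s" % payload.get("instance_id"))


detector = FailureDetector()
# 'compute.instance.delete.end' is one of nova's standard notification
# event types; whether it is the right trigger here is an open question.
detector.on("compute.instance.delete.end", respawn_service_vm)
detector.handle("compute.instance.delete.end", {"instance_id": "vm-1"})
```

The dispatch table keeps the policy ("which event triggers which action") separate from the transport, so the same handlers could be driven by an oslo.messaging notification listener or by ceilometer alarms later.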
17:51:49 <bcafarel> vks11: btw did you start an etherpad?
17:52:10 <vks11> no, not yet
17:52:45 <bcafarel> vks11: no problem, we will link it in next meeting :)
17:53:49 <vks11> :)
17:55:31 <bcafarel> ok, getting close to the hour
17:55:40 <bcafarel> time to wrap up I guess :)
17:56:58 <bcafarel> thanks igordc vks11, bye!
17:57:21 <vks11> bcafarel: bye will talk tomorrow
17:57:38 <bcafarel> #endmeeting