17:02:34 <sridhar_ram> #startmeeting tacker
17:02:35 <openstack> Meeting started Tue Nov 10 17:02:34 2015 UTC and is due to finish in 60 minutes. The chair is sridhar_ram. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02:39 <openstack> The meeting name has been set to 'tacker'
17:02:45 <sridhar_ram> #topic Roll Call
17:02:54 <vishwanathj> o/
17:02:56 <sridhar_ram> who is here for the Tacker weekly meeting?
17:03:02 <tbh> o/
17:03:03 <sridhar_ram> vishwanathj: hi there!
17:03:13 <vishwanathj> sridhar_ram, tbh, Hi
17:03:22 <sripriya_> hello
17:04:08 <tbh> Hi
17:05:15 <sridhar_ram> bobh: you there?
17:05:58 <sridhar_ram> let's start..
17:06:06 <sridhar_ram> s3wong: hi
17:06:07 <s3wong> hello
17:06:25 <bobh> o/
17:06:30 <sridhar_ram> I think we have a quorum
17:06:36 <sridhar_ram> #topic Agenda
17:06:40 <sridhar_ram> #link https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_Nov_10.2C_2015
17:07:07 <sridhar_ram> First, welcome to the first meeting of the Mitaka cycle
17:07:59 <sridhar_ram> #chair s3wong bobh
17:08:00 <openstack> Current chairs: bobh s3wong sridhar_ram
17:08:08 <sridhar_ram> #topic Announcements
17:08:19 <sridhar_ram> Mitaka Schedule - #link https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
17:09:07 <sridhar_ram> We will use the "M" milestones as guides to orient our activities in Mitaka
17:09:22 <sridhar_ram> M1 is Dec 1-3
17:10:38 <sridhar_ram> We have a proposal out to expand the core team - #link http://lists.openstack.org/pipermail/openstack-dev/2015-November/078971.html
17:11:04 <sridhar_ram> sripriya_: hope you've been nice to everybody here ;-)
17:11:31 <vishwanathj> +1, congrats sripriya_
17:11:37 <vishwanathj> well deserved
17:11:45 <natarajk> +1
17:11:51 <sripriya_> sridhar_ram: hope so :-)
17:11:51 <tbh> sripriya_, congrats, way to go
17:11:55 <s3wong> sridhar_ram: didn't even see the email, sorry
17:12:00 <s3wong> +1
17:12:09 <sripriya_> thanks everyone
17:12:18 <bobh> +1
17:13:10 <s3wong> reply to ML also
17:13:13 <sridhar_ram> if you can send an email to the ML that would be great
17:13:18 <sridhar_ram> s3wong: thanks
17:13:28 <sridhar_ram> let's move on...
17:13:45 <sridhar_ram> #topic Tacker Mitaka Priorities
17:14:19 <sridhar_ram> Etherpad link #link https://etherpad.openstack.org/p/tacker-mitaka-priorities
17:14:52 <sridhar_ram> Let's discuss the entries here.
17:15:15 <sridhar_ram> any general questions / comments?
17:15:57 <sridhar_ram> FWIW, it is an ambitious plan and we really could use more devs!
17:17:33 <sridhar_ram> a specific entry that is missing is auto-scaling .. anyone think that is super important?
17:17:59 <sripriya_> sridhar_ram: when are we looking for blueprint submissions for the features listed?
17:18:29 <sridhar_ram> sripriya_: yes, we should get going on blueprints for most of the entries there
17:18:32 <bobh> sridhar_ram: There hasn't been a lot of interest in auto-scaling from telcos - manual scaling is more interesting to them
17:18:49 <bobh> sridhar_ram: so maybe some support of stack-update would be in order
17:18:53 <vishwanathj> scaling is a VNFM function according to ETSI MANO, is it not?
17:19:05 <sridhar_ram> bobh: agree on the auto-scaling observation
17:20:01 <sridhar_ram> vishwanathj: I was wondering how high-value it is to spend our precious bandwidth on
17:20:44 <sridhar_ram> bobh: manual stack-update is something we could absorb ...
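
(For context on the manual-scaling idea above, here is a minimal sketch of driving a scale operation through a Heat stack-update with python-heatclient. The endpoint, token, stack name, and template file are placeholders, and this is not the Tacker implementation.)

    # Minimal sketch: re-apply the VNF's underlying Heat stack with a modified
    # template (e.g. more VDU instances). All identifiers are placeholders.
    from heatclient.client import Client

    heat = Client('1', endpoint='http://controller:8004/v1/TENANT_ID',
                  token='AUTH_TOKEN')

    # Updated template, e.g. with a higher VDU count or a larger flavor.
    with open('vnfd-hello-world-scaled.yaml') as f:
        new_template = f.read()

    # Heat converges the existing stack toward the new definition, giving a
    # manual scale-out/in without any auto-scaling machinery.
    heat.stacks.update('vnf-hello-world-stack', template=new_template)
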
17:20:45 <vishwanathj> if there are no use cases or demand, we should list it as a known limitation or gap to be taken up when there is demand
17:21:34 <bobh> sridhar_ram: we can also look to see if/how the existing tosca-parser supports auto-scaling - we might get some of that support for free with the TOSCA parser changes
17:22:04 <sridhar_ram> vishwanathj: agree, one thing we are realizing - and others are pointing out - is that this whole orchestration space is huge. we intentionally let the boundaries emerge naturally based on customer input
17:23:01 <sridhar_ram> bobh: sounds good.. prashanthD is still interested in this area. Perhaps bobh you can guide him to contribute to this specific narrow use-case
17:23:52 <sridhar_ram> blueprint-wise we already have one for SFC..
17:23:58 <bobh> sridhar_ram: sounds good. I might suggest investigating VNFD update and stack-update as solutions for manual scaling until we get a use case for auto-scaling
17:24:21 <sridhar_ram> bobh: sounds like a plan
17:24:39 <bobh> I think the TOSCA parser changes will require three BPs, one each in tosca-parser, heat-translator and tacker
17:24:43 <sridhar_ram> we need new blueprints for the tosca-parser work, multi-vim, ...
17:25:18 <sridhar_ram> bobh: sure, makes sense
17:25:22 <sripriya_> sridhar_ram: agree
17:26:09 <sridhar_ram> Enhanced VNF placement will need a blueprint too.. again this is a vast subject. vishwanathj you need to clearly scope this out
17:26:30 <vishwanathj> sure
17:26:42 <sridhar_ram> For Auto Flavor / Network create I'd suggest using the simpler RFE process
17:26:53 <sridhar_ram> tbh: what do you think?
17:27:28 <tbh> sridhar_ram, makes sense
17:27:54 <sridhar_ram> tbh: cool..
17:28:11 <bobh> tbh: You might want to look at the existing tosca-parser/heat-translator functionality to see if it supports creating flavors/networks
17:28:48 <tbh> bobh, sure, I will take a look at it
17:29:15 <sridhar_ram> For some of the efforts we also need more folks to join the different tracks ..
17:29:42 <sridhar_ram> e.g. enhanced vnf placement and multi-vim need more devs
17:30:07 <sridhar_ram> any new contributors here interested in joining?
17:30:16 <vishwanathj> tbh has volunteered and is interested in the enhanced vnf placement effort
17:30:28 <sridhar_ram> existing members - please spread the word
17:30:32 <tbh> sridhar_ram, yeah, I am interested in vnf placement
17:31:18 <sridhar_ram> vishwanathj: tbh: excellent, in fact those are related areas.. some of the extra_specs stuff goes into flavors
17:31:39 <brucet> Could someone clue me in as to what vnf placement means beyond existing functionality in OpenStack?
17:32:53 <sridhar_ram> brucet: there is a whole laundry list - starting with placing VMs with the correct NUMA topology, cpu-pinning, ...
17:32:57 <vishwanathj> brucet, this would take into consideration CPU pinning, SR-IOV and NUMA awareness...
17:33:23 <brucet> Is there a doc for this under Tacker?
17:33:25 <bobh> brucet: Also affinity/anti-affinity, server groups, availability zones...
17:34:20 <vishwanathj> brucet, there is no doc right now, but I shall be producing one after my investigation
17:34:32 <bobh> sridhar_ram: I need to leave early today, I'll catch up with the meeting notes.
17:34:36 <sridhar_ram> brucet: the end goal is to place the VNF (imagine a set of VDUs / VMs) in the most optimal way for *maximum* performance
17:34:47 <brucet> Understood.
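
(A rough illustration of the flavor extra_specs mentioned above for enhanced VNF placement, using python-novaclient. Credentials, flavor name, and sizing are illustrative, and the exact set of placement hints Tacker would set is still to be scoped in the spec.)

    # Sketch: a Nova flavor carrying NFV placement hints (CPU pinning, NUMA,
    # huge pages) via extra_specs. All values below are illustrative only.
    from novaclient import client as nova_client

    nova = nova_client.Client('2', 'USERNAME', 'PASSWORD', 'PROJECT',
                              'http://controller:5000/v2.0')

    flavor = nova.flavors.create('vnf.pinned.1numa', ram=4096, vcpus=4, disk=20)
    flavor.set_keys({
        'hw:cpu_policy': 'dedicated',   # pin guest vCPUs to host pCPUs
        'hw:numa_nodes': '1',           # keep the guest on a single NUMA node
        'hw:mem_page_size': 'large',    # back guest memory with huge pages
    })
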
17:34:47 <sridhar_ram> bobh: sure, ttyl
17:35:11 <brucet> I'm just trying to see if there's new functionality required beyond what's already in OpenStack
17:35:53 <sridhar_ram> brucet: for now, we are required to work with what's available in openstack...
17:36:02 <brucet> Ah.... OK
17:36:35 <brucet> Sorry. I'm just trying to come up to speed.
17:36:50 <sridhar_ram> brucet: no problem at all, thanks for the questions
17:37:33 <sridhar_ram> in fact this whole area (efficient placement) is quite an interesting topic ..
17:38:14 <brucet> So as I understand it, the goal for now would be to map OpenStack functionality to VNF placement requirements for ETSI
17:38:20 <brucet> Then see if there are any gaps.
17:38:30 <sridhar_ram> vishwanathj: that's why we need some concrete goals to validate what we deliver ... imagine, using this feature, doing a 10G line-rate passthrough using a VNF placed by Tacker
17:39:38 <sridhar_ram> brucet: spot on. the nova team has done many things in this area for nfv. we will be the "user" of those features and bug them to fix / enhance as needed
17:39:48 <brucet> Got it.
17:39:49 <vishwanathj> sridhar_ram, good point....brucet, stay tuned for a spec
17:40:07 <brucet> Makes perfect sense
17:40:26 <brucet> I would be happy to join the effort
17:40:57 <vishwanathj> brucet, looking forward to your review and comments
17:41:03 <brucet> OK
17:41:04 <vishwanathj> once I have my spec out
17:41:05 <sridhar_ram> brucet: awesome, please do.. any contribution is welcome. you can start w/ reviews
17:41:14 <brucet> Perfect for me
17:41:48 <sridhar_ram> On a different topic - SFC - I couldn't be happier with the progress..
17:43:39 <s3wong> sridhar_ram: how is that progressing? Is there going to be a demo during the OPNFV summit?
17:44:04 <sridhar_ram> s3wong: I was looking up an email link to share here...
17:45:04 <sridhar_ram> Check out this email thread in the opnfv ML - #link http://lists.opnfv.org/pipermail/opnfv-tech-discuss/2015-November/006330.html
17:45:22 <sridhar_ram> s3wong: yes, my understanding is there is going to be a demo
17:46:15 <sridhar_ram> This comment in that thread made me happy - "Seems like as far as SFC is concerned, Tacker is the center of the universe"
17:46:25 <vishwanathj> +1
17:46:27 <sripriya_> +1
17:46:31 <s3wong> :-)
17:46:47 <sridhar_ram> seems we are indeed doing something that makes sense for the nfv world :)
17:47:26 <sridhar_ram> anything else on what is in store for Mitaka?
17:47:39 <sridhar_ram> prashantD_: hi there
17:48:06 <sridhar_ram> prashantD_: we were just talking about how much we should do in "VNF scaling" in Mitaka
17:48:23 <sripriya_> sridhar_ram: probably you can call out for volunteers on the ML for other Mitaka features just in case anyone is interested
17:48:26 <sridhar_ram> prashantD_: please reach out to bobh
17:49:22 <brucet> Can I ask a question about SFC?
17:49:36 <sridhar_ram> sripriya_: good idea...will give a shout out in the ML. Based on my Tokyo summit conversations we should get more folks. Let's see.
17:49:41 <sridhar_ram> brucet: shoot
17:50:14 <brucet> Seems like SFC would be potentially used internally by tacker but not exposed in any Tacker APIs, correct?
17:50:25 * sridhar_ram 10min mark
17:50:43 <s3wong> brucet: by SFC, you mean networking-sfc APIs?
17:50:56 <brucet> Trying to remember how ETSI MANO describes SFC usage
17:51:00 <sridhar_ram> brucet: for now SFC will be exposed as a post-VNF-instantiation API
17:51:27 <brucet> OK. So ability to chain VNFs?
17:51:54 <brucet> Neutron Service Function Chaining APIs
17:51:54 <sridhar_ram> brucet: in a follow-on phase we will start supporting VNFFGD (Forwarding Graph Descriptor) to automatically render the chains without the need to invoke Tacker SFC APIs
17:52:24 <brucet> OK. I need to look at the MANO VNF FGD
17:52:33 <brucet> So we need to map that to Neutron SFC
17:53:07 <s3wong> brucet: yes, as sridhar_ram mentioned, we will likely kick off with Tacker NB APIs for SFC setup after VNFs are instantiated; in the future, once VNFFG is well defined, we will use that as the NB
17:53:25 <brucet> OK
17:53:36 <s3wong> brucet: on the SB, particularly for setting up traffic plumbing, we should actively integrate with networking-sfc, that's for sure
17:53:56 <brucet> The Neutron SFC guys are expecting that
17:54:19 <sridhar_ram> s3wong is one of those neutron-sfc guys :)
17:54:30 <brucet> Ah..... OK
17:54:41 <brucet> Newbie
17:54:42 <s3wong> brucet: on the email thread sridhar_ram sent out from opnfv-tech-discuss, various people are suggesting having Tacker plug into SDN controllers directly
17:54:53 <sridhar_ram> brucet: no worries!
17:54:58 <igordcard> hi all, just a quick question, when "neutron-sfc" is mentioned, are you referring to "openstack/networking-sfc"?
17:55:35 <sridhar_ram> igordcard: yes
17:55:41 <s3wong> we looked briefly into that; to a certain extent, we may NEED to do that anyway --- for example, ODL SFC actually has a templatized side of SFC setup, which something like Neutron port-chaining would not be able to fully support
17:56:22 <sridhar_ram> s3wong: that area is still fluid IMO..
17:56:56 <s3wong> sridhar_ram: certainly networking-sfc API integration is in order for us
17:57:00 <sridhar_ram> s3wong: my preference is, if we can normalize everything behind the neutron-sfc API.. that's the best. But I also realize the world is not perfect!
17:57:32 <s3wong> sridhar_ram: obviously we don't want to adopt networking-sfc only to have it turn out to be less functional than what Tim and Dan are doing
17:58:11 <sridhar_ram> s3wong: as long as we give a nice stab at a tacker sfc-driver abstract class, with say 70% adopting neutron-sfc and some odd balls going directly to their controller .. that might be one future
17:58:28 <s3wong> sridhar_ram: that's what I think will be the case as well
17:58:28 <sridhar_ram> we are almost out of time
17:58:31 <s3wong> 2 minutes
17:58:39 <brucet> Not sure why any calls would be needed directly to the SDN controller
17:58:56 <sridhar_ram> let's wrap up.. we can continue the discussion next week..
17:59:00 <s3wong> brucet: it depends on the SDN controller
17:59:06 <brucet> OK
17:59:15 <brucet> Would like to see the use case
17:59:30 <sridhar_ram> Folks with Mitaka deliverables, please start working on the blueprints..
17:59:50 <s3wong> sridhar_ram: mine already has a bp, right? :-)
17:59:51 <sridhar_ram> even some simple WIP blueprints would be nice to see by next week
18:00:03 <sridhar_ram> s3wong: you are covered!
18:00:09 <sridhar_ram> time's up...
18:00:15 <brucet> Bye
18:00:17 <sridhar_ram> thanks for joining folks!
18:00:17 <s3wong> bye, folks!
18:00:25 <sripriya_> thanks
18:00:31 <sridhar_ram> #endmeeting
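
(To make the sfc-driver idea discussed above slightly more concrete, here is a purely hypothetical sketch of an abstract driver interface: most chains would render through networking-sfc, with room for controller-specific backends such as ODL SFC. None of these class or method names come from the Tacker code base.)

    # Hypothetical sketch of a backend-pluggable SFC driver layer.
    # All names are illustrative, not actual Tacker interfaces.
    import abc


    class SfcDriverBase(abc.ABC):
        """Render and tear down a service chain over instantiated VNF ports."""

        @abc.abstractmethod
        def create_chain(self, name, vnf_ports, flow_classifier):
            """Return an opaque id for the rendered chain."""

        @abc.abstractmethod
        def delete_chain(self, chain_id):
            """Remove a previously rendered chain."""


    class NetworkingSfcDriver(SfcDriverBase):
        """Default backend: would translate the chain into networking-sfc
        port-pairs / port-pair-groups / port-chain objects."""

        def create_chain(self, name, vnf_ports, flow_classifier):
            return 'nsfc-' + name       # placeholder for the Neutron calls

        def delete_chain(self, chain_id):
            pass                        # placeholder for the Neutron calls


    class OdlSfcDriver(SfcDriverBase):
        """Controller-specific backend for features beyond port chaining,
        e.g. the templatized ODL SFC setup mentioned above."""

        def create_chain(self, name, vnf_ports, flow_classifier):
            return 'odl-' + name        # placeholder for controller REST calls

        def delete_chain(self, chain_id):
            pass                        # placeholder for controller REST calls
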