16:03:39 <sridhar_ram> #startmeeting tacker
16:03:39 <openstack> Meeting started Thu Aug  6 16:03:39 2015 UTC and is due to finish in 60 minutes.  The chair is sridhar_ram. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:03:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:03:43 <openstack> The meeting name has been set to 'tacker'
16:03:58 <sridhar_ram> #topic Announcements
16:04:10 <sridhar_ram> Agenda #link https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_Aug_6.2C_2015
16:04:47 <sridhar_ram> Quick, yet another reminder on Tacker Midcycle...
16:04:57 <sridhar_ram> scheduled for Aug 20 - 21st  #link https://wiki.openstack.org/wiki/Tacker/LibertyMidcycle
16:05:17 <sridhar_ram> Please RSVP / update the etherpad
16:06:02 <sridhar_ram> Another quick update on our top-level wiki ... #link https://wiki.openstack.org/wiki/Tacker
16:06:13 <sridhar_ram> it now has the updated Mission & Scope !!
16:06:34 <sripriya> sridhar_ram: nice!
16:06:40 <sridhar_ram> both the text and the overview diagram now have the new scope!
16:06:47 <sridhar_ram> sripriya: thanks
16:07:40 <sridhar_ram> If anyone here has further questions on the newly clarified mission of this project .. please bring it up
16:08:24 <sridhar_ram> Quick point on the OpenStack release timeline
16:08:28 <sridhar_ram> #link  https://wiki.openstack.org/wiki/Liberty_Release_Schedule
16:08:36 <sridhar_ram> we are in Liberty-3
16:09:03 <sridhar_ram> approx 3 weeks remaining in L3
16:09:33 <Rajkumar_> Sorry got disconnected and joined back
16:09:41 <sridhar_ram> we need to plan a Tacker Liberty release soon
16:10:29 <sridhar_ram> Given a couple of specs - health-mon & MANO API - are in flight, we need to extend a bit beyond L3 for Tacker
16:10:49 <sripriya> sridhar_ram: so we will have a liberty branch along with master after the feature freeze
16:10:57 <sridhar_ram> Exactly ..
16:11:22 <sridhar_ram> we can perhaps go until mid-Sept to pull the release/liberty branch
16:11:50 <sridhar_ram> so please plan your commits and PSs accordingly if you want to make it to Liberty
16:12:00 <sripriya> sridhar_ram: that would be helpful
16:12:41 <sridhar_ram> lets move on..
16:13:01 <sridhar_ram> #topic MANO API
16:13:14 <sridhar_ram> sripriya: any quick updates?
16:13:20 <sripriya> sridhar_ram: sure
16:13:35 <sripriya> sridhar_ram: thanks for providing your comments
16:13:44 <sridhar_ram> sripriya: sure
16:14:16 <sripriya> after looking into some of the attributes in a bit more detail, it appears we can do away with some attributes
16:14:37 <sripriya> like mgmt_driver, service_context, service_type
16:15:13 <sripriya> these attributes will however need to be part of the request dict, and we need to ensure that
16:15:25 <sripriya> in case the yaml file does not contain these attributes
16:16:03 <sripriya> so basically, the tacker db is expecting them on the outer request dict body
16:16:12 <sripriya> else it will error out
16:16:14 <sridhar_ram> sripriya: totally agree on service_context and service_type, can be done away w/
16:16:35 <sridhar_ram> sripriya: I see, I was about to ask about any DB impact .. which we should avoid for this round
16:16:48 <sripriya> and code to handle service_context is present, we are not using it though
16:17:15 <sripriya> yes, we need to have them in request body
16:17:38 <sripriya> but we can populate it after reading vnfd
16:17:51 <sridhar_ram> on the mgmt_driver attr: the question to the wider team is whether a VNF needs one mgmt_driver for the whole VNF, covering all of its VDUs (= VMs)
16:17:55 <sridhar_ram> or should it be per VDU
16:17:56 <sripriya> we need not specify it as part of the API
16:18:33 <sridhar_ram> sripriya: sounds good (for your db related resp)
16:19:41 <sripriya> sridhar_ram: as far as i understand, it can be per VNF and not per VDU
16:20:29 <sridhar_ram> sripriya: then the scenario I would like to understand .. if the VNF is a multi-VM VNF
16:20:37 <sripriya> just to confirm, are all VDUs spawned from the same vnf image?
16:20:49 <sripriya> sridhar_ram:?
16:20:50 <sridhar_ram> can one mgmt_driver be the same for all VDUs?
16:21:10 <Rajkumar_> sripriya - if we keep it per vnf it would be difficult to perform operations since VM spec can be different and purpose as well
16:21:27 <Rajkumar_> per VDU
16:21:40 <sripriya> well, we may have the case where one VDU does Firewall and other vRouter..
16:21:43 <sridhar_ram> sripriya: no, that may not be the case .. each VDU might be a different VM image .. e.g. control-plane VM vs data-plane VM
16:22:06 <Rajkumar_> Agree +1
16:22:06 <sripriya> sridhar_ram: ok
16:22:18 <sridhar_ram> but this example I gave is a bad one .. ;-)
16:22:47 <sridhar_ram> but there are some VNFs like vEPC which are complex and they have different subsystems in VMs
16:23:01 <sridhar_ram> and they need different ways to configure
16:23:08 <sripriya> and they will have diff. configs
16:23:25 <sripriya> sridhar_ram: right
16:23:30 <s3wong> hello
16:23:35 <sridhar_ram> diff configs are fine .. we can handle that in our config.yaml structure
16:23:51 <sridhar_ram> each VDU can have different snippets of config..
16:24:24 <sridhar_ram> the question is whether we can have *one* mgmt_driver for a whole multi-VM VNF
16:24:33 <sridhar_ram> s3wong: hi
16:25:02 <sridhar_ram> Rajkumar_: that's a valuable input... looks like you would vote for a per-VDU mgmt_driver ?
16:25:13 <sripriya> sridhar_ram: can you elaborate 'different ways to configure'?
16:25:58 <sridhar_ram> sripriya: sure..
16:25:58 <Rajkumar_> yes. Sridhar.
16:26:57 <Rajkumar_> or on the whole we can have a management driver per VNF which in turn takes care of VDU config as well
16:27:45 <sridhar_ram> imagine a complex multi-VM VNF (like vEPC or others).. where (I'm cooking up here) a pcsf-VDU needs configuration using a REST API, another control-pl VDU needs some config using NETCONF/YANG and another VM might need something adhoc like CLI-over-SSH
16:28:14 <Rajkumar_> one more thought - we can have a management driver per connection protocol, e.g. NETCONF for YANG
16:28:16 <sripriya> sridhar_ram: i see...
16:28:16 <sridhar_ram> this is where mgmt_driver per VDU will help
16:28:43 <Rajkumar_> CLI commands for Telnet/SSH and REST APIs for HTTP
16:28:44 <shrinathsuresh> looks like we need mgmt_driver per vdu to handle these complicated scenarios
16:29:20 <sripriya> then do we need a 'plugin' instead of 'drivers' ?
16:29:21 <Rajkumar_> We have devices which support CLI, a few NETCONF, and a few HTTP
16:29:55 <Rajkumar_> in this way we can generalize the management driver as well
16:30:16 <Rajkumar_> yes whichever is simpler and convenient
16:30:16 <sridhar_ram> Rajkumar_: sure, that came up a few times as well
16:30:59 <sridhar_ram> there are some that can be generalized like NetConf
16:31:14 <sridhar_ram> but we can't for RESTapi which is inherently VNF / VDU specific
16:31:39 <Rajkumar_> one example - for Arista devices we use HTTP to do config
16:31:43 <sridhar_ram> we also plan for a generic SSH-replay driver .. that can be handy too
16:32:17 <sridhar_ram> Rajkumar_: sure, but you would need to write specific Arista mgmt_driver code to send those REST/HTTP calls
16:32:26 <sridhar_ram> that can't be generalized
16:32:46 <sripriya> sridhar_ram: i kinda see a mapping with neutron ml2 drivers :-) like type and mechanism
16:32:58 <s3wong> it sounds like we need an ML2 equivalent model
16:33:04 <Rajkumar_> Sridhar: one more example: we need to use HTTP for tail-f to push configuration
16:33:05 <sridhar_ram> sripriya: Yeah, I see some pattern emerging here!
16:33:08 <s3wong> single plugin, multiple drivers
16:33:19 <sripriya> s3wong: :-)
16:33:40 <s3wong> sripriya: typing that at the same time as you :-)
16:34:00 <sridhar_ram> given we are early in this "exploration" cycle .. I'd say leave the mgmt_driver attr in the API
16:34:16 <sripriya> sridhar_ram: ack
16:34:39 <sridhar_ram> Rajkumar_: those are nice inputs...
16:34:48 <Rajkumar_> Thank you
16:35:27 <sridhar_ram> Tacker, over time, having a rich set of drivers for different transports will help to quickly onboard new VNFs using Tacker
16:35:51 <s3wong> sridhar_ram: yes, that has always been the goal
16:36:01 <sripriya> sridhar_ram: +1
16:36:51 <sridhar_ram> Let me add some entries in our etherpad backlog to revisit this
16:37:08 <sridhar_ram> anything else on MANO API ?
16:37:29 <sripriya> sridhar_ram: i will submit a new patchset and respond to your comments
16:37:43 <sridhar_ram> sripriya: sounds good
16:38:07 <sridhar_ram> shrinathsuresh: Rajkumar_: please review the spec at the earliest
16:38:17 <sridhar_ram> lets move on..
16:38:31 <sridhar_ram> prashantD: hi
16:38:45 <prashantD> hi
16:38:54 <sridhar_ram> #topic Basic AutoScaling
16:39:04 <Rajkumar_> Sridhar - Suresh & Xin are from my team as part of TCS-Comcast Contributors
16:39:05 <sridhar_ram> can you provide an update ?
16:39:26 <sridhar_ram> Rajkumar_: cool. Welcome to the team - folks!
16:39:55 <sripriya> welcome!
16:40:03 <shrinathsuresh> Thanks
16:40:10 <s3wong> Rajkumar_: great! Welcome to the team --- what are their IRC nics?
16:40:25 <s3wong> shrinathsuresh: that is Suresh. Welcome!
16:40:25 <Rajkumar_> Xin_ welcome
16:40:38 <s3wong> Xin_: that is Xin. Welcome!
16:41:00 <prashantD> on the autoscaling side it is possible to set up an autoscaling environment using heat & ceilometer
16:41:31 <prashantD> so I am looking into ceilometer...sridhar_ram pointed me to a good write up on how that is done and I am still going through the writeup
16:41:49 <s3wong> sridhar_ram: speaking of auto-scaling... do we have the policy description for auto-scaling on TOSCA?
16:41:56 <sridhar_ram> As you can imagine, this is a big topic ..
16:42:07 <prashantD> we have to enhance TOSCA template
16:42:46 <s3wong> it is a big topic, very interesting (probably need to set up LBaaS instance automatically as well), and thanks prashantD for taking on this task
16:42:46 <sridhar_ram> s3wong: there are some simple primitives
16:43:09 <sridhar_ram> s3wong: exactly, almost wrote the same
16:43:25 <s3wong> sridhar_ram:  :-)
16:43:33 <Rajkumar_> Auto scaling group can be used for auto scaling with ceilometer policies
16:43:54 <sridhar_ram> scope is to leverage basic Heat autoscaling using Tacker+TOSCA
16:44:09 <sridhar_ram> very, very limited ... to get some proof points
16:44:28 <sridhar_ram> this also ties back to our multi-VDU VNF scenario
16:44:32 <Rajkumar_> we did it with cpu_util
16:44:50 <sridhar_ram> I have been hammering Tacker mostly using "complex" VNFs these days to see how it performs
16:45:13 <sridhar_ram> Rajkumar_: that's cool.. perhaps you can collaborate w/ Prashant
16:45:33 <s3wong> Rajkumar_: I do think Ceilometer is mostly dealing with VM/server specific metrics
16:45:35 <prashantD> Rajkumar : so initially it is just going to be cpu_util
16:46:19 <sridhar_ram> prashantD: Rajkumar_: again, lets keep the scope limited to Heat AutoScaling group + Ceilometer avg cpu_util
16:46:44 <prashantD> s3wong, sridhar_ram : do we need to set up LBaaS or should we be fine without that initially?
16:47:20 <sridhar_ram> I'd vote not to be sucked into a specific VNF
16:47:39 <s3wong> prashantD: how do we load balance traffic otherwise? do we expose new VNF IP address to users every time we scale up?
16:47:55 <sridhar_ram> you need to do lots of LBaaSy things to test your code
16:48:41 <s3wong> sridhar_ram, prashantD: also, I could be wrong, but I thought LBaaS instance is part of the auto-scaling Heat template?
16:49:24 <sridhar_ram> s3wong: prashantD: again, we got to be bit generic here
16:49:46 <sridhar_ram> can't make anything specific that applies only for LB.. at least in this initial round
16:50:00 * sridhar_ram notes just 10mins left
16:50:07 <s3wong> sridhar_ram: sure. As far as LB is concerned, even NFV MANO spec talked about that as a deployment case
16:50:29 <sridhar_ram> prashantD: I'd suggest you write a tacker-spec while you are exploring this
16:50:44 <s3wong> sridhar_ram: that for sure
16:51:00 <s3wong> prashantD: it is definitely a big enough feature that we need a spec to review
16:51:29 <sridhar_ram> s3wong: sure, we can apply the auto-scaling feature of Tacker and see if it holds .. but I just want to caution not to design Tacker's auto-scaling with only LB in the picture
16:52:02 <s3wong> sridhar_ram: sure. I do think we can design it with LB and no LB option
16:52:02 <sridhar_ram> #action prashantD to write a tacker-spec for Basic AutoScale
16:52:20 <sridhar_ram> lets move on
16:52:21 <s3wong> sridhar_ram: though the last time I read the MANO spec, the no-LB option is very vague...
16:52:26 <prashantD> sure I will write the spec... are we planning on passing traffic with the initial auto-scale proof of concept?
16:52:28 <sridhar_ram> s3wong: sure
16:53:02 <sridhar_ram> s3wong: we will remove the vagueness in the MANO spec..
16:53:05 <sridhar_ram> :)
16:53:21 <sridhar_ram> #topic Testing
16:53:48 <sridhar_ram> I'm still waiting for some tests to land in the repo before enabling new gate jobs
16:54:03 <s3wong> prashantD: let's get the end to end orchestration flow going --- that is with whatever limited TOSCA defined VNFD w.r.t. auto-scaling, and generate equivalent Heat template and auto-scaling group
16:54:08 <Rajkumar_> I committed one bug fix and it's working fine but the Jenkins build shows an error. Please help us
16:54:39 <vishwanathj> Rajkumar_, share the link to your patch set
16:54:51 <sridhar_ram> Rajkumar_: sure, we have enough folks here to guide you
16:54:52 <Rajkumar_> http://logs.openstack.org/38/209838/1/check/gate-tacker-pep8/4807afd/
16:55:00 <prashantD> s3wong : okay, got it, thanx
16:55:01 <Rajkumar_> Jenkins log
16:55:17 <Rajkumar_> patch bug/1481888
16:56:03 <sridhar_ram> I hope w/ new code changes like MANO API & Health mon we will have more unit-tests..
16:56:27 <sridhar_ram> we are looking to fixup some existing servicevm functional tests
16:56:50 <vishwanathj> Rajkumar_, The bug does not have link to the patch set
16:56:51 <sridhar_ram> tempest / scenario tests are a little further out .. we should consider them for the next cycle..
16:57:13 <sridhar_ram> #topic Open Discussion
16:57:21 <Rajkumar_> But we followed all the steps correctly
16:57:26 <s3wong> sridhar_ram: what other tests are we running now for gate?
16:57:35 <sripriya> Rajkumar_: commit message should be fixed
16:57:42 <sripriya> the first line cannot exceed 50 characters
16:57:58 <sripriya> that is the Jenkins error
16:58:01 <sridhar_ram> s3wong: just pep8 for now, but that will change soon
16:58:05 <sridhar_ram> :)
16:58:23 <sridhar_ram> Folks - I use this link to look at all Tacker reviews
16:58:28 <sridhar_ram> #link https://review.openstack.org/#/q/project:stackforge/tacker+OR+project:stackforge/python-tackerclient+OR+project:stackforge/tacker-horizon+OR+project:stackforge/tacker-specs,n,z
16:58:28 <s3wong> sridhar_ram: so Rajkumar_ bug fix fails pep8? that shouldn't be too hard to fix, right?
16:58:35 <Rajkumar_> Ok. let me commit again with fewer chars, will that resolve the issue?
16:58:37 <sridhar_ram> please bookmark this!
16:58:55 <sripriya> Rajkumar_: https://wiki.openstack.org/wiki/GitCommitMessages
16:59:04 <s3wong> sridhar_ram: I am way behind... but I will be off next week, so should pick up on the review backlog
16:59:06 <sridhar_ram> s3wong: Rajkumar_ needs to fix his patchset (looks like the commit msg)
16:59:10 <vishwanathj> Rajkumar_, sripriya is right about the commit message causing the issue you are seeing
16:59:18 <sridhar_ram> almost time guys!
16:59:33 <sridhar_ram> lets continue if you want in the #tacker channel
16:59:35 <s3wong> Thanks, guys!
16:59:42 <sridhar_ram> we are two weeks away from the Midcycle
16:59:48 <sridhar_ram> s3wong: have a nice vacation!
16:59:54 <Rajkumar_> Thanks. I'll commit again
17:00:07 <sridhar_ram> lets wrap
17:00:11 <sridhar_ram> #endmeeting