17:01:07 #startmeeting tacker
17:01:09 Meeting started Tue Dec 8 17:01:07 2015 UTC and is due to finish in 60 minutes. The chair is sridhar_ram. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:13 The meeting name has been set to 'tacker'
17:01:19 #topic Roll Call
17:01:22 o/
17:01:34 Hi Tackers -- who is here?
17:01:38 bobh: hi there!
17:01:44 brucet
17:01:48 o/
17:01:59 hi
17:02:30 let's start then...
17:02:34 I am here also
17:02:57 morning everyone!
17:03:22 ... noon, or evening (depending on tz)
17:03:22 morning sridhar_ram
17:03:30 #topic Agenda
17:03:40 #link https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_Dec_8.2C_2015
17:03:47 #topic Announcements
17:04:13 We now have tacker packages in pypi...
17:04:25 cool
17:04:28 #link https://pypi.python.org/pypi/tacker/0.1.0
17:04:36 nice
17:04:37 #link https://pypi.python.org/pypi/tacker-horizon/0.1.0
17:04:50 #link https://pypi.python.org/pypi/python-tackerclient/0.1.0
17:05:02 this is based off the kilo release
17:05:22 we finally got everything in place to make releases..
17:06:03 hello
17:06:05 Will make a liberty release as well sometime soon..
17:06:24 sridhar_ram: will this be reflected in the tacker launchpad page also?
17:06:28 .. we can decide on a time, perhaps mid-Dec?
17:07:17 sripriya: not sure if launchpad will pick it up automatically... need to check
17:07:40 sridhar_ram: ok thanks
17:07:50 Folks - if you have any pending cherry picks to stable/liberty please do it now!
17:08:38 we should decide on our release strategy..
17:09:46 our devstack installers can potentially use pypi-based tackerclient and tacker-horizon for dependencies moving forward..
17:10:14 any thoughts / suggestions on pypi / pkg releases?
17:10:49 what do other projects do? like monasca, sfc
17:11:25 networking-sfc hasn't been released yet
17:11:26 vishwanathj: some projects, particularly library projects like tosca-parser, make continuous releases.. every 5-6 weeks
17:11:26 trying to see if there is a precedent and we can follow best practices
17:11:50 sridhar_ram: I thought pypi jobs can be scheduled automatically along with releases
17:11:51 "client" projects should also make regular releases...
17:12:07 but once you are tagged release:independent, you can release whenever and at whatever frequency you want
17:12:29 natarajk: we are not yet in governance, so the release team won't do it for us.. we are on our own
17:13:19 s3wong: we can do the same for us...
17:13:31 sridhar_ram: yep
17:13:47 bottom line... we need to move to regular client & horizon releases, and the tacker repo needs to pull them using requirements.txt
17:14:05 we are NOT bound by the half-year release cycle; even if we make it to the big tent, as long as we are release:independent
17:14:14 s3wong: true
17:14:51 versioning-wise 0.1.0 == kilo, 0.2.0 == liberty, 0.3.0 == mitaka..
17:15:12 .. for liberty we might do a follow-on 0.2.1 with things like sfc
17:15:42 let's move on.. we can talk more on releases in future mtgs
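A quick note on the requirements.txt point above: once the client and horizon packages are published to pypi, the tacker repo and its devstack installer can pull them as ordinary pip dependencies instead of from git. A minimal sketch, assuming the 0.1.0 packages linked above; the version floor is illustrative, not an agreed pin:

    # hypothetical excerpt from tacker's requirements.txt
    # (floor shown only to illustrate pulling the pypi release)
    python-tackerclient>=0.1.0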
17:15:51 #topic Mitaka Midcycle Plans
17:16:36 with many big-ticket items in flight - like sfc, enhanced vnf placement, tosca-parser, auto-resource - I thought we shd consider meeting face to face
17:16:56 for a quick poll - how many of you would be interested in attending?
17:17:11 sridhar_ram +1
17:17:14 +1
17:17:17 +1
17:17:21 +1
17:17:22 +1
17:17:25 +1
17:17:29 +1
17:17:32 +1
17:17:44 +1
17:17:56 nice...
17:18:11 I'll send a doodle poll for some possible dates..
17:18:42 .. tentatively, would mid- to 2nd half of Jan work for most of you?
17:19:16 yes
17:19:27 sridhar_ram: fine with me
17:19:32 +1
17:19:37 sounds fine --- at least for now :-)
17:19:39 we can finalize this in an ML thread..
17:19:50 sridhar_ram, yes
17:20:00 I will be traveling Jan 9-16
17:20:08 +1
17:20:45 we need to find a way for remote attendance, without hampering the benefit of a F2F...
17:21:02 tbh: you would need a remote dial-in...?
17:21:27 sridhar_ram, yeah
17:21:51 sridhar_ram, I need some way to connect
17:22:11 alright.. we can decide the date/time/logistics outside this mtg.
17:22:15 let's move on...
17:22:31 #topic Mitaka Blueprint Updates
17:22:47 #topic Enhanced VNF Placement (EVP)
17:23:42 vishwanathj: and gongysh from China have come forward to get this going. I know tbh is also interested in contributing..
17:24:22 I'm planning to host a one-time adhoc, China/India-friendly IRC meeting on this topic..
17:24:24 sridhar_ram, yes I want to be part of this BP
17:24:35 it is 1am in Beijing :(
17:24:56 tbh: sure
17:25:12 sridhar_ram, is it possible to have alternate timings each week
17:25:35 like most of the other openstack projects follow, to accommodate all contributors?
17:26:04 tbh: absolutely.. let's start with a few adhoc meetings and see how it goes
17:26:16 sridhar_ram, sure
17:26:42 anyone here interested to discuss further on EVP?
17:27:16 btw EVP == numa topology awareness + cpu-pinning + pci-passthrough + sr-iov...
17:28:02 please watch out for a mtg invite and plan to attend..
17:28:08 let's move on...
17:28:08 OK. brucet +1
17:28:20 brucet_: sure, bruce
17:28:32 #topic TOSCA parser updates
17:28:43 bobh: any quick updates from your side?
17:29:30 I submitted a WIP patchset for the tosca-parser changes and created the BPs for heat-translator and tacker (I think - need to check that one)
17:29:58 The next big hurdle is the object mappings from TOSCA NFV -> Heat in heat-translator
17:29:59 bobh: link?
17:30:11 sridhar_ram: checking
17:30:35 #link https://blueprints.launchpad.net/tacker/+spec/tosca-parser-integration
17:31:09 #link https://blueprints.launchpad.net/heat-translator/+spec/tosca-nfv-support
17:31:36 #link https://review.openstack.org/#/c/253689/
17:32:08 bobh: cool.. this is actually a big deal, this is probably the first and only implementation of the tosca-nfv profile (AFAIK)
17:32:11 There needs to be some discussion around the specific object mappings, so I'll start that conversation this week - maybe an etherpad is the best place for that
17:32:40 sridhar_ram: probably - it's not completely implementable in its current form
17:32:40 bobh: etherpad is a good idea..
17:33:22 Looking at the NFV spec has raised some questions in my mind about the Simple TOSCA spec
17:33:32 bobh: understood, we have some mindful folks in the tosca-nfv stds group (that includes me) to incorporate our findings
17:33:39 like why the basic Network object has IP addresses in it
17:33:53 so it might end up triggering additional changes in the base spec if we do it right
17:34:07 bobh: I see..
17:34:41 bobh: one thing, you might already know.. the tosca-nfv profile is not a standalone spec as it stands. It builds on top of the tosca-simple profile namespace
17:34:46 we can discuss in the etherpad and find the right solution - for now anyway
17:35:16 right - but for example they define the "VirtualLink" as derived from Root, not derived from tosca.nodes.network.Network
17:35:33 sure, please send an ML email when you get the etherpad going
17:35:48 will do
17:35:55 bobh: that VL reference is weird.. will take a look
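For reference on the mapping discussion above, a minimal sketch of a tosca-nfv style template, using the node type names from the OASIS tosca-nfv draft (tosca.nodes.nfv.VDU, tosca.nodes.nfv.CP, tosca.nodes.nfv.VL); the definitions-version string and requirement names shown are illustrative, not quoted from the spec or the WIP patchset:

    # illustrative skeleton of the node types the heat-translator mapping has to cover
    tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
    topology_template:
      node_templates:
        VDU1:                       # virtual deployment unit -> would map to a Nova server
          type: tosca.nodes.nfv.VDU
        CP1:                        # connection point -> would map to a Neutron port
          type: tosca.nodes.nfv.CP
          requirements:
            - virtualLink: VL1
            - virtualBinding: VDU1
        VL1:                        # virtual link -> would map to a Neutron network
          type: tosca.nodes.nfv.VL
          # the draft derives VL from tosca.nodes.Root rather than from
          # tosca.nodes.network.Network, which is the mismatch noted above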
17:36:16 let's move on...
17:36:33 #topic Multi-Site
17:36:41 sripriya: please take over..
17:37:02 are multi-site and multi-vim synonymous?
17:37:11 sridhar_ram: sure, multisite vim support v2 is out for review https://review.openstack.org/#/c/249085/
17:37:22 vishwanathj: IMHO no
17:38:13 vishwanathj: there was a bit of confusion, as multi-vim for some folks means different types of VIMs like openstack, vmware, aws
17:38:16 ok, my understanding as well was that they were not the same
17:38:43 vishwanathj: multi-VIM refers to types of VIMs (OpenStack, VMware, AWS, etc.) whereas multi-site means multiple installations of the same VIM, as I understand it
17:39:22 sridhar_ram: one thing i wanted to discuss here was the auto network creation in the multisite scenario
17:39:23 sridhar_ram, sripriya, thanks for the explanation
17:39:31 OK. Since this is multi-site, did you see my comments about the potential use of Heat multi-cloud?
17:39:40 for this feature?
17:40:51 brucet: my understanding is that feature is still under development?
17:41:01 brucet: heat multi-cloud is still a WIP right?
17:41:14 The point is that you could incorporate a private version of it into Tacker
17:41:34 It assumes a standalone version of Heat
17:41:51 So you could incorporate that standalone version of Heat in Tacker
17:42:05 When the feature is fully released, you could use the public version.
17:42:48 brucet: that puts a dependency for operators to use this bleeding-edge openstack feature before someone can use Tacker's multi-site
17:42:55 Much easier than developing something new
17:43:05 Nope
17:43:18 The feature is extremely simple
17:44:26 brucet: the scope of this spec is even more basic than that..
17:44:31 brucet: i don't see any spec for the blueprint, or i may be missing it. moreover it uses keystone federation (as the bp states) which needs some real effort to get right
17:44:32 brucet: see https://answers.launchpad.net/tacker/+question/276717
17:44:37 If you find bugs in the feature, you can fix them in your standalone version of Heat and fold them into the version under development
17:45:09 brucet: we need to support Tacker, sometimes even on existing / operational OpenStack instances
17:45:27 Standalone Heat in Tacker
17:45:30 brucet: we CANNOT ask the operators to re-work their deployment at this point
17:45:44 Works with existing OpenStack deployments
17:46:21 brucet: having said that, we shd consider this (multi-cloud) and perhaps other things like keystone federation in follow-on iterations of multi-site
17:46:49 brucet: but it is not available now.. let's start with something that is available now and iterate
17:46:52 But you understand my point about standalone Heat?
17:47:09 I think a beta version is available now.
17:47:19 brucet: can you explain further on that?
17:47:38 Heat can be deployed as a standalone subsystem
17:48:07 The multi-cloud feature works with a standalone Heat deployment
17:48:24 You use the standalone Heat to instantiate stacks in other OpenStack clouds
17:48:48 The standalone Heat variant could be incorporated in Tacker
17:49:06 brucet: this is the multi-region heat feature right?
17:49:14 Separate from the Heat in the OpenStack that Tacker is running on
17:49:29 Actually multi-cloud
17:49:42 I sent a link to the blueprint in my comment
17:50:02 where standalone heat can instantiate stacks in remote clouds and a single identity service is running across all sites
17:50:03 Seems extremely simple
17:50:12 brucet: I see. Standalone Heat talks to remote heat-engines? or does it talk directly to remote nova/neutron?
17:50:24 Remote heat engines
17:50:43 Remote heat engines instantiate stacks on remote OpenStack instances.
17:51:14 The new multi-cloud feature is based on multi-region
17:51:50 brucet: so you propose tacker --> standalone-heat ---> { remote-heat-engine-1, remote-heat-engine-2, .. }
17:51:56 brucet: with the requirement of keystone federation
17:52:18 sridhar_ram: yes
17:53:07 I think the new multi-cloud feature does not require keystone federation
17:53:48 In any case, I think it's worth looking into
17:54:12 brucet: yes, that is an interesting feature..
17:54:38 brucet: however let's keep in mind that the higher-order, tacker user-facing requirement here is.. target VIMs need to be exposed to NFV / VNF Orchestrators.
17:54:47 brucet: .. apart from multi-site support
17:54:59 let's continue the discussion in the spec..
17:55:04 OK
17:55:11 brucet: not sure, as the BP says "Extend our existing multi-region remote stacks to multi-cloud, so that a remote stack can be created on a separate cloud with its own Keystone, provided that Keystone federation is supported between clouds."
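For reference on the standalone-Heat / multi-cloud idea debated above: Heat's existing multi-region support already lets one Heat create a nested stack in another region through the OS::Heat::Stack resource and its context property, and the multi-cloud blueprint brucet mentions extends that context to a separate cloud with its own Keystone. A minimal multi-region sketch; the region name and nested template file are placeholders:

    heat_template_version: 2015-10-15
    # illustrative only: a remote stack created in another region by the
    # local (possibly standalone) Heat; the multi-cloud blueprint would
    # extend the context to point at a separate cloud/Keystone
    resources:
      remote_vnf_stack:
        type: OS::Heat::Stack
        properties:
          context:
            region_name: RegionTwo                      # placeholder region
          template: { get_file: vnf_hot_template.yaml } # placeholder nested template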
17:55:17 tbh: can you give a quick update from your side?
17:55:26 * sridhar_ram 5 mins mark
17:55:43 #topic Automatic Resource Creation
17:55:45 sridhar_ram, yeah, we have decided to create network resources at vnf-create time
17:56:09 sridhar_ram, but I need some clarity on the declaration of image details in the TOSCA template
17:56:42 sridhar_ram, if we use the artifacts syntax you mentioned in the comments
17:56:57 tbh: on the create network part, it will be good to understand its requirement in the multisite use case
17:57:37 then are we not going to support vm_image?
17:57:50 sripriya, sure, I will look into that direction
17:57:59 tbh: no, we should support both for the time being..
17:58:31 tbh: leave your question in your spec.. I'll respond
17:58:32 tbh: thanks, you can provide your suggestions on the multisite spec as well
17:58:48 sridhar_ram, but I have concerns with the artifacts syntax
17:58:53 tacker team - please review https://review.openstack.org/#/c/250291/
17:59:10 we need to land this, if possible, early in the Mitaka cycle
17:59:18 sridhar_ram, sure will update in the comments
17:59:28 tbh: unfortunately we are out of time for today...
17:59:40 let's continue in gerrit
17:59:48 #topic Open Discussion
18:00:05 folks - watch out for an email on the EVP mtg and plan to attend!
18:00:17 that's it for today...
18:00:23 bye
18:00:25 bye
18:00:26 bye
18:00:30 thanks bye
18:00:33 #endmeeting
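As a footnote to the Automatic Resource Creation discussion (declaring image details in the TOSCA template): the two alternatives referred to above are a plain vm_image-style property versus a TOSCA artifacts block. A rough sketch of the two shapes under discussion; the property and artifact names here are illustrative, not the syntax the spec ultimately adopted:

    # option 1 (existing style, illustrative): image named directly as a property
    VDU1:
      type: tosca.nodes.nfv.VDU
      properties:
        vm_image: cirros-0.3.4          # pre-existing Glance image (placeholder)

    # option 2 (artifacts style, illustrative): image carried as a deployment
    # artifact so it can be created automatically at vnf-create time
    VDU2:
      type: tosca.nodes.nfv.VDU
      artifacts:
        vnf_image:
          type: tosca.artifacts.Deployment.Image.VM
          file: http://example.com/images/vnf.qcow2    # placeholder URL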