08:02:43 <dkushwaha> #startmeeting tacker
08:02:45 <openstack> Meeting started Tue Sep 10 08:02:43 2019 UTC and is due to finish in 60 minutes. The chair is dkushwaha. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:02:46 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
08:02:48 <openstack> The meeting name has been set to 'tacker'
08:02:56 <dkushwaha> #topic Roll Call
08:03:22 <tpatil> Hi
08:03:59 <hyunsikyang> Hi all
08:05:14 <dkushwaha> hello all
08:05:48 <hyunsikyang> Hi dkushwaha
08:05:54 <keiko-k> Hello
08:06:05 <joxyuki> hi
08:06:57 <dkushwaha> #chair joxyuki
08:06:57 <openstack> Current chairs: dkushwaha joxyuki
08:07:21 <dkushwaha> #topic announcement
08:08:20 <dkushwaha> We have to release the client libs; code freeze is this week, so we need to merge the client code soon
08:09:09 <dkushwaha> #topic BP
08:09:46 <dkushwaha> tpatil, any update from your side on VNF package?
08:09:54 <tpatil> dkushwaha: you have given one review comment on python-tackerclient patch: https://review.opendev.org/#/c/679956/2/tackerclient/osc/v1/vnfpkgm/vnf_package.py@54
08:10:32 <tpatil> as per ETSI specs, tenant_id or project_id cannot be passed in the request body when you create a VNF package
08:11:27 <tpatil> we are getting the project_id that is available in the tacker context and setting it during creation of the vnf package in the tacker-api service
08:12:27 <dkushwaha> tpatil, not sure why we can't pass it. This way we may lose multi-tenancy
08:13:18 <dkushwaha> IMO, it's better to get it in as an additional param
08:13:26 <tpatil> with the tacker context, multi-tenancy will be retained
08:13:48 <tpatil> because it all works using a token; when you get a token you need to specify username, password and project name
08:15:13 <tpatil> In tacker's case, the endpoint doesn't include tenant_id, but in other projects it's there.
08:15:51 <dkushwaha> tpatil, ok, let's go with the current way; we need to check it again and we may keep it as a further AI
08:16:38 <hyunsikyang> I will come back soon
08:16:44 <tpatil> dkushwaha: Sure, Thanks
08:17:24 <dkushwaha> the rest looks fine to me, and I am just reviewing https://review.opendev.org/#/c/679958/
08:17:35 <tpatil> dkushwaha: let me confirm, so there is no need to add tenant_id for this release, correct?
08:18:28 <dkushwaha> tpatil, IMO we need to add it
08:18:50 <dkushwaha> but maybe we can check it later
08:19:14 <dkushwaha> yea, i mean not needed in this release
08:19:37 <tpatil> dkushwaha: Understood, Thanks
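For reference on the exchange above: the idea tpatil describes is that the caller's project is taken from the token-backed request context rather than from the create request body. The sketch below only illustrates that idea; the attribute and field names (context.project_id, the SOL005-style userDefinedData key, the function name) are assumptions for illustration, not the actual tacker code, which lives in the patches linked above.

```python
# Illustrative sketch only: take project_id from the authenticated request
# context instead of the API request body, as discussed above. Names here
# are assumptions, not tacker's real API.


def create_vnf_package(context, body):
    """Create a VNF package record.

    `context` is assumed to carry the keystone-derived project_id of the
    caller (multi-tenancy comes from the token, not from the body).
    `body` is an ETSI SOL005-style create request, which has no
    tenant_id/project_id field.
    """
    if 'tenant_id' in body or 'project_id' in body:
        # Reject tenant information in the body, per the ETSI spec.
        raise ValueError("tenant_id/project_id must not be set in the body")

    return {
        'user_defined_data': body.get('userDefinedData', {}),
        # Tenant scoping comes from the token-backed context.
        'tenant_id': context.project_id,
    }
```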
08:20:55 <dkushwaha> joxyuki, please help to review https://review.opendev.org/#/q/topic:bp/tosca-csar-mgmt-driver+project:openstack/python-tackerclient
08:21:12 <dkushwaha> so that we can get it in soon
08:21:25 <tpatil> dkushwaha: Talking about tosca-parser, we have got two +2 but workflow is not yet set.
08:22:05 <dkushwaha> tpatil, yea, I have seen it, I will ask them to get it in
08:23:04 <tpatil> dkushwaha: Thanks, some of the unit tests are failing because they depend on the tosca-parser changes
08:23:56 <tpatil> #link : https://review.opendev.org/#/c/675600/
08:25:05 <dkushwaha> #topic Open Discussion
08:26:18 <dkushwaha> tpatil, ok, let's wait for the tosca patch to merge.
08:27:12 <dkushwaha> I do not have other topics to discuss now
08:27:32 <hyunsikyang> I have one question :)
08:28:06 <hyunsikyang> Anyway, we implemented tacker-fenix integration. So we uploaded it. https://review.opendev.org/#/c/681157/
08:28:12 <hyunsikyang> as a WIP.
08:28:45 <dkushwaha> hyunsikyang, thanks for the update.
08:29:01 <hyunsikyang> But, I just want to ask you guys about the alarm url.
08:29:03 <dkushwaha> hyunsikyang, please go ahead with your question
08:29:27 <hyunsikyang> we implemented it to create an alarm url for each VDU.
08:30:05 <hyunsikyang> At first, we just thought that VNF maintenance happens per VNF unit.
08:30:59 <hyunsikyang> but, IMO, when we think about multiple VDUs in one VNF, it should be managed per VDU.
08:31:35 <hyunsikyang> What do you guys think about it?
08:34:25 <dkushwaha> hyunsikyang, I think maintenance work will be done at the VNF level, not for a single instance but for all instances in the VNF
08:35:41 <dkushwaha> hyunsikyang, could you please explain some use cases where it is needed for specific VDUs?
08:36:18 <hyunsikyang> In the ETSI standard, one of the examples is vEPC as a VNF.
08:37:24 <hyunsikyang> It means that each VDU is one of the components of the vEPC. To support maintenance for each component, I think each VDU needs maintenance.
08:37:50 <hyunsikyang> But, we also consider maintenance at the VNF level, too.
08:38:46 <hyunsikyang> Other question: do you think multiple VDUs can be deployed to different hosts?
08:39:22 <hyunsikyang> Can I think of a VDU as a VNFC?
08:40:29 <dkushwaha> hyunsikyang, yes, a VNF can have multiple VDUs/VNFCs
08:40:32 <joxyuki> for the first question, yes. VDUs can be deployed to different hosts.
08:41:37 <hyunsikyang> I think so.
08:43:34 <dkushwaha> hyunsikyang, as per your current spec, we need to apply the same maintenance policies to each instance in the VNF
08:44:27 <hyunsikyang> what do you mean by each instance?
08:44:31 <hyunsikyang> only for VNF?
08:45:13 <dkushwaha> hyunsikyang, I mean each of the VDUs
08:45:59 <dkushwaha> maintenance should be true inside each VDU
08:46:03 <hyunsikyang> ok
08:46:06 <hyunsikyang> thanks
08:48:00 <hyunsikyang> So if we also consider maintenance at the VNF level, we will add that function for a vnf url
08:48:12 <hyunsikyang> we will think about it :)
08:48:15 <hyunsikyang> thanks all
08:48:31 <dkushwaha> hyunsikyang, +1
08:49:15 <dkushwaha> hyunsikyang, JangwonLee, it would be great if you could add test cases in your patch.
08:50:34 <dkushwaha> Do we have anything else to discuss? Otherwise we can close this meeting
08:52:19 <dkushwaha> Thanks Folks
08:52:33 <dkushwaha> Closing this meeting
08:52:42 <dkushwaha> #endmeeting
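For context on the per-VDU versus per-VNF alarm URL question raised in the Open Discussion above, here is a minimal sketch of the two layouts being compared. The URL paths, helper name, and example base URL are purely hypothetical; the actual interface is whatever the tacker-fenix WIP patch (https://review.opendev.org/#/c/681157/) defines.

```python
# Purely illustrative sketch of the design question discussed above:
# one maintenance alarm URL per VDU (each VDU/VNFC maintained separately)
# versus a single URL per VNF. Names and paths are hypothetical.


def maintenance_alarm_urls(base_url, vnf_id, vdu_names, per_vdu=True):
    """Return maintenance alarm URLs for a VNF.

    per_vdu=True  -> one URL per VDU
    per_vdu=False -> a single URL for the whole VNF
    """
    if per_vdu:
        return {
            vdu: f"{base_url}/vnfs/{vnf_id}/vdus/{vdu}/maintenance"
            for vdu in vdu_names
        }
    return {vnf_id: f"{base_url}/vnfs/{vnf_id}/maintenance"}


# Example: a vEPC-like VNF whose VDUs (components) may sit on different hosts.
print(maintenance_alarm_urls("http://tacker:9890", "vnf-1", ["VDU1", "VDU2"]))
```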