16:00:12 #startmeeting tacker
16:00:13 Meeting started Tue Nov 22 16:00:12 2016 UTC and is due to finish in 60 minutes. The chair is sripriya. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:16 The meeting name has been set to 'tacker'
16:00:30 hello tackers
16:00:32 #topic Roll Call
16:00:36 o/
16:00:38 o/
16:00:39 o/
16:00:41 o/
16:01:17 tbh janki marchetti diga hi!
16:01:23 Hi all
16:01:33 tung_doan: hi
16:01:34 hey
16:01:37 hi all
16:01:55 btw, this is mike_m, i lost my nick
16:02:22 marchetti: it took me some time to figure out :-)
16:03:06 #chair tbh
16:03:07 Current chairs: sripriya tbh
16:03:14 let's get started
16:03:29 #topic agenda
16:03:45 https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_Nov_22nd.2C_2016
16:03:50 #link https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_Nov_22nd.2C_2016
16:04:30 we can quickly go through the patches and then talk about the ocata spec
16:04:56 since diga and marchetti are also here, we can touch upon the pecan update and multi-VIM support
16:05:07 hello
16:05:20 #topic Tacker dsvm gate failure
16:05:24 s3wong: hi
16:05:47 as you have observed, the tacker dsvm job is broken and hence all patches are failing
16:05:53 sripriya: yes, we should discuss that
16:06:14 this is due to the mysql version we are using on the gate
16:06:47 the aodh folks helped us identify this error since it was specifically seen for the aodh mysql installation
16:07:18 we use mysql 5.5 on the gate with ubuntu trusty
16:07:38 we need to move to 5.6 to update all our jobs to ubuntu-xenial
16:08:16 i will work with the infra team to update our tacker jobs to use xenial as a priority
16:08:19 Hi All
16:08:59 this is a heads-up to all of us: if you are using mysql 5.5 in your development, please upgrade to 5.6
16:09:25 manikanta_: hello
16:09:38 any questions or thoughts, team?
16:10:12 yes
16:11:03 diga: did you have something to share?
16:11:32 sripriya: My base API framework is ready
16:12:01 diga: we will get to it, we are first knocking off the agenda items in the meeting wiki
16:12:02 sripriya: I want to push it under tacker
16:12:05 next topic
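A quick way to act on the mysql heads-up above is to check the server version from your dev environment. A minimal sketch, assuming PyMySQL and placeholder devstack credentials (this is not a check tacker itself performs):

```python
# Minimal version check for the 5.5 -> 5.6 requirement discussed above.
# Host/user/password are placeholder assumptions for a local devstack box.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="root", password="devstack")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        (version,) = cur.fetchone()
finally:
    conn.close()

# Version strings look like "5.5.53-0ubuntu0.14.04.1"; compare major.minor.
major, minor = (int(x) for x in version.split("-")[0].split(".")[:2])
if (major, minor) < (5, 6):
    print("MySQL %s is too old for the xenial jobs; upgrade to 5.6+" % version)
else:
    print("MySQL %s is fine" % version)
```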
16:12:10 #topic Adding status field in VNFD
16:12:37 here is the patch https://review.openstack.org/#/c/373157/
16:13:14 Regarding the review comments on https://review.openstack.org/#/c/373157/ , instead of displaying the constant VNFD status as ACTIVE or NA
16:13:19 manikanta_: i know you wanted to discuss more on this patch with the rest of the folks, take it away
16:13:36 is it better we have a status field in the VNFD object?
16:13:55 gongysh suggested this, need inputs from others as well
16:15:16 whether it is "state" or "status", I don't think the end user is using that info for any purpose
16:15:37 manikanta_: interesting, till now we have treated only the VNF as the entity going through different life cycle management states, a VNFD is more of a static catalog from which a VNF is created
16:16:03 state or status here refers to proper onboarding of the VNFD into the catalogue, IMO
16:16:21 are we using VNFD status like enable/disable? I mean, enabled ones are the ones from which we launch a VNF?
16:16:21 manikanta_: tbh it also doesn't make sense to display status as N/A for a VNFD
16:16:26 IMO, the status of a vnfd is not much used
16:16:39 sripriya, yes
16:17:01 So every time, we assume that the state of a VNFD is ACTIVE? in all cases?
16:17:12 janki: i think the patch is dealing with VNFD events and not the end user
16:18:06 sripriya, manikanta_ a VNFD is not a running entity such that it will have stages. once created, it is static, sitting in the db
16:18:10 sripriya, yes eventually logs for the user? and is there any other event for a VNFD?
16:18:40 so when a user queries VNFD events, even though we do not maintain any status on the actual VNFD entries, we need to keep this consistent with VNFD events
16:19:11 tbh: we also access the specific VNFD events through Horizon, right?
16:19:44 sripriya, yes we can access
16:19:59 manikanta_: i don't know if ACTIVE can be used since it is easy to confuse with the VNF status ACTIVE
16:20:26 we can use "created"
16:20:44 or uploaded
16:21:01 or onboarded?
16:21:02 or onboarded - this goes with the horizon term too
16:21:07 janki: sripriya : How about onboarded
16:21:14 sripriya, same thoughts :)
16:21:20 since we onboard a VNFD and deploy a VNF
16:21:22 :-)
16:21:36 do we agree with onboarded then?
16:21:46 +1
16:21:50 sripriya: +1 from my side
16:22:10 cool, manikanta_ please update the patch with the comments
16:22:24 will this state be displayed on the CLI and horizon too?
16:22:30 sripriya, janki tbh : Thanks for the inputs
16:22:36 janki: yes
16:22:41 next topic,
16:22:42 sripriya, : Will update the same and respond to gongysh
16:22:48 #topic Fix hard coded VDU in alarm monitor
16:23:38 tung_doan: where do we stand as of now on this patch https://review.openstack.org/#/c/382479/
16:23:49 sripriya: thanks.. my patch is almost done.. just needs reviews..
16:24:08 sripriya: just one concern I need to discuss with you guys...
16:24:18 tung_doan: were there any dependent patches that needed to be reviewed before we review this?
16:24:26 tung_doan: sure, please
16:24:55 sripriya: regarding the scaling use case, when we support both scaling in/out...
16:25:21 sripriya: the alarm format needs to know the specific scaling action (in/out)
16:25:38 sripriya: but they are not shown in the tosca-template
16:25:55 tung_doan: okay
16:25:58 sripriya: that is why i need to parse to get scaling in/out
16:26:11 sripriya: does it make sense?
16:26:11 tung_doan: can you share a sample tosca template
16:26:20 tung_doan: it is easier to understand that way
16:26:37 sripriya: https://review.openstack.org/#/c/382479/17/samples/tosca-templates/vnfd/tosca-vnfd-alarm-scale.yaml
16:27:13 tung_doan: i think till now it was assumed it would always be a scale-out operation
16:27:35 sripriya: yes.. but the new patch fixed this.
16:28:35 tung_doan: so right now you point to the same scaling policy in both cases?
16:29:07 sripriya: right.. but it will be parsed.. like this: https://review.openstack.org/#/c/382479/17/doc/source/devref/alarm_monitoring_usage_guide.rst@136
16:29:40 sripriya: just look at the alarm url
16:30:10 sripriya: we have SP1-in and SP1-out for scaling-in and scaling-out
16:31:04 tung_doan: ok, so we need specific policies based on alarm triggers
16:31:20 sripriya: that's right
16:31:21 tung_doan: so what is the current implementation with your patch?
16:32:11 sripriya: i already tested.. scaling in/out is supported, and the hardcoded VDU was fixed using metadata
16:32:36 sripriya: almost all items for the alarm RFE are done
16:33:10 tung_doan: i see that you apply the scaling policy based on the operator specified in the alarm policy
16:34:17 tung_doan: we may be missing some edge cases here, but let us get this version out and later work on enhancing it
16:34:37 sripriya: ok.. thanks
16:34:59 sripriya: also, please show me your suggestion if possible, thanks
16:35:35 team, please review this patch, we need to get this into newton and make a stable release, kindly provide your comments or leave your +1s
16:35:46 tung_doan: will take a closer look at the patch today
16:35:55 tung_doan: was there any other patch related to this topic?
16:36:31 sripriya: i already mentioned to you about scaling getting stuck in PENDING_SCALING_STATE
16:37:00 tung_doan: is this for the scale_in?
16:37:02 sripriya: i will look into that later..
16:37:07 tung_doan: okay
16:37:12 moving on
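The SP1-in / SP1-out convention above implies the scaling direction can be recovered from the policy-action segment embedded in the alarm URL. A minimal sketch of that parsing idea, with a hypothetical helper name (not the patch's actual code):

```python
# Split an alarm-URL action segment such as "SP1-out" into the scaling
# policy name and the direction, per the convention discussed above.
def parse_scaling_action(action_segment):
    policy, _, direction = action_segment.rpartition("-")
    if not policy or direction not in ("in", "out"):
        raise ValueError("unrecognized scaling action: %r" % action_segment)
    return policy, direction

assert parse_scaling_action("SP1-out") == ("SP1", "out")
assert parse_scaling_action("SP1-in") == ("SP1", "in")
```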
16:37:16 #topic VNF Container spec
16:37:24 sripriya: right. actually heat fixed this
16:37:41 janki: take it away
16:37:46 #link https://review.openstack.org/#/c/397233/
16:37:49 tung_doan: ack
16:37:50 sripriya: so we can leverage it
16:37:55 sripriya, I have replied to the comments
16:38:19 I am thinking of calling magnum apis directly instead of going via heat
16:38:38 we can also directly supply a dockerfile for vnf creation
16:38:49 janki: help me understand, when we discussed at the design summit, we talked about zun for container life cycle management
16:39:00 in similar lines to supporting a HOT template for vnf creation
16:39:22 sripriya, yes, zun is the best approach.
16:39:42 since zun is not fully ready, going with magnum for the first iteration
16:40:03 janki: so does magnum also support container creation?
16:40:18 janki: magnum is supposed to manage COEs, right?
16:40:31 sripriya: yes, it's a COE
16:40:51 sripriya, yes, magnum is to manage COEs. but it does create containers in a bay and then manage the bay
16:41:09 tbh correct me if i am wrong here
16:41:14 sripriya: I think we should go with magnum
16:41:36 janki: a bay is now called a cluster
16:41:40 janki: i guess those are specific container platform commands and not openstack api commands
16:43:08 sripriya, i think there are openstack commands too. need to check on this
16:43:09 sripriya: no, all the commands are openstack apis
16:43:26 janki: diga : okay, and this is through magnum?
16:43:34 sripriya, yes
16:43:37 sripriya: yes
16:43:38 sripriya, janki I feel magnum won't serve the purpose for the following reasons ... 1) It will launch the containers in a nova instance, which will be extra overhead 2) sometimes if we want to scale out the NF, it may launch a new nova instance (if the existing VMs serving the COE are out of capacity), so again introducing VM problems here 3) If we choose magnum, we need to select a COE first and develop those specific yaml files
16:44:18 tbh +1. but this is for the first iteration I am proposing
16:44:37 tbh: thanks for the info
16:44:46 janki: tbh: how about zun?
16:45:01 i did not see any REST api docs for the project, i may be missing something
16:45:03 janki, even if we go with magnum in the first iteration, and then in future we want to move to zun, we need to change a lot of the codebase, which is undesirable
16:45:03 tbh: I don't agree with you, we don't need to launch a nova instance when we want to scale, that's the reason the cluster part comes into the picture
16:45:31 tbh: the COE files are already there, we just want to reuse them
16:45:40 zun is not fully ready. last I heard, they were writing a scheduler for spawning containers
16:45:50 janki: okay
16:45:58 diga, a cluster can come into the picture inside the bay, like a pod kind of thing, or more nova instances in a single bay
16:46:15 diga, at some point of time, we have to launch a new VM again
16:46:22 tbh: a bay is a collection of nova instances
16:46:23 instead of waiting for zun and sitting idle, let's start with some POC kind of work and get things into discussion/improvement
16:46:42 sripriya, tbh ^
16:46:46 diga, yes, it is also limited I feel
16:47:13 tbh: I don't think so, I wrote some part of the COE support two releases before
16:47:26 diga, same here :)
16:47:34 tbh: it works well, & tested on coreos & fedora-atomic
16:48:11 tbh: janki: diga: let us introduce magnum as the first container support and later introduce zun as a second option
16:48:24 sripriya, +1
16:48:37 sripriya, janki I feel either go with zun or the heat docker plugin (if not deprecated), or tacker's own apis for containers
16:49:08 sripriya: +1 but I don't know when zun will get stable, I think not in this release at least
16:49:12 tbh: that is also a good point, does heat directly interface with docker?
16:49:23 yeah
16:49:35 heat has a docker plugin implemented
16:49:43 I am also emphasising using method 3 of the spec - directly pass in a Dockerfile to create a VNF. This would bypass tosca-parser and Heat, and call Magnum APIs directly.
16:50:01 tbh sripriya heat-docker is deprecated/removed
16:50:17 sripriya, but I don't think magnum will be sufficient, even in the case of SFC
16:50:25 tbh: my only concern with a direct interface is that we don't want to own too much of this logic in tacker, since that is not the main goal of the orchestrator, and it may bring challenges when we integrate with other VIMs
16:50:43 janki: I would like you to cover all these points in the spec
16:50:56 tbh: we are still at a nascent stage for this :-)
16:50:57 diga all the points are covered
16:50:58 sripriya: +1
16:51:33 sripriya: anyhow we need a third layer to manage orchestration
16:51:38 writing APIs in tacker would be like duplicating some portion of zun
16:51:46 sripriya, but my concern is, we have to completely move to a new project if we choose magnum
16:51:53 alright, some good thoughts here, let us continue to iterate on the spec based on these comments
16:52:25 tbh: tacker has already started to interact with multiple projects for many cloud functionalities
16:53:03 sripriya, yes, but here I mean picking one project for one or two cycles is not good, I think
16:53:21 tbh: i do not think we will remove one and add a new one
16:53:29 tbh: they can evolve in parallel
16:53:34 tbh: what do you think?
16:54:04 sripriya, tbh we can have this feature as a tech preview and keep improving it over the coming releases
16:54:11 sripriya, I am thinking for magnum, it may happen :)
16:54:17 tbh: please add your thoughts on the spec and we can discuss on gerrit
16:54:34 sripriya, sure
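To make the "call magnum APIs directly instead of going via heat" idea concrete, here is a minimal sketch against magnum's REST API; the endpoint, token, and template id are placeholder assumptions, and this is not tacker or spec code:

```python
# POST /v1/clusters creates a COE cluster (formerly a "bay") that
# container-based VNFs could be scheduled onto.
import requests

MAGNUM_ENDPOINT = "http://controller:9511/v1"  # hypothetical magnum URL
TOKEN = "..."  # a keystone token obtained out of band

resp = requests.post(
    MAGNUM_ENDPOINT + "/clusters",
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    json={
        "name": "vnf-cluster",
        "cluster_template_id": "<template-uuid>",  # placeholder
        "node_count": 1,
    },
)
resp.raise_for_status()
print(resp.json())
```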
16:54:35 we have 2 more specs to touch upon quickly
16:55:16 #topic vmware vim support
16:55:19 marchetti: are you planning to create a spec for supporting vmware as a VIM?
16:56:06 I'm still trying to figure out what is needed. It may be that VIO is the best option.
16:56:14 VIO == VMware Integrated OpenStack
16:56:36 If that is the option, then technically speaking it should already be supported
16:56:46 marchetti: if you need some information from the Tacker perspective, someone can tag along with you on this spec
16:57:00 sripriya: ok, thanks
16:57:00 marchetti: right
16:57:33 marchetti: let us discuss this more in the tacker channel after the meeting
16:57:42 sripriya: ok
16:57:43 #topic pecan framework
16:57:54 diga: do you have a spec created for this?
16:58:06 yes
16:58:10 Already pushed
16:58:11 link please?
16:58:25 1 min
16:58:54 https://review.openstack.org/#/c/368511/
16:58:58 please add the cores to the spec and feel free to ping folks on the tacker channel
16:59:08 sripriya: sure
16:59:08 team, please review this spec and provide your thoughts
16:59:27 time is up, team
16:59:31 thanks for attending!
16:59:38 sripriya: about the pecan code, can I create one folder under tacker?
16:59:41 #endmeeting
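For readers following the pecan spec above, a minimal sketch of what a pecan-style API controller looks like; the class and route names are hypothetical and not the proposed tacker layout:

```python
# A tiny pecan app: object dispatch maps GET /vnfds to VnfdController.index.
import pecan
from pecan import expose

class VnfdController(object):
    @expose("json")
    def index(self):
        # Placeholder: a real controller would query the tacker DB.
        return {"vnfds": []}

class RootController(object):
    vnfds = VnfdController()

application = pecan.Pecan(RootController())
# Run with any WSGI server, e.g.:
#   from wsgiref.simple_server import make_server
#   make_server("127.0.0.1", 8080, application).serve_forever()
```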