14:01:02 <topol_> #startmeeting interop_challenge
14:01:03 <openstack> Meeting started Wed Feb  8 14:01:02 2017 UTC and is due to finish in 60 minutes.  The chair is topol_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:03 <zhipengh_> o/
14:01:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:07 <openstack> The meeting name has been set to 'interop_challenge'
14:01:10 <HelenYao> o/
14:01:15 <topol_> Hi everyone, who is here for the interop challenge meeting today?
14:01:34 <topol_> The agenda for today can be found at:
14:01:35 <topol_> #link https://etherpad.openstack.org/p/interop-challenge-meeting-2017-02-08
14:01:35 <topol_> We can use this same etherpad to take notes
14:01:42 <skazi> o/
14:01:59 <topol_> Agenda:
14:02:00 <topol_> #topic Review last meeting action items
14:02:00 <topol_> #link http://eavesdrop.openstack.org/meetings/interop_challenge/2017/interop_challenge.2017-02-01-14.00.html
14:02:14 <topol_> topol to create a ping reminder list for our meeting attendees
14:02:31 <topol_> this is done, let me know if I need to add someone to the list
14:02:46 <topol_> topol add agenda item for update from China chapter meeting
14:02:51 <topol_> this is done as well
14:03:23 <topol_> next last meeting action item:
14:03:27 <topol_> tongli to try something on the patch
14:03:27 <topol_> tongli will figure out versions of ansible and shade to be supported by the lampstack workload
14:03:41 <topol_> tongli any progress on these?
14:03:57 <tongli> @topol, I have not had time to try different versions of ansible/shade on this, heads down on k8s.
14:04:17 <tongli> @topol, sorry.
14:04:29 <topol_> K, no worries, we will keep it on the list :-)
14:04:31 <tongli> trying to get k8s into a shape where I can submit a patch.
14:04:41 <dmellado> yep, please, I'll try to take some time to test that out too
14:04:47 <topol_> tongli sounds good.
14:04:47 <dmellado> worst case let's sync on the PTG
14:05:12 <tongli> @dmellado, thanks.
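To make the pending action item concrete: a minimal sketch, in Ansible YAML, of the kind of version guard tongli is after, failing fast when the runner's ansible/shade versions fall outside what the lampstack workload was tested with. The version bound and the shade lookup are assumptions for illustration, not the eventual patch.

    # Hypothetical pre-flight check for the lampstack workload.
    # The 2.2.0 bound is an assumed placeholder, not the tested matrix.
    - hosts: localhost
      connection: local
      tasks:
        - name: Require an Ansible version the workload was tested against
          assert:
            that:
              - "{{ ansible_version.full | version_compare('2.2.0', '>=') }}"
            msg: "lampstack assumes Ansible >= 2.2.0 (assumed bound)"
        - name: Record the installed shade version for the run log
          command: python -c "import pkg_resources; print(pkg_resources.get_distribution('shade').version)"
          register: shade_version
          changed_when: false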
14:05:22 <topol_> Okay, today's agenda, which is action-packed:
14:05:36 <topol_> #topic Patches that need review  https://review.openstack.org/#/q/status:open+project:openstack/interop-workloads,n,z
14:05:44 <luzC> o/
14:06:18 <tongli> one is the tox thing, and the other is the root access for wordpress deployment in lampstack.
14:06:30 <topol_> Two patches out there; all please review. Anything specific we need to discuss about the patches?
14:06:46 <tongli> the problem was that if you use root to deploy wordpress, it will fail.
14:07:01 <topol_> #action all please review patches #link https://review.openstack.org/#/q/status:open+project:openstack/interop-workloads,n,z
14:07:17 <tongli> there needs to be an option; the China chapter (T2Cloud) submitted that patch.
14:07:46 <topol_> so with #link https://review.openstack.org/#/c/409180/ we are waiting for dmellado to push a new patch?
14:08:22 <dmellado> I'll push a patch as soon as I sync with tongli on the ansible versions
14:08:37 <topol_> dmellado, excellent THANKS
14:08:38 <dmellado> as it is, it would work, but we might want to be more open on that
14:08:53 <topol_> #action dmellado to push a new patch for #link https://review.openstack.org/#/c/409180/
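A rough sketch of the shape of the fix under discussion, assuming a hypothetical wp_user variable; the actual change is whatever lands in the T2Cloud patch above. The point is simply that the playbook deploys WordPress under a dedicated account rather than root:

    # Hypothetical illustration only; deploying as root is what fails today.
    - hosts: webservers
      become: yes
      tasks:
        - name: Create a dedicated non-root user for WordPress
          user:
            name: "{{ wp_user | default('wordpress') }}"   # assumed variable name
        - name: Unpack WordPress owned by that user instead of root
          unarchive:
            src: https://wordpress.org/latest.tar.gz
            dest: /var/www
            remote_src: yes
            owner: "{{ wp_user | default('wordpress') }}"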
14:10:13 <topol_> zhengming will you push a new patch for #link https://review.openstack.org/#/c/430631/
14:10:17 <topol_> ?
14:10:47 <topol_> tongli is that a china team patch?
14:10:48 <tongli> @topol, zhengming's patch is good. that flag is necessary.
14:11:18 <tongli> @topol, yes, it is. after we ran the workload while I was in China, we found this problem and discussed the patch.
14:11:18 <topol_> tongli it has a -1 from markvoelker
14:11:40 <tongli> yes, Mark did not know the reason why it is needed; I have answered his question
14:11:46 <tongli> this is a necessary fix.
14:12:10 <markvoelker> We should get that kind of information into the commit message though.  The commit message really gives no indication about why we're doing this.
14:12:15 <zhipengh_> but still need a patch right ?
14:12:22 <zhipengh_> agree with mark here
14:12:32 <tongli> @markvoelker, sure.
14:12:54 <topol_> markvoelker +++ was just about to say that
14:12:55 <tongli> I will add a bit of info to the commit msg. Let me add some comments there.
14:13:00 <chandankumar> \o/
14:13:35 <topol_> #action tongli to follow up to make sure commit message gets updated in https://review.openstack.org/#/c/430631/
14:13:47 <dmellado> o/ chandankumar
14:13:56 <topol_> K, think we are getting close to the fun stuff :-)
14:14:17 <topol_> #topic review last week's networking choice
14:14:30 <topol_> just FYI, we have decided to use a flat, GCE-like networking model for our k8s workload
14:14:50 <topol_> #topic Huawei (Helen) NFV workload proposal.
14:15:02 <HelenYao> Hi everyone
14:15:04 <topol_> Hi HelenYao, Welcome!
14:15:09 <dmellado> hey HelenYao welcome!
14:15:16 <HelenYao> thx :)
14:15:37 <zhipengh_> So I think everyone should have already gotten the slides Helen prepared, which I sent to the interop mailing list
14:15:38 <topol_> HelenYao tell us about your NFV Workload proposal
14:15:46 <HelenYao> sure
14:15:51 <tongli> @HelenYao, thanks for taking the time to join this wg. I saw that you added the flow to the etherpad; please take us through the steps.
14:15:59 <HelenYao> Sample Workflow: Run vIMS on OpenStack with OPEN-O
14:15:59 <HelenYao> 1. Deploy 1 VM and install OPEN-O
14:15:59 <HelenYao> 2. Deploy 2 VMs and install Juju (OpenStack should be able to support Juju)
14:15:59 <HelenYao> 3. Call the OPEN-O REST API to upload the NSD and VNFD packages (definition files for the network service and VNF)
14:15:59 <HelenYao> 4. Call the OPEN-O REST API to deploy the Clearwater vIMS (1 VM)
14:16:00 <HelenYao> 5. Call the vIMS REST API to configure Ellis (a specific call number is set for each OpenStack vendor)
14:16:00 <HelenYao> 6. Show the audience by dialing the specific call number
14:16:23 <HelenYao> basically, for the nfv, we can demo the call function
14:16:38 <HelenYao> which is deployed by vIMS - a VNF service
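To make steps 3 and 4 above concrete, a minimal sketch of how a workload playbook might drive them; the OPEN-O host variable, port, endpoint paths, and payload fields are assumptions for illustration, not the documented API:

    # Hypothetical sketch of workflow steps 3-4; URLs and fields are illustrative.
    - hosts: localhost
      connection: local
      tasks:
        - name: Step 3 - upload the NSD/VNFD (CSAR) package to OPEN-O
          command: >
            curl -sf -X POST -F "file=@clearwater_ns.csar"
            http://{{ openo_host }}:8080/openoapi/catalog/v1/csars
        - name: Step 4 - ask OPEN-O to deploy the Clearwater vIMS
          uri:
            url: "http://{{ openo_host }}:8080/openoapi/nslcm/v1/ns"
            method: POST
            body_format: json
            body:
              nsdId: "{{ clearwater_nsd_id }}"   # id returned by the step 3 upload
              nsName: clearwater-vims
            status_code: 201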
14:16:38 <dmellado> I would be keen on not using Juju
14:16:58 <dmellado> can't we use another orchestrator?
14:17:08 <HelenYao> dmellado: could u explain the reason?
14:17:10 <zhipengh_> dmellado any reason why not juju ?
14:17:20 <HelenYao> open-o is the orchestrator
14:17:26 <dmellado> because basically, Juju is totally tied to Ubuntu
14:17:30 <HelenYao> juju is just used to deploy vIMS
14:17:31 <dmellado> and if we're going to show interop
14:17:39 <tongli> @dmellado, I think similar steps can be done using ansible, but let's not get hung up on the tools.
14:17:48 <dmellado> I'd rather use something that would be supported and available for *all* distributions
14:17:59 <tongli> we just need to figure out what needs to be done here.
14:18:04 <HelenYao> dmellado: yeah, u r making a good point
14:18:13 <zhipengh_> agree tongli
14:18:29 <tongli> we need to figure out what needs to be done here, tools can be chosen later.
14:18:35 <topol_> dmellado, so I had the same discussion with markbaker yesterday. He is willing to put some resources toward showing that juju will run everywhere. I agree that needs to be proven
14:18:45 <topol_> But they are willing to try
14:19:04 <dmellado> we can elaborate more on that
14:19:06 <topol_> The safest thing for us is to try and support some options here
14:19:14 <dmellado> I'd like to have a look if possible
14:19:19 <tongli> @HelenYao, step 3 and step 4, what are the details there?
14:19:21 <dmellado> but I don't think that, as of now
14:19:26 <HelenYao> we can define the main flow first and find tools to fit in
14:19:29 <dmellado> other distributions would spend resources on juju, tbh
14:20:08 <tongli> when you say call OPEN-O Rest API to upload NSD and VNFD package, what do they mean specifically?
14:20:14 <HelenYao> for step 3, to deploy a vnf, a topology is needed to define the network service; that is done by the NSD and VNFD
14:20:22 <topol_> so the safe thing to do is to have ansible, which we know works everywhere, but also let folks try to see if juju will run everywhere
14:20:32 <HelenYao> it is basically tosca yaml
14:21:03 <topol_> HelenYao, step 3 uses tosca yaml?
14:21:10 <zhipengh_> yes
14:21:21 <HelenYao> the open-o is the orchestrator to deploy the service and juju is to manage all deployed vnfs
14:21:27 <HelenYao> topol_: yes
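For readers new to TOSCA: a skeletal, hypothetical VNFD fragment of the sort step 3 would upload; the node name and property values are illustrative, not Clearwater's actual definitions:

    # Hypothetical VNFD skeleton in TOSCA YAML; names and sizes are illustrative.
    tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
    description: Skeleton VNFD for one Clearwater component
    topology_template:
      node_templates:
        ellis_vdu:
          type: tosca.nodes.nfv.VDU
          capabilities:
            nfv_compute:
              properties:
                num_cpus: 2
                mem_size: 2 GB
                disk_size: 20 GB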
14:21:34 * topol_ gold star for HelenYao! I started that project years ago :-)
14:21:57 <dmellado> heh, I need to check on open-o
14:22:11 <dmellado> is it under the openstack/ org on GitHub?
14:22:17 <zhipengh_> #link https://wiki.opnfv.org/display/PROJ/Opera+Project
14:22:19 <HelenYao> topol_: wow, u started tosca, brilliant
14:22:28 <zhipengh_> open-o is an individual community
14:22:29 <luzC> me too
14:22:36 <topol_> HelenYao, actually the heat translator project, not TOSCA
14:22:49 <luzC> I need to check it out too
14:22:53 <zhipengh_> topol_ i worked with sahdev very closely on that
14:23:09 <topol_> gold star for zhipengh_ as well
14:23:11 <topol_> :-)
14:23:13 <topol_> very nice
14:23:18 <zhipengh_> dmellado luzc please follow the link I provided
14:23:30 <zhipengh_> opera is the opnfv integration project for open-o
14:24:04 <zhipengh_> it has all the information you need
14:24:11 <zhipengh_> topol_ thx :)
14:24:57 <topol_> HelenYao, zhipengh any more details you can provide?
14:25:39 <HelenYao> clearwater mentioned here is also an open source project for vIMS
14:25:53 <HelenYao> we can count on it to make a call
14:26:33 <topol_> HelenYao so what is the specific network service that is started by all this orchestration?
14:26:36 <HelenYao> making a call to a vIMS deployed on an OpenStack cloud is a good candidate for an nfv demo
14:27:10 <zhipengh_> topol_ IMS
14:27:18 <HelenYao> the orchestrator will deploy the whole topology of the vIMS
14:27:21 <dmellado> thanks zhipengh_ , I'll take a look
14:27:31 <tongli> what is vIMS?
14:27:37 <topol_> HelenYao, zhipengh please excuse my limited NFV background. what does IMS do?
14:27:41 <HelenYao> http://www.projectclearwater.org/technical/clearwater-architecture/
14:27:45 <zhipengh_> virtual IP Multimedia Subsystem
14:28:02 <topol_> well thats sounds pretty impressive
14:28:02 <zhipengh_> tongli it is a classic core network service
14:28:05 <dmellado> heh, then I guessed it right
14:28:16 <dmellado> was just wondering about the v
14:28:27 <topol_> Im impressed
14:29:02 <zhipengh_> topol_ you could dial calls via a SIP-based client, where IMS will handle the SIP protocol in the core network
14:29:04 <HelenYao> v stands for virtualized version
14:29:20 <zhipengh_> so basically it is a VoIP function
14:29:26 <zhipengh_> something like that
14:29:35 * topol_ I get all dazzled by workloads that have multimedia in their name
14:29:52 <zhipengh_> IMS was a hot topic ten years ago
14:30:11 <topol_> zhipengh_, HelenYao,  here are my thoughts.
14:30:17 <dmellado> topol_: I always think of those old Pentium MMX chips whenever I hear that word
14:31:02 <topol_> This looks like an exciting workload and we really wanted an NFV workload.  I would love to see this submitted as a patch to the repository.
14:31:25 <dmellado> +1 zhipengh_ HelenYao would you mind drafting this as a patch?
14:31:34 <dmellado> I was wondering if we should have blueprints
14:31:50 <dmellado> I'd really love to discuss those implementation details over something concrete and not on the irc
14:31:53 <zhipengh_> topol_ for the patch do you mean just the scripts, or the actual vIMS and juju source code?
14:32:01 <HelenYao> what is expected in the patch? all the scripts to run the whole workable flow?
14:32:07 <topol_> Now for reasons that will become clear later in the agenda, we need to make sure everyone can run this. Hint: Interop Challenge phase 2, Boston Summit, as requested by the foundation
14:32:08 <zhipengh_> dmellado we could schedule ZOOm calls :P
14:32:15 <dmellado> topol_: tongli would it be reasonable to expect a blueprint?
14:32:19 <dmellado> zhipengh_: ZOOm?
14:32:38 <zhipengh_> ZOOM, the anti-gotomeeting-webex
14:32:56 <dmellado> heh xD
14:32:58 <topol_> dmellado, we will need a blueprint / some form of description; that will help everyone. I'm fine with it being submitted with the patch
14:33:00 <tongli> @dmellado, a blueprint would be nice, so that our wg members can comment on it and agree.
14:33:17 <dmellado> then can we add an action on zhipengh_ and HelenYao to submit a blueprint?
14:33:52 <HelenYao> what is a blueprint? sorry, I am new here :-)
14:33:52 <tongli> consider that other companies are also willing to contribute; that is one of the things the China chapter talked about last night.
14:34:00 <topol_> Are folks wanting a blueprint to provide more description or because they want a more formal approval process?
14:34:11 <tongli> @HelenYao, I can help with that. we can talk offline on that.
14:34:21 <topol_> I do have one issue with this
14:34:21 <HelenYao> tongli: neat
14:34:31 <dmellado> topol_: from my side, maybe more description and comments there
14:34:31 <zhipengh_> tongli do we need to register the bp on the launchpad ?
14:34:43 <tongli> @topol, I would like the blueprint just to clarify things and add more description.
14:34:43 <dmellado> I'd like to have a deeper look at it after checking some stuff ;)
14:34:50 <topol_> dmellado that is what I am thinking.
14:35:23 <tongli> @dmellado, @topol_, I think we are going in the same direction on the blueprint
14:35:29 <topol_> My one issue is that juju needs to be optional and replaceable with ansible if for some reason juju will not run everywhere.
14:35:52 <dmellado> +1
14:35:58 <tongli> moving forward, we probably should have two workload implementations.
14:36:07 <zhipengh_> topol_ we could look at alternatives, but ansible might not be the answer either
14:36:11 <dmellado> but would it make sense to spend time on juju
14:36:13 <tongli> if one is done, doing the other should be fairly easy
14:36:16 <dmellado> if in the end it won't be interoperable
14:36:17 <dmellado> ?
14:36:36 <HelenYao> I think openstack heat could be a replacement for juju
14:36:39 <rarcea> dmellado: +1
14:36:40 <tongli> well, regardless of which tool we use, the end results have to be the same.
14:36:41 <topol_> So here is my dream, where I want to be: we show we have workloads that run on all clouds, and we show we have more than one automation tool (think ansible, juju) that works on all clouds
14:36:49 <tongli> otherwise, we just defeat ourselves.
14:37:04 <topol_> this desire was originally mentioned by markvoelker way back when
14:37:12 <dmellado> I'd rather use heat/ansible
14:37:21 <rarcea> dmellado: +1 for heat
14:37:25 <dmellado> markvoelker: around? any reason for the juju thing?
14:37:49 <zhipengh_> heat + heat-translator could be the alternative
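As a point of comparison for the heat/heat-translator option: a minimal sketch of a HOT template that boots one VM, roughly the unit of work a translated TOSCA VDU becomes; the parameter names are illustrative:

    # Minimal HOT sketch; any cloud exposing standard Nova/Glance should accept it,
    # which is what makes heat attractive for interop.
    heat_template_version: 2016-10-14
    description: Boot a single VM, roughly one vIMS VDU
    parameters:
      image:
        type: string
      flavor:
        type: string
    resources:
      vdu_server:
        type: OS::Nova::Server
        properties:
          image: { get_param: image }
          flavor: { get_param: flavor }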
14:37:58 <topol_> dmellado markvoelker wanted multiple tools not juju specifically
14:37:58 <markvoelker> I don't recall asking for Juju specifically.  I do recall saying we want to not be tied to any particular config management tool
14:38:16 <topol_> markvoelker yes, I tried to capture that accurately
14:38:29 <dmellado> topol_: I understand that, but my concern is that if we don't narrow the scope, in the end we won't ever finish ;)
14:38:53 <markvoelker> Basically: if we find problems with a particular tool in terms of its ability to provision workloads on OpenStack clouds, shedding light on those is actually a good thing so they can be addressed
14:38:56 <HelenYao> even chef, docker compose can be an option. We have all kinds of tools
14:39:14 <markvoelker> My concern there is primarily on the OpenStack side rather than the guest, though.
14:39:22 <topol_> dmellado, so I need to be risk averse here.   ideally we have a path that has a high level of confidence works everywhere
14:39:32 <dmellado> topol_: markvoelker I see your point
14:39:41 <dmellado> but then, I'd start with some tool that would work on every distro
14:39:45 <markvoelker> E.g. if ToolX can't provision VMs on 6/10 OpenStack clouds b/c it uses an archaic API version or something, that's a problem
14:39:46 <dmellado> and add a fallback
14:39:56 <tongli> man , can we not talk about tools today, my head really hurts.
14:39:56 <HelenYao> one thing to mention: juju is a verified VNFM. it takes time to evaluate other tools as a VNFM
14:40:01 <dmellado> i.e. heat/ansible
14:40:20 <topol_> at the same time I need to let markbaker kick the tires on juju to see if it will really run on lots of clouds like he claims
14:40:24 <dmellado> but if they submit the blueprint we can modify it
14:40:32 <dmellado> so I won't block the conversation here ;)
14:40:32 <markvoelker> If the workload requires you to boot a Fedora VM, I'm less concerned about it because most clouds will be able to boot VMs (regardless of whether it's Fedora or something else)
14:40:39 <dmellado> would this work for you too rarcea markvoelker ?
14:41:29 <topol_> do the issues I'm trying to worry about/balance make sense?
14:41:56 <dmellado> topol_: yep, they would for me, just that I'd try to go for tools that would be available in most clouds
14:41:57 <markvoelker> topol_: Yes.  Again, I'm most concerned with the bits that touch OpenStack and less concerned with the contents of the workload itself.
14:42:42 <zhipengh_> tongli where should we submit the BP? have a new specs folder under the interop-workloads repo?
14:43:14 <topol_> markvoelker, are you saying juju is not scary because it's inside the workload?
14:43:32 <tongli> our project has a page for submitting blueprints.
14:43:37 <tongli> let me find the link.
14:44:11 <tongli> https://blueprints.launchpad.net/interop-workloads
14:44:40 <markvoelker> topol_: If it's being used to spin up infrastructure within OpenStack and can do that across many OpenStack clouds, I'm fine with that.  If this particular workload requires that the guests be one OS or another, I'm even ok with that.
14:44:41 <tongli> that is our launchpad site, where we can file bugs and blueprints.
14:44:42 <zhipengh_> got it thx tongli
14:45:13 <topol_> in any case I think we need to get it submitted and folks can see if it runs in their clouds or not.
14:45:51 <tongli> @topol_, sure, we should have a blueprint for this, considering that people have quite different views of NFV.
14:46:02 <tongli> consolidating it a bit is a good thing.
14:46:11 <topol_> if juju has issues markbaker promised to provide a rover to help other clouds. But I also like backup options.
14:46:12 <dmellado> +1 on the blueprint
14:46:43 <topol_> so everyone okay with we move forward with the blueprint?
14:47:29 <topol_> can the patch be submitted concurrently with the blueprint? Any concerns doing both?
14:48:04 <dmellado> topol_: well, I'd implement the patch after agreeing on the blueprint
14:48:16 <dmellado> both of them can be submitted at the same time
14:48:19 <tongli> @topol_, blueprint first please
14:48:27 <dmellado> but the patch should be modified along the requirements on the blueprint
14:48:29 <dmellado> IMHO
14:48:30 <topol_> k, lets see a blueprint first
14:48:41 <tongli> considering the patch may take some time to complete
14:49:14 <gema> tongli: I get an error when I open that link
14:49:27 <gema> Launchpad doesn't seem to know how interop challenge workloads tracks feature planning
14:49:28 <topol_> tongli+++ I wasn't sure if the patch was close.  Blueprints are used to save the patch committer the pain and misery of submitting something that then has the wrong design/architecture
14:49:28 <tongli> the blueprint page?
14:49:47 <gema> yep, you will have to go into the settings and tell it you want to use blueprints
14:49:55 <tongli> @topol_, exactly, we should have the blueprint first.
14:50:24 <topol_> #action HelenYao, zhipengh_,tongli to work together to get a blueprint submitted
14:50:25 <tongli> I am sure other companies wanting to contribute to NFV might have some say to it. so let's have an agreement on the content first.
14:50:32 <topol_> tongli+++
14:50:48 <dmellado> I can't say more ++ on it ;)
14:51:10 <topol_> k so how will we do our blueprints, are we going to have a specs folder and review blueprints in there like they are code?
14:51:50 <tongli> https://blueprints.launchpad.net/interop-workloads @gema, you cannot open it?
14:51:58 <zhipengh_> topol_ what i asked earlier
14:52:04 <dmellado> I'd go for that, I guess it'd be easier than spawning a new repo
14:52:11 <tongli> @topol_, we do blueprint on the launchpad page.
14:52:21 <zhipengh_> tongli that is for registering
14:52:22 <topol_> tongli yes but that's old school
14:52:36 <zhipengh_> tongli the actual BP rst file needs to go somewhere
14:52:49 <topol_> register there, but review and comment in some form of repository
14:52:57 <zhipengh_> yep
14:53:13 <topol_> for example, #link https://review.openstack.org/#/q/status:open+project:openstack/keystone-specs,n,z
14:53:14 <tongli> hmmm, ok, I thought we create it there, like creating bugs.
14:53:30 <tongli> hmmm. ok,
14:53:38 <topol_> tongli, you register it there but then look at the contents of https://review.openstack.org/#/q/status:open+project:openstack/keystone-specs,n,z
14:54:08 <topol_> to keep life simple, should we just have a specs folder?
14:54:14 <topol_> and not a separate repo?
14:54:17 <dmellado> topol_: yep
14:54:17 <tongli> so, we create a new folder in the project for blueprints
14:54:21 <dmellado> +1 on the folder
14:54:24 <dmellado> maybe within docs
14:54:33 <dmellado> and I can tweak a patch for tox to generate the docs
14:54:49 <topol_> dmellado, can you set up the specs folder for us?
14:55:05 <dmellado> yep, I'll put a patch for that, no worries
14:55:07 <tongli> can we just create /bp at the root of the project folder?
14:55:15 <dmellado> I'll do that after the meeting
14:55:19 <topol_> dmellado very nice, I love the auto-generation of docs
14:55:38 <topol_> K, folks I only have 5 mins I need to move on
14:55:43 <topol_> let me jump ahead
14:56:03 <topol_> #topic PTG Meeting Agenda and Logistics:  Meeting Feb 21, Tuesday afternoon 1:30 - 5:00 RefStack room:
14:56:04 <topol_> Agenda and participant List:  https://etherpad.openstack.org/p/interop-challenge-meeting-2017-01-11
14:56:04 <topol_> PTG Meeting will now include meeting with Mark Collier to discuss showcasing Interop Challenge Phase 2 in Boston
14:56:15 <topol_> so good news, but bad for my blood pressure
14:56:45 <topol_> Mark Collier will meet with us at PTG, the foundation wants to do something with InteropChallenge in Boston
14:57:15 <dmellado> heh, live demo again?
14:57:21 <topol_> so yay, more visibility. I thought we could stay under the radar but the big stage is calling us again
14:57:44 <topol_> dmellado, looking like something like that.  we can discuss at the PTG with Mark
14:58:05 <topol_> but understand now I have to go back to making sure we deliver.
14:58:19 <dmellado> let's discuss it there too
14:58:29 <topol_> so I worry about things like juju vs ansible etc
14:58:31 <dmellado> but if that's the case we ought to try small milestones
14:58:45 <topol_> dmellado agreed
14:58:53 <dmellado> and try to focus on small enhancements
14:58:53 <topol_> need some MVPs
14:59:23 <topol_> so lets see how we are with workload progress at the PTG
14:59:35 <dmellado> sounds fine for me ;)
14:59:51 <topol_> I'm out of time; tongli, if we have an update on the China chapter it needs to wait till next week
15:00:04 <gema> o/
15:00:08 <topol_> #action tongli to update on china chapter
15:00:09 <gema> bye
15:00:09 <tongli> that is fine.
15:00:09 <zhipengh_> one thing y'all need to keep in mind is that the NFV workload will run on NFV private clouds; that means for a verified VNFM like Juju, chances are the majority of NFV private clouds already support it
15:00:25 <topol_> #endmeeting