14:00:27 <topol> #startmeeting interop_challenge
14:00:28 <openstack> Meeting started Wed Feb  1 14:00:27 2017 UTC and is due to finish in 60 minutes.  The chair is topol. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:30 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:32 <openstack> The meeting name has been set to 'interop_challenge'
14:00:45 <tongli> o/
14:00:47 <topol> Hi everyone, who is here for the interop challenge meeting today?
14:00:47 <topol> The agenda for today can be found at:
14:00:47 <topol> # https://etherpad.openstack.org/p/interop-challenge-meeting-2017-02-01
14:00:49 <topol> We can use this same etherpad to take notes
14:00:55 <markvoelker_> o/
14:02:00 <topol> small crowd today... anyone else here?
14:02:05 <eeiden> o/
14:02:41 <topol> K, let's get started. Hopefully other folks join.
14:02:56 <topol> Agenda:
14:02:56 <topol> #topic Review last meeting action items
14:02:56 <topol> #link http://eavesdrop.openstack.org/meetings/interop_challenge/2017/interop_challenge.2017-01-25-14.03.html
14:03:21 <topol> tongli to update agenda with corrected terminology
14:03:32 <topol> I think you already did this, correct?
14:03:51 <dmellado> hey topol o/
14:03:56 <dmellado> I was about to miss the meeting
14:04:03 <topol> hi dmellado, glad you made it
14:04:04 <tongli> @topol, yes, I did.
14:04:05 <dmellado> as it isn't even stated on the IRC
14:04:11 <gema> o/
14:04:27 <topol> dmellado what does that mean?
14:04:27 <dmellado> in the last meeting tongli IIRC stated that there wouldn't be one, due to the Chinese New Year
14:04:40 <zhipengh> o/
14:04:45 <dmellado> I mean, it wasn't on the meeting page, IIRC
14:04:53 <dmellado> it stated that it was going to be next week
14:04:58 <dmellado> let me check, just in case
14:05:12 <tongli> @dmellado, that was for the china chapter meeting.
14:05:23 <dmellado> https://wiki.openstack.org/wiki/Interop_Challenge#Meeting_Information
14:05:28 <tongli> we still have our own. their meeting most likely will be in Chinese.
14:05:39 <topol> oh... wow, I thought I went rogue there for a sec. :-)
14:05:48 <dmellado> oh, it's there now
14:05:50 <dmellado> cool!
14:05:52 <skazi_> o/
14:06:15 <dmellado> can't we integrate that china chapter meeting with the usual one?
14:06:26 <dmellado> I'd love to hear what do they have to say ;)
14:06:31 <topol> but that reminds me I can send a ping to everyone before meetings start on IRC.   I'll look at the logs and make a list of our common attendees
14:06:37 <tongli> @dmellado, you are welcome to join in. the problem was language and time
14:07:10 <topol> dmellado, we plan to use tongli and his bilingual abilities to keep us updated.  I should add that as a permanent agenda item.
14:07:26 <dmellado> tongli: I see, well, if it's just about time we can have a split in US/EMEA/ASIA
14:07:35 <tongli> @topol, yes, please.
14:07:43 <dmellado> but if you're proxy-ing tongli , then I'm fine for now ;)
14:07:44 <topol> #action topol to create a ping reminder list for our meeting attendees
14:08:09 <topol> #action topol add agenda item for update from China chapter meeting
14:08:23 <tongli> @dmellado, also a language issue. most of these guys insisted on using Chinese when I talked to them while I was there two weeks ago to kick off the chapter.
14:08:41 <dmellado> tongli: hmm, interesting stuff
14:08:41 <topol> but their code goes in our repository :-)
14:08:50 <dmellado> I can kinda understand the language barrier
14:09:02 <dmellado> but then I might request a spanish meeting too xD
14:09:05 <tongli> @topol, that is  right. also the meeting minutes everything.
14:09:11 <topol> tongli to add a recommended actions before you show up at PTG section to the agenda etherpad
14:09:20 <tongli> just different time and language for the meeting.
14:09:25 <topol> tongli did you add this?
14:09:34 <dmellado> anyway, I won't block the meeting with this ;)
14:09:47 <gema> dmellado: you mean more than you already did? :P
14:09:55 <dmellado> gema: heh xD
14:10:02 <topol> tongli I don't see this. I think you still need to add it
14:10:09 <dmellado> gema: don't go on, or I'll sign you up for the spanish chapter
14:10:28 <gema> dmellado: I don't speak spanish, lo siento senyor
14:10:44 <tongli> @topol, what did I add?
14:10:48 <topol> dmellado Yo hablo espanol.
14:11:05 <dmellado> topol: gema oh, perfecto entonces xD
14:11:27 <tongli> haha.
14:11:47 <topol> tongli you were supposed to add a section on what new folks should do ahead of the meeting to be able to show up and run the workloads, I think
14:12:02 <tongli> @topol, do you mean the meeting information on the wikipage about China chapter?
14:12:13 <topol> no, Im talking about the PTG meeting
14:12:35 <topol> #link https://etherpad.openstack.org/p/interop-challenge-meeting-2017-01-11
14:12:55 <topol> werent you going to add something to that or did I misunderstand?
14:13:19 <topol> anyway, we can figure that out offline
14:13:24 <tongli> k.
14:13:39 <topol> topol to add agenda item to discuss using coreos for k8s workload
14:13:45 <topol> that was done
14:13:58 <topol> topol to confirm exact PTG schedule time allocation with cdiep
14:14:24 <topol> this is done as well.  I updated #link  https://etherpad.openstack.org/p/interop-challenge-meeting-2017-01-11
14:14:41 <topol> We get the RefStack room 1:30-5:00 on Tuesday :-)
14:15:00 <topol> next item:
14:15:02 <topol> all, please review patches #link https://review.openstack.org/#/q/status:open+project:openstack/interop-workloads,n,z
14:15:14 <dmellado> on the patches, tongli
14:15:29 <dmellado> could you please test some ansible versions and update my patch on the tox envs
14:15:37 <dmellado> I really do think that having proper tox and maybe gates
14:15:55 <dmellado> would be a must if we want to verify versions
14:15:58 <dmellado> xD
14:16:02 <topol> looks like #link https://review.openstack.org/#/c/409180/ is the only one that needs review. Nice!
14:16:06 <tongli> @dmellado, ok, I will try that.
14:16:19 <topol> great thanks dmellado and tongli
14:16:31 <tongli> @topol, yes, I reviewed zhipeng's summary patch. it just went in, I think.
14:16:33 <dmellado> thanks tongli !
14:16:37 <topol> todays items
14:16:53 <tongli> this one, we kind of need confirmation on the versions of ansible and shade.
14:17:04 <tongli> I will try a few things and add comments.
14:17:18 <tongli> as an action.
14:17:57 <topol> #action tongli to try some things on the patch
14:18:02 <topol> #topic PTG Meeting Agenda and Logistics:
14:18:02 <topol> Meeting Feb 21, Tuesday afternoon 1:30 - 5:00 RefStack room:
14:18:02 <topol> Agenda and participant List:  https://etherpad.openstack.org/p/interop-challenge-meeting-2017-01-11
14:18:09 <tongli> #action tongli will figure out versions of ansible and shade to be supported by the lampstack workload
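The version question behind this action item could be handled with a small pre-flight gate before the workload runs. The sketch below is illustrative only, not code from the interop-workloads repo; the `MINIMUMS` values and the `check_versions` helper are placeholders for whatever versions tongli's investigation settles on.

```python
# Illustrative sketch (not from interop-workloads): fail fast when the
# installed ansible/shade are older than the versions the playbooks were
# verified against. Minimum versions here are placeholders.

def parse_version(v):
    """Turn '2.2.1.0' into a comparable tuple of ints, ignoring suffixes."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def meets_minimum(installed, minimum):
    """True if the installed version is at least the minimum."""
    return parse_version(installed) >= parse_version(minimum)

# Hypothetical minimums, for illustration only.
MINIMUMS = {"ansible": "2.2.0", "shade": "1.16.0"}

def check_versions(installed):
    """Return the names of packages that fail their minimum-version gate."""
    return [name for name, floor in MINIMUMS.items()
            if not meets_minimum(installed.get(name, "0"), floor)]
```

In practice the `installed` mapping would come from querying the local environment (e.g. `pip list`), and a non-empty result would abort the playbook run with a clear message.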
14:18:24 <topol> okay thats been covered well enough. Look forward to seeing everyone!
14:18:35 <topol> #topic China Chapter update
14:18:40 <topol> tongli any updates?
14:18:59 <topol> or none due to Chinese New Year?
14:19:03 <tongli> no, they are on New Year vacation for two weeks.
14:19:12 <tongli> they won't be back until after 2/15
14:19:18 * topol easy agenda today :-0
14:19:45 <topol> #topic updates from eeiden on nfv.
14:19:54 <topol> eeiden any updates on the NFV app?
14:21:21 <topol> eeiden, still around?
14:21:23 <zhipengh> one thing I might add on this topic, we could coordinate this with OPNFV testing folks
14:21:35 <zhipengh> they have done plenty related  work
14:21:55 <topol> zhipengh,  can you provide more details. This sounds interesting
14:22:31 <gema> zhipengh: who are the "opnfv" testing folks?
14:22:41 <eeiden> yes sorry!
14:22:58 <dmellado> gema: that could be plenty
14:23:10 <dmellado> OPNFV, just ODL, both?
14:23:30 <zhipengh> so for example in OPNFV we first define scenarios, sth like "VM-2-VM traffic measurement for clearwater vIMS "
14:23:31 <dmellado> I kinda think that going with OPNFV goes a li'l bit out of the scope of OpenStack
14:23:53 <zhipengh> with scenarios there will be test scripts developed
14:24:17 <zhipengh> and testing projects will help feature projects to test out those scripts with workloads
14:24:38 <zhipengh> gema meaning developers from Ericsson, Cisco, Huawei, great people
14:24:42 <eeiden> No updates yet. OPNFV Danube release code freeze just completed so we should be good very soon -- will check on it today
14:25:02 <topol> i guess I would have to see what the code for the scenario looked like to see if it was in or out of scope
14:25:04 <zhipengh> dmellado OPNFV, which includes ODL integration with OpenStack
14:25:29 <dmellado> zhipengh: I know, but just pointing at it
14:25:31 <zhipengh> topol we could pick those scenarios that involved OpenStack
14:25:32 <gema> topol: we also need to understand what APIs we are targeting here
14:25:49 <gema> topol: if they are outside of the scope of openstack then the use case is not what we need
14:25:51 <dmellado> zhipengh: that's exactly why I do say that's going over the boundaries
14:26:04 <dmellado> there are a couple of OPNFV installers, such as apex
14:26:12 <markvoelker_> FWIW, the Interop WG will be having some discussions with OPNFV folks in the near future anyway regarding a possible "OpenStack Powered NFV" program (this is not committed yet, just an idea we're working up).
14:26:17 <dmellado> but not every vendor would have OPNFV
14:26:30 <dmellado> if we all agree on adding such scenario, fine
14:26:37 <dmellado> but that'd have to be agreed first
14:26:39 <topol> gema, I agree. I guess I'm waiting to see some NFV workloads to review.  Between the China Chapter and eeiden, do we have anything close to pushing a patch to review?
14:26:57 <dmellado> that'd be a good start
14:27:04 <markvoelker_> If vertical programs like this do come into existence, it seems quite reasonable to coordinate an interop challenge for them too.
14:27:31 <topol> markvoelker_ +++
14:27:36 <zhipengh> dmellado we could just not use non-OpenStack related test cases
14:28:20 <topol> it sounds like an interesting discussion.  But until a patch shows up.... :-)
14:28:32 <dmellado> zhipengh: I was mainly wondering about for example
14:28:40 <markvoelker_> We will likely at least have some hallway track discussion on this (if not something more formal) at the PTG for folks who are interested.  I'll keep you all advised as we figure out scheduling. =)
14:29:09 <topol> markvoelker_ I am okay with it being on our agenda. Anyone is free to add to the agenda
14:29:27 <zhipengh> dmellado :)
14:29:40 <dmellado> zhipengh: my main 'concern' is for example
14:29:52 <dmellado> the different behaviour between plain neutron
14:29:59 <dmellado> and neutron+odl
14:30:25 <dmellado> I'd see that as something like what markvoelker_ said, something along the lines of an extension of the interop
14:30:33 <dmellado> but let's wait to see some actual patch ;)
14:30:40 <zhipengh> agree
14:30:41 <topol> dmellado. makes sense!
14:31:03 <topol> eeiden, any updates on your NFV workload?
14:32:00 <eeiden> No, so sorry. Will keep you all posted as soon as I have something.
14:32:06 <tongli> @dmellado, exactly, code talks.
14:32:16 <tongli> I'd like to see some patches from anyone.
14:32:37 <topol> tongli+++
14:32:45 <topol> K next topic
14:32:54 <topol> #topic open discussion
14:33:11 <tongli> @topol, I added one item
14:33:11 <topol> anything else for today? any updates on k8s workload?
14:33:44 <dmellado> topol: I was wondering if there's any plan for the Boston summit, I guess we'll get to talk about that at the PTG
14:33:46 <tongli> @topol, I have been working on it. but as you may have expected, the networking part is quite difficult.
14:33:49 <gema> topol: are we going to be doing something in Boston related to interop ? present something? I need to decide whether I need to be there
14:33:55 <dmellado> but would the idea be to repeat the keynote?
14:34:12 <dmellado> we might need a bigger stage, though xD
14:34:19 <tongli> I would like to ask the folks, for the workload, what kind of networking configuration we like to have for the k8s workload.
14:34:30 <topol> #topic Any preference for networking choice for kubernetes workload?
14:34:43 <topol> dmellado, good question
14:35:11 <topol> let's cover it after tongli's topic
14:35:17 <topol> go ahead tongli
14:35:41 <tongli> networking choice for k8s workload, what folks think? there are quite a lot options.
14:35:55 <tongli> flat networking or overlay over overlay.
14:36:09 <daniela_ebert> which options do you have in mind tongli?
14:36:46 <tongli> I am thinking flat which seems to me to be more practical like how google does it.
14:37:11 <dmellado> how do you plan to run the k8s workload?
14:37:20 <dmellado> k8s over OpenStack?
14:37:29 <dmellado> if so, it could even make sense to give kuryr a try
14:37:31 <daniela_ebert> flat sounds less error-prone
14:37:40 <tongli> flannel is the overlay; since we are doing this on OpenStack, it will be overlay over overlay,
14:37:44 <zhipengh> dmellado +1
14:38:03 <tongli> probably will not be a good choice for production.
14:38:21 <topol> tongli what would be a good choice for production?
14:38:37 <markvoelker_> From an interop perspective I think flat makes sense.  Not a lot of productized kuryr yet.
14:38:41 <tongli> @dmellado, the workload will basically like this.
14:38:58 <markvoelker_> O-in-o with flannel would probably be fine too, but there are some downsides to that approach.
14:38:59 <topol> markvoelker+++
14:39:05 <tongli> 1. provision few nodes (4 at first, configurable of course)
14:39:23 <tongli> 2. install k8s master and 3 worker nodes
14:39:29 <tongli> 3. setup networking
14:39:42 <tongli> 4. start some container apps.
14:39:56 <tongli> 5. test they are all working
14:40:15 <tongli> 6 (option), remove a node, make sure everything still works
14:40:26 <topol> tongli I vaguely recall a walk stage of a k8s app as well :-0
14:40:40 <dmellado> tongli: ack, thanks for the clarification
14:40:40 <tongli> 7(option), add a node,
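The seven steps tongli lists above can be modeled as a simple dry-run plan. This is only an illustrative sketch of the sequence (the real workload would be an Ansible playbook in interop-workloads); `build_plan` and its defaults are hypothetical names invented here, not part of the actual workload.

```python
# Illustrative dry-run of the k8s-on-OpenStack workload steps outlined
# above. Names and defaults are placeholders; this only models the
# sequence of phases, it does not touch a cloud.

def build_plan(worker_count=3, exercise_scaling=True):
    """Return the ordered phases for a 1-master + N-worker cluster."""
    nodes = 1 + worker_count                      # step 1: provision nodes
    plan = [
        f"provision {nodes} nodes",
        f"install k8s master and {worker_count} worker nodes",  # step 2
        "set up networking",                      # step 3
        "start container apps",                   # step 4
        "verify apps are working",                # step 5
    ]
    if exercise_scaling:                          # optional steps 6-7
        plan += ["remove a node and re-verify",
                 "add a node and re-verify"]
    return plan
```

With the defaults this yields the 4-node cluster tongli mentions (1 master, 3 workers), and dropping `exercise_scaling` skips the optional scale-down/scale-up steps.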
14:41:23 <tongli> @dmellado, I think that is what I am thinking of doing now, basically k8s over openstack.
14:41:43 <tongli> I do not know if openstack over k8s is something we should look into, but I am open.
14:42:22 <topol> both use cases are interesting but my preference is k8s over openstack... ask me why
14:42:46 <tongli> @topol, why brad?
14:43:05 <topol> thanks tongli :-). I owe you.
14:43:10 <dmellado> heh xD
14:43:57 <topol> There are many folks in the container world who view OpenStack as not providing value to k8s.   They say it's too complicated, etc.
14:44:30 <gema> topol: to be fair to them, I am still struggling to understand how this workout works
14:44:31 <topol> I think we help OpenStack a lot if we show that k8s runs great on an OpenStack infrastructure
14:44:34 <gema> workload, sorry
14:45:05 <topol> gema at a high level I think of it as k8s using OpenStack as its cloud infrastructure
14:45:06 <tongli> @gema, the problem is really the networking, what else can I say.
14:45:29 <gema> topol: as the provider of what?
14:45:30 <gema> containers?
14:45:32 <gema> VMs?
14:45:33 <zhipengh> k8s is growing way more complicated ...
14:45:36 <gema> VMs with containers?
14:45:43 <topol> Cause the alternative is k8s doesn't use OpenStack, rolls its own volume storage and virtual networking, and claims OpenStack provides no value
14:45:49 <tongli> k8s also started doing the networking plugins.
14:46:08 <zhipengh> tongli storage is also a headache
14:46:21 <dmellado> topol: +1 on k8s over OpenStack
14:46:33 <topol> zhipengh you are correct. But I see two possible futures
14:46:51 <topol> 1.  K8s runs great on OpenStack and the projects are complementary
14:47:16 <tongli> any way, I just want to get a feel on how people thinking about networking choice for our workload.
14:47:18 <topol> 2.   k8s becomes frustrated with trying to make OpenStack work and reinvents all our wheels
14:47:29 <topol> which future do we want?
14:47:40 <tongli> most likely the workload will need an OpenStack tenant network.
14:48:11 <topol> tongli, it is up to us to show the way forward.  Does that make sense?
14:48:12 * gema wonders how topol and tongli's private meetings look like, each one of them following their own train of thought x)
14:48:30 <zhipengh> tongli volumes are a mess atm
14:48:35 <tongli> @topol, understand.
14:48:39 <gema> zhipengh: what does that mean?
14:48:45 <zhipengh> topol i doubt they could do 2 tbh
14:48:55 <topol> gema how so?  the meetings usually end up with me taking tongli to lunch and he forgets his wallet (not last time though, he paid)
14:49:05 <gema> haha
14:49:07 <tongli> @topol, haha.
14:49:26 <zhipengh> gema k8s has a volume plugin mechanism, but in-tree vs out-of-tree is still up for debate
14:49:27 <tongli> because I was so focused on my work.
14:49:35 <topol> on a serious note, if this group doesn't show k8s running great on OpenStack, who else would?
14:49:45 <zhipengh> anyways OpenStack could do a lot as a cloudprovider for k8s
14:49:53 <zhipengh> topol agree
14:50:08 <topol> I have been to kubecon, OpenStack does not get high praise from that community.
14:50:30 <tongli> ok, guys, I did not want to start a meeting to talk about which is better, just want to know the preference of networking for our k8s workload.
14:50:47 <tongli> I will go with the flat for now and see how it goes.
14:50:50 <topol> so that's why I think we help OpenStack if we do k8s over OpenStack as the focus
14:50:57 <gema> tongli: sorry I was only able to follow topol's meeting line
14:51:09 <topol> sorry tongli go ahead
14:51:17 <dmellado> tongli: on that,  I guess we can go for flat first, and then let's see if we can get to flannel if everything works ok
14:51:59 <topol> any concerns with going flat first?
14:52:07 <gema> nope, sounds easier
14:52:16 <tongli> @dmellado, that is what I am thinking as well, since this is a bit sensitive to the various network implementations across many different clouds.
14:52:29 <tongli> I know some of the folks do not have a tenant network.
14:53:10 <topol> #agreed k8s workload will use flat networking  first
14:53:24 <topol> ok 7 mins left
14:53:27 <tongli> I forgot which clouds, but at least two clouds did not support tenant networks.
14:53:28 <gema> tongli: is this how we use it: https://kubernetes.io/docs/getting-started-guides/openstack-heat/ ?
14:53:39 <tongli> @gema, cannot use heat
14:53:39 <gema> is the heat api the right one for this?
14:54:04 <tongli> most clouds do not have it on.
14:54:33 <gema> oh, so we are ahead of ourselves, ok
14:54:45 <dmellado> gema: depending on what we want to do
14:54:53 <topol> #topic plans for Boston
14:54:53 <dmellado> I guess we can switch heat for ansible as an orchestrator
14:54:55 <gema> dmellado: ok, I will listen to you guys, you have tried it
14:55:36 <gema> go ahead topol
14:55:49 <topol> So I haven't thought about trying to get back on the keynote stage for Boston.  It was a lot of pressure, and my fear was we'd go back into making things work and locking them down
14:56:10 <topol> as opposed to having less pressure to make progress on interop with the workloads
14:56:44 <topol> mark voelker, tongli, and I are submitting a session to do an update on phase two
14:56:49 <gema> plus it wouldn't be as impressive anymore since we've done it once?
14:57:04 <gema> ok
14:57:21 <topol> gema.  yes not as much to gain because that day in Barcelona was magical
14:57:26 <tongli> @gema, unless it's something very different and as exciting as what we did last time.
14:57:34 <dmellado> maybe some kind of workshop
14:57:48 <dmellado> i mean, a session showing how we run the workloads on the clouds
14:57:51 <topol> But I think a workshop session would be awesome to submit
14:57:55 <zhipengh> Forum ?
14:57:57 <topol> dmellado +++
14:57:59 <dmellado> where any dev from any cloud might join
14:58:31 <topol> zhipengh, I think Anni Lai from Huawei is trying to put together a panel too
14:58:32 <dmellado> I'll try to put something as a session before the deadline
14:58:36 <tongli> @topol, I like workshop a lot.
14:58:36 <dmellado> when was it, Feb 6th?
14:59:19 <topol> dmellado  please put a session together.  Include some other folks if you dont mind
14:59:31 <topol> dmellado, yes Feb 6th is the deadline
14:59:36 <dmellado> sure, is there anyone around who would like to co-present?
14:59:48 <dmellado> gema: hogepodge luzC ?
14:59:52 <dmellado> anyone ?
15:00:06 <topol> dmellado want me to voluntold tongli?
15:00:07 <markvoelker_> I can probably help with that
15:00:12 <gema> I would love to, but I am not sure I am going to be there x D
15:00:17 <daniela_ebert> I could help with that dmellado
15:00:33 <topol> daniela_ebert, very nice
15:00:37 <tongli> I can help with workshop sessions.
15:00:38 <dmellado> cool, then IIRC there's a max of 3 presenters
15:00:55 <dmellado> let me check
15:01:07 <luzC> yes, o/
15:01:39 <topol> would be nice to get as many folks visibility as we can
15:01:42 <dmellado> tongli: markvoelker_ tongli
15:01:48 <dmellado> if you guys are doing the other session
15:01:58 <dmellado> I'll put daniela_ebert and luzC along
15:02:01 <dmellado> would that work for you guys?
15:02:10 <dmellado> we can all together work on the workshop in any case
15:02:16 <topol> dmellado that makes sense to me
15:02:57 <topol> everyone okay with that?
15:02:57 <tongli> we ran out of time.
15:03:04 <dmellado> yep
15:03:09 <daniela_ebert> fine with me :-)
15:03:20 <dmellado> daniela_ebert: luzC I'll put a brief description and add you as co-presenters so you can edit it
15:03:24 <dmellado> tomorrow morning my time ;)
15:03:28 <topol> k ping folks on irc if any concerns
15:03:33 <topol> #endmeeting