14:00:35 <topol> #startmeeting interop_challenge
14:00:35 <openstack> Meeting started Wed Mar  1 14:00:35 2017 UTC and is due to finish in 60 minutes.  The chair is topol. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:37 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:39 <openstack> The meeting name has been set to 'interop_challenge'
14:00:47 <tongli> o/
14:00:54 <topol> Hi everyone, who is here for the interop challenge meeting today?
14:01:01 <eeiden> o/
14:01:07 <HelenYao_> o/
14:01:08 <topol> Ping dmellado tongli gema zhipengh markvoelker daniela_ebert skazi luzC gcb hodgepodge yaohelan
14:01:16 <skazi_> o/
14:01:18 <markvoelker> o/
14:01:43 <HelenYao_> topol: pong
14:01:56 <topol> The agenda for today can be found at:
14:01:56 <topol> #link https://etherpad.openstack.org/p/interop-challenge-meeting-2017-03-01
14:01:56 <topol> We can use this same etherpad to take notes
14:02:14 <topol> we have a packed agenda today
14:02:29 <topol> #topic Review last meeting action items
14:02:29 <topol> #link http://eavesdrop.openstack.org/meetings/interop_challenge/2017/interop_challenge.2017-02-15-14.00.html
14:02:59 <topol> Also items from PTG
14:03:11 <topol> #link https://etherpad.openstack.org/p/interop-challenge-meeting-2017-02-21
14:03:19 <topol> so first item
14:03:35 <topol> all please review patches #link https://review.openstack.org/#/q/status:open+project:openstack/interop-workloads,n,z
14:03:46 <topol> several patches out there need review
14:04:13 <topol> any that need discussion or do folks just need to find time to review?
14:04:19 <tongli> Yes. we do.
14:04:43 <topol> go ahead tongli
14:05:11 <topol> any patches we need to discuss?
14:05:11 <tongli> yes, I think we just need to review these patches.
14:05:16 <topol> K, sounds good
14:05:57 <topol> #action all please review #link https://review.openstack.org/#/q/status:open+project:openstack/interop-workloads,n,z
14:06:17 <topol> #topic PTG Updates and decisions
14:06:28 <topol> #link https://etherpad.openstack.org/p/interop-challenge-meeting-2017-02-21
14:06:43 <topol> so one item here is mine:
14:07:06 <topol> #action topol to contact board members to increase participation
14:07:24 <topol> I will contact platinum and gold board members starting today
14:07:46 <topol> lets see what else we have from our ATL getaway
14:07:51 <tongli> @topol, we currently have 5 companies signed up.
14:08:40 <topol> tongli, yep and we need lots more
14:09:08 <markvoelker> #link https://wiki.openstack.org/wiki/Interop_Challenge#Boston_Summit_On_Stage_Keynote_K8S_Demo_Commited_Parties Sign up page on the wiki
14:09:15 <topol> #topic Boston Demo, sign-up sheet on the wiki page. #link https://wiki.openstack.org/wiki/Interop_Challenge
14:09:35 <topol> markvoelker had a better version :-)
14:10:22 <topol> another item we had from PTG: Lauren to help get user feedback on Kube apps to use
14:10:43 <topol> so Mark Collier actually did this
14:11:06 <topol> #topic Update from Mark Collier / Alex Polvi of CoreOS: Kube Workload Enhancements
14:12:31 <topol> So I don't think we had enough runway to invite Alex Polvi to this week's meeting, but here is what I got from Mark Collier:
14:13:06 <tongli> @topol, right. next week probably.
14:13:12 <topol> So I had an interesting chat with Alex Polvi from CoreOS about cool use cases on top of Kubernetes for the interop challenge. He suggested we look at CockroachDB, which could be configured to replicate data ACROSS all of the openstack clouds.  We could do something that shows off how data is replicating in real time across each of the clouds, even though they are different public and private clouds… It could
14:13:13 <topol> demonstrate the power of a network of clouds where openstack’s diversity in geography and providers really shines.
14:13:35 <topol> Also got the following from Mark:
14:14:08 <topol> Another similar option would be to use Vitess, which is what YouTube uses to scale their DB. Vitess is more of a MySQL-inspired approach and CockroachDB is more Postgres-inspired.
14:14:23 * markvoelker is always game for a CockroachDB workload
14:14:23 <topol> And finally a third piece of input:
14:14:38 <topol> Last, but not least, one thing that I think would benefit our efforts would be to find a way for CoreOS to be part of the challenge, showing that our ecosystem spans the app tools / container layer too. If we could get their help in installing and configuring the K8S bits and maybe use their “distro” of it, that would bring them into the tent.  As luck would have it, they are in the process right now of testing
14:14:38 <topol> their K8S product on OpenStack to make that a supported cloud platform. So there might be a way to help them ensure k8s does in fact work smoothly on any openstack cloud while also showing it off to the world in Boston
14:14:56 <topol> So a lot to read for everyone there. I'll take a pause
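A minimal Ansible sketch of what cross-cloud CockroachDB clustering might look like. The group and variable names are hypothetical, flags may vary by CockroachDB version, and it assumes every node has a floating IP reachable from the other participating clouds — exactly the connectivity question raised below:

    - hosts: cockroach_nodes
      become: true
      tasks:
        # floating_ip is assumed to be defined per host in the inventory and
        # reachable from the other participating clouds.
        - name: Start a CockroachDB node that joins peers on the other clouds
          command: >
            cockroach start --insecure --background
            --advertise-host={{ floating_ip }}
            --join={{ groups['cockroach_nodes'] | map('extract', hostvars, 'floating_ip') | join(',') }}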
14:15:22 <tongli> yeah, my concern is the time to run such a workload.
14:16:04 <tongli> it may take a lot longer than 10 minutes if the number of nodes is greater than 3; remember Mark said that a 3-node workload sounded too small.
14:16:46 <markvoelker> IMHO the DB we deploy doesn't matter a whole lot, but CockroachDB is a neat piece of tech and one that's getting a lot of attention lately (particularly after Google's Spanner announcement a week or two ago).  Seems like a good choice.
14:17:07 <topol> tongli, well if he got the fancier workload I'm guessing he would bend on the number of nodes
14:17:47 <topol> tongli, but I agree it is hard to think about a fancy workload until at least we have everyone running the basic Kube workload on their cloud
14:17:57 <kgarloff> topol: Network connectivity between clouds for replication is something that might be difficult
14:18:04 <kgarloff> Especially the private clouds ...
14:18:16 <topol> kgarloff, very good points
14:18:21 <tongli> @kgarloff, agreed,
14:18:22 <skazi_> topol: do I understand correctly that you want to make different vendors' clouds replicas of the same DB?
14:18:31 <kgarloff> VERY nice showcase though ...
14:18:49 <skazi_> kgarloff: +1
14:19:07 <tongli> I think that is the idea. but if everyone replicates from the public cloud, then it should be ok for the private cloud as well.
14:19:17 <topol> skazi_ I think the idea is a user is using two different clouds, and so yes, the user's workload on the two different clouds has replicas of the same DB
14:19:48 <tongli> we are not going into the private cloud, just pulling from the public cloud; it may be doable, but again we need to get things going first.
14:20:02 <tongli> I wonder if I should switch gears and work on coreos.
14:20:28 <tongli> the k8s workload has been on ubuntu. coreos is a bit of a different beast.
14:20:37 <topol> so my view is this seems like a very cool demo but we are far from feeling comfortable about pulling it off.
14:20:47 <markvoelker> I think a sufficient starting point would just be to get a workload developed that deploys CockroachDB atop Coreos on a single cloud.  If we get that squared away we can move on to linking up clusters.
14:20:58 <topol> the other suggestion was about making coreos part of the challenge. Is that more doable?
14:21:16 <markvoelker> (whether that's on multiple clouds deployed by a vendor...e.g. two instances of VIO for example...or public-private, or whatever)
14:21:37 <topol> markvoelker, I like how you try to break it down into manageable and palatable iterative steps
14:22:03 * topol it seems less scary when markvoelker describes it :-)
14:22:37 <topol> so first part: how tough would it be to switch to a coreos image? tongli, any ideas?
14:22:40 <skazi_> markvoelker: +1, we should get the base working first and check the other options once we have participants list
14:22:45 <markvoelker> topol: I think using CoreOS for the base OS is doable.  Needs some elbow grease, but the mechanics shouldn't be that hard.
14:23:27 <topol> anyone else share markvoelker's confidence?
14:23:39 <tongli> @topol, since coreos does not have anything that ansible needs, it is a bit of a challenge to get it going the way ansible deals with other OSes like ubuntu or redhat.
14:24:14 <kgarloff> topol: Last time we tried, it was not hard to get CoreOS to work on our cloud. But getting it officially supported was something where the CoreOS folks asked for $$$ ... if they do this always, they'll be rich after we did the challenge :-)
14:25:32 <topol> tongli and kgarloff sounds like you both have great questions for Alex Polvi.  Let's see if we can get him to show up here next week.  at the very least can everyone write up questions we should ask Alex and I can forward them on?
14:25:32 <tongli> @kgarloff, I looked at it, and by following some docs I could find by googling, it did not work. Of course, I have not spent a lot of time on it, but in theory it should work.
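For context on the Ansible-on-CoreOS problem tongli describes: CoreOS ships without a Python interpreter, so ordinary Ansible modules fail against it. A minimal sketch of the usual workaround, with hypothetical group and path names — disable fact gathering and use only the raw module, which needs nothing on the target beyond SSH and a shell:

    - hosts: coreos_nodes
      gather_facts: false
      tasks:
        - name: Check for a bootstrapped Python (raw runs over plain SSH)
          raw: stat /opt/bin/python
          register: python_check
          changed_when: false
          failed_when: false

        - name: Bootstrap a portable Python so regular modules can run
          raw: |
            mkdir -p /opt/bin
            # fetch and unpack a self-contained Python under /opt/bin here
            # (details omitted; PyPy-based bootstraps were a known approach)
          when: python_check.rc != 0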
14:26:17 <tongli> @topol, right, we should talk to Alex.
14:26:23 <kgarloff> +1
14:26:27 <topol> #action all, any questions on coreos please send to topol (btopol@us.ibm.com)
14:26:38 <topol> #action topol, try to get Alex here next week
14:26:56 <topol> K, so lots to think about there. let's move on to the next item
14:27:17 <topol> #topic Challenges to run the k8s workload outside of the US
14:27:35 <topol> so who added this?
14:27:55 <topol> I assume someone outside the US? :-)
14:28:14 <tongli> @topol, I did.
14:28:24 <topol> tongli, the floor is yours
14:28:35 <tongli> haha, ok.
14:29:31 <tongli> when I wrote the workload for ubuntu, I purposely made the repo from which k8s binaries and other dependencies are downloaded configurable, so that you can have a local repo for the workload to download things from.
14:30:01 <topol> that sounds helpful
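As an illustration of the configurable repo tongli describes — the variable names here are hypothetical, not the workload's actual config keys — a runner behind a restricted network could point k8s_repo at a local mirror:

    - hosts: k8s_nodes
      become: true
      vars:
        # Point these at a local mirror if googleapis.com is unreachable.
        k8s_repo: "https://storage.googleapis.com/kubernetes-release/release"
        k8s_version: "v1.5.3"  # example version from the era
      tasks:
        - name: Download kubectl from whichever repo the runner configured
          get_url:
            url: "{{ k8s_repo }}/{{ k8s_version }}/bin/linux/amd64/kubectl"
            dest: /usr/local/bin/kubectl
            mode: "0755"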
14:30:26 <tongli> but the issue is that when you start up a pod, the container images often come from the docker repo or the google repo.
14:30:27 <kgarloff> tongli: Is the concern bandwidth or firewall policies?
14:30:40 <tongli> you know that gcr is blocked in China.
14:31:07 <markvoelker> kgarloff: the latter I think.  Sounds like we need a way to configure private registries?
14:31:18 <tongli> so I was told that often these guys will have to set up a proxy to run the workload.
14:31:38 <tongli> the proxy will be running somewhere in the US, so that the workload can go on,
14:31:55 <tongli> the problem with doing that is again the time; it will be quite slow,
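A sketch of the kind of proxy setup tongli mentions, pushed as a Docker systemd drop-in via Ansible; the proxy address and group name are placeholders:

    - hosts: k8s_nodes
      become: true
      vars:
        docker_proxy: "http://proxy.example.com:3128"  # placeholder address
      tasks:
        - name: Create the drop-in directory for the Docker daemon
          file:
            path: /etc/systemd/system/docker.service.d
            state: directory

        - name: Route Docker image pulls through the proxy
          copy:
            dest: /etc/systemd/system/docker.service.d/http-proxy.conf
            content: |
              [Service]
              Environment="HTTP_PROXY={{ docker_proxy }}" "HTTPS_PROXY={{ docker_proxy }}"

        - name: Restart Docker to pick up the proxy settings
          systemd:
            name: docker
            state: restarted
            daemon_reload: yes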
14:31:57 <kgarloff> tongli: for replicating the DB over the net, would this be a problem as well?
14:32:09 <tongli> true.
14:32:35 <tongli> @kgarloff, if the DB is not on a google domain, then it should be ok.
14:32:59 <kgarloff> ok
14:33:02 * markvoelker wonders if anyone is planning to deploy their OpenStack on GCE and hopes his head doesn't explode
14:33:02 <tongli> China blocks off anything related to google, facebook, twitter, etc.
14:33:10 <tongli> the list is long
14:33:56 <tongli> @markvoelker, not necessarily deploying OpenStack on gce, but pulling container images from gcr.
14:34:21 <markvoelker> tongli: I was thinking of the DB replication thing.  Shouldn't be an issue.
14:34:22 <tongli> container images are often pulled from the docker repo or gcr.
14:34:49 <tongli> such as the ones we use for the k8s dashboard and dns.
14:35:11 <markvoelker> tongli: So, for the registry problem: seems like the ask here is for an option to pull from a private registry?
14:35:28 <topol> tongli, so extra configuration in our workload (i.e. Kube and above) would be necessary to use private container repos or a proxy?
14:35:30 <markvoelker> tongli: Might make some operations faster for folks anyway since the registry could be colocated
14:35:33 <tongli> but anyway, these are just some problems these guys may face, making running the workload a bit more challenging.
14:36:39 <tongli> @markvoelker, sure, making it configurable is quite easy, just more variables in the config file.
14:37:06 <tongli> so the runner can change it. I think the real effort is in setting up the local/private repo for container images.
14:37:19 <dmellado> o/
14:37:37 <tongli> I have not done it myself and do not know what is involved.
14:37:40 <topol> tongli, looks like you found some ugly issues folks may run into. the sooner we get folks trying the workload on their cloud the better
14:38:08 <tongli> @topol, yes, exactly and more testing and more patches can really help.
14:38:08 <markvoelker> tongli: Right.  I think if folks want a local registry that's outside the scope of the workload (E.g. let them set it up however they want, just give them a variable to point to it and default to using gcr/dockerhub/whatever)
14:38:55 <tongli> @markvoelker, ok, I will add these variables.
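A sketch of what those variables might look like — the names are hypothetical, and the images shown are the gcr.io ones commonly used for the k8s dashboard and DNS at the time — defaulting to the public registries so only runners behind a firewall need to override:

    # In the workload's config file:
    container_registry: "gcr.io/google_containers"  # or a private Harbor, local mirror, etc.

    # Image references in the pod templates then become, e.g.:
    #   image: "{{ container_registry }}/kubernetes-dashboard-amd64:v1.5.1"
    #   image: "{{ container_registry }}/kubedns-amd64:1.9"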
14:39:06 <topol> okay, sounds like some good safety tips. Anything more we need to cover on this topic for now?
14:39:24 <markvoelker> tongli: Cool.  I may see if I can set up a private Harbor registry to help test.
14:39:44 <tongli> I will start looking into the coreos workload.
14:39:47 <topol> markvoelker +++ Thanks
14:40:16 <topol> #action tongli to start looking at coreos workload
14:40:27 <topol> K, next topic
14:40:54 <topol> we have covered k8s on coreos thoroughly, which brings us to....
14:41:03 <topol> #topic NFV workload updates
14:41:12 <topol> any updates on NFV?
14:41:13 <tongli> https://review.openstack.org/#/c/439492/
14:41:33 <tongli> we've got a WIP on the workload, you can see at the above link.
14:41:53 <HelenYao_> a new bp patch has been submitted to address the comments that were given after the bp was merged
14:42:27 <HelenYao_> a patch is in progress which will include the script
14:42:53 <dmellado> HelenYao_: cool, thanks
14:42:56 <tongli> @HelenYao_, thanks for the patch and the patch to address comments on the blueprint.
14:43:06 <topol> HelenYao_ will a description be added on what this does and how to run it?
14:43:36 <HelenYao_> when is the target date for the nfv patch?
14:43:37 <tongli> @HelenYao_, for the workload patch, please include a README.md file to help people follow the instructions to run it.
14:43:48 <HelenYao_> is there any rough schedule
14:44:01 <topol> tongli+++ yes, my thoughts exactly
14:44:11 <HelenYao_> tongli: sure. I was thinking about it before the meeting
14:44:16 <tongli> haha.
14:44:34 <tongli> that will also help you in developing the workload.
14:44:37 <topol> ok excellent, very nice to see progress on this!
14:45:06 <HelenYao_> do we have any rough schedule for nfv workload?
14:45:13 <topol> and more updates on NFV?
14:45:19 * topol 15 mins left
14:45:43 <tongli> @topol, Helen is asking for the schedule. do you have a requirement on that?
14:46:08 <topol> schedule for when it needs to be completed by?
14:46:15 <HelenYao_> yes
14:46:59 <topol> well, if we want Mark Collier to be able to mention it in the keynote, then before the Boston Summit is the most critical deadline
14:47:20 <zhipeng> topol is it still possible that we showcase on the main stage?
14:47:21 <topol> but ideally the sooner the better so folks can try and run it and get confidence with it.
14:47:26 <zhipeng> if we finish it on time
14:47:59 <topol> zhipeng, so define on time. on time means the workload is done and lots of folks are able to run it on their clouds
14:48:23 <topol> zhipeng, how soon until folks can try and run the workload on their cloud?
14:48:27 <zhipeng> we need the time to be defined
14:48:39 <zhipeng> when is the deadline that we need to make this happen ?
14:48:54 <topol> zhipeng, how much time do you think you need?
14:49:05 <topol> Right now its March 1st
14:49:18 <topol> can you get something for folks to run by March 14?
14:49:18 <tongli> I would say if we want time for people to run and test this, we need it to be done within March.
14:49:52 <zhipeng> ok then we could set March 14 for the first target
14:49:56 <kgarloff> tongli: +1
14:50:02 <zhipeng> the latest date to have everything running ok
14:50:08 <topol> tongli, yes, it must be done in March. I mentioned March 14 :-). If they are a few days over, that is still probably OK
14:50:22 <zhipeng> and end of March as the second target that we could test show for interoperability
14:50:34 <zhipeng> would that be ok ?
14:50:47 <topol> zhipeng we need to leave lots of time for other folks to try on their cloud. we always hit interesting unknown issues when we try all the other clouds.
14:51:15 <zhipeng> topol I know, that's why we need a ballpark figure on the timing
14:51:24 <zhipeng> so that we know when to hit what target
14:51:31 <zhipeng> so that people could do the testing on time
14:51:31 <topol> zhipeng, how is March 14 for having something for others to try on their clouds, and ideally everyone running it by March 31?
14:51:51 <zhipeng> topol that is reasonable for me :)
14:51:57 <topol> perfect.
14:52:39 <topol> #agreed March 14 for NFV workload available for test and March 31 for all clouds running it
14:53:11 <topol> zhipeng that leaves us time to go back to foundation and pitch what we think would look good on the keynote stage
14:53:21 <zhipeng> topol understood
14:53:37 <topol> the OpenStack Foundation will always want to see what we deliver before they commit to what they want to show on stage
14:53:55 <topol> k next topic
14:54:06 <topol> #topic Update from China Chapter Meeting
14:54:17 <topol> tongli any updates worth mentioning?
14:54:55 <topol> #topic Updates from China Chapter meeting
14:55:00 <tongli> they are having a meeting from today to Friday.
14:55:12 * topol weird, copy paste and setting topic don't get along
14:55:36 <tongli> in Xiamen, Daisy actually presented the k8s workload at the meeting there.
14:55:45 <topol> tongli, great!
14:55:53 <tongli> and companies have started running the k8s workload as well.
14:56:20 <topol> tongli are they running in the gcr issues you mentioned earlier
14:56:33 <topol> err running into
14:56:55 <tongli> I suspect that they have the issues, but they hit some other configuration issues first.
14:57:07 <tongli> I know they will hit the gcr thing later.
14:57:32 <topol> K, any other updates from the china chapter?
14:57:44 <tongli> that is all.
14:57:53 <tongli> they have bi-weekly meetings.
14:57:59 <topol> #topic open discussion
14:58:08 <topol> any other topics, 2 mins left :-0
14:59:18 <topol> I guess we are good. GREAT MEETING and it was very nice seeing folks in ATL last week!
14:59:37 <topol> we're done
14:59:43 <topol> #endmeeting