14:01:35 <tongli> #startmeeting interop_challenge
14:01:36 <openstack> Meeting started Wed Apr 12 14:01:35 2017 UTC and is due to finish in 60 minutes.  The chair is tongli. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:37 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:39 <openstack> The meeting name has been set to 'interop_challenge'
14:01:41 <mnaser> o/
14:01:54 <tongli> can you please announce yourself again, sorry I am a bit late.
14:02:11 <pilgrimstack> Hi, Jean-Daniel from OVH
14:02:17 <tongli> #link https://etherpad.openstack.org/p/interop-challenge-meeting-2017-04-12
14:02:17 <sparkycollier_> Hi it's mark collier, looking forward to working with everyone
14:02:17 <skazi_> o/
14:02:20 <markvoelker> o/
14:02:28 <alexrobinson> Hi, Alex Robinson from Cockroach Labs
14:02:36 <mnaser> mohammed naser from vexxhost
14:02:37 <topol> Hi Alex
14:02:39 <spencerkimball> Hi, Spencer Kimball from Cockroach Labs
14:02:47 <topol> Hi Spencer
14:03:00 <tongli> please see the etherpad for agenda.
14:03:50 <tongli> the foundation has decided to have our show tech check on Sunday 5/7 from 3:15 to 3:55pm.
14:04:12 <hogepodge> o/
14:04:17 <daniela_ebert> thats fine
14:04:22 <tongli> this is mandatory. you and your laptop and any other necessary things need to be there.
14:04:29 <vkmc> o/
14:04:30 <vkmc> hey
14:04:46 <mnaser> cool, i'll be arriving on sunday at 12pm so works for me.  is there any announced location yet (i know its early) to be at?
14:04:54 <tongli> and if you can not be there at 3:15, 10 minutes late is fine because it takes time for everybody to hook up things.
14:05:04 <sparkycollier_> main tech check purpose is to make sure all of the video outputs work from the laptops to our splitters
14:05:12 <mnaser> and how do we work out what sort of inputs we need?  i have one of those new adapter-for-everything macbook, so hdmi? dvi?
14:05:13 <tongli> but you do have to show up to make sure things are working.
14:05:20 <tongli> hdmi is the default,
14:05:32 <mnaser> tongli okay, i'll make sure i have an adapter
14:05:32 <tongli> if your laptop does not have hdmi, then you need to bring an adaptor.
14:05:50 <mnaser> ack
14:06:10 <pilgrimstack> ack
14:06:18 <tongli> then there will be another rehearsal on Monday, that time is TBD.
14:06:33 <tongli> got to be in both to qualify to be on stage.
14:06:55 <vkmc> the demo will be on the second day keynote, right?
14:06:56 <vkmc> like last time
14:07:01 <tongli> and also please put your info on this pad.
14:07:04 <zhipeng> tongli this only applies to the k8s demo or nfv as well ?
14:07:15 <tongli> #link https://etherpad.openstack.org/p/interop-challenge-boston-onstage
14:07:24 <sparkycollier_> second day keynote, correct
14:07:36 <tongli> #action, please add required information on the boston onstage etherpad.
14:07:44 <sparkycollier_> however unlike bcn, I will be the MC for day 2 so y'all are stuck with me this time
14:07:51 <tongli> @zhipeng, right now, the k8s workload only.
14:07:53 <daniela_ebert> tongli: already put my info to the pad. Bu to whom to send the logo?
14:08:03 <zhipeng> tongli okey
14:08:14 <zhipeng> do we have a Forum session so that we could demo the NFV Workload ?
14:08:17 <tongli> @daniela_ebert, Maria, her info is on the etherpad.
14:08:43 <tongli> @zhipeng, we do, let's talk offline.
14:08:53 <mnaser> thanks for that link, useful info tongli
14:08:53 <sparkycollier_> I think I need to get up to speed on the NFV workload and then I can see about a space/time for it
14:08:54 <zhipeng> okey-dokey
14:09:39 <tongli> @sparkycollier_, we do have one action item from last week. the action is to schedule a demo session with you so that you can take a look at the demo and make decision if that can go on stage as well.
14:09:53 <tongli> @sparkcollier_, do you have time next week?
14:10:04 <sparkycollier_> yes let's take that offline and set it up
14:10:12 <daniela_ebert> tongli: thanks!!
14:10:17 <tongli> @sparkcollier_, great. thanks.
14:10:38 <tongli> #link https://etherpad.openstack.org/p/interop-challenge-boston-onstage
14:11:09 <tongli> we communicate and find information from that pad, please have your questions there so that we can track and answer them, and also let other people know.
14:11:35 <tongli> that is the pad Tamara will be looking at and post requirements and other useful information such as date & time etc.
14:12:11 <tongli> ok. now, let's talk about the k8s workload.
14:12:19 <ksumit> @tongli For the k8s workload, do we need a public cloud? My IT org isn't very friendly :) but I did manage to get the workload to run successfully in a private cloud (connected through VPN).
14:12:23 <tongli> #topic k8s workload with cockroachdb cluster.
14:13:02 <tongli> @ksumit, great question, exactly what I like to discuss with mark, spencer next.
14:13:50 <tongli> I think that the current thinking on the demo goes like this. @sparkcollier_, @spencerkimball
14:14:25 <tongli> everybody run the current k8s workload (will remove the cockroachdb cluster setup)
14:14:36 <vkmc> on this topic, I'd like to bring up (if this is not already on the agenda) the existence of this issue https://hub.docker.com/r/cockroachdb/cockroach-k8s-init/
14:14:45 <tongli> the result will be just a kubernetes cluster on your openstack cloud.
14:15:10 <vkmc> we ran this workload with dmellado and we found that the cockroachdb pods were not being launched correctly due to this
14:15:14 <tongli> then mark and the gangs will show Virtual Machines, and possibly k8s dashboards.
14:15:41 <sparkycollier_> @vkmc we are looking at a different approach to provisioning the cockroachDB which @tongli is explaining now
14:15:49 <vkmc> sparkycollier_++
14:15:50 <vkmc> thx
14:15:58 <alexrobinson> vkmc: thanks for bringing that up - I'll look into that separately, but we don't plan on using that configuration
14:15:59 <tongli> after we made sure that all the OS clouds can have kubernetes clusters running successfully, we will move on to the second phase of the demo.
14:16:55 <tongli> first, one cloud will start a single node (could be multiple nodes) cockroachdb cluster on the k8s cluster you just created.
14:17:48 <tongli> then some information will be sent(or pulled by each cloud), that information (most likely a first cockroachdb node info) will be used by other clouds
14:18:20 <tongli> other clouds will start cockroachdb pods to start joining the first cockroachdb cluster.
14:18:49 <tongli> the end results will be a bigger cockroachdb cluster spread over multiple clouds.
14:19:23 <alexrobinson> what will we be doing for networking, again? public IP addresses? VPN?
14:20:01 <tongli> if a node joined the cockroachdb cluster successfully, then that node will start running a script (or program) to generate data against the cockroachdb, the purpose of that app is to create some load,
14:20:21 <tongli> so that the cockroachdb dashboard will show activities from the cluster.
14:20:38 <tongli> I think that is the basic idea.
14:21:27 <tongli> so two steps: first, get the k8s cluster up and running live on your OS cloud. second, join the first cockroachdb cluster and run the app to add data to the database.
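The join step in the flow above can be sketched roughly as follows. This is a hypothetical helper, not the actual demo playbook: the flag set mirrors CockroachDB's CLI of that era, and the host, store path, and port values are illustrative assumptions.

```python
# Hedged sketch of phase 2's join step: given the first cockroachdb
# node's advertised address (distributed out of band, as discussed),
# build the argv a joining node would pass to `cockroach start`.
# All values here are illustrative, not the workload's real config.

def build_join_command(first_node_host, port=26257, store="/cockroach-data"):
    """Return the command a joining node would run to enter the cluster."""
    return [
        "cockroach", "start",
        "--insecure",                        # demo only; production uses certs
        "--store=" + store,
        "--port=%d" % port,
        "--join=%s:%d" % (first_node_host, port),
    ]
```

For example, `build_join_command("203.0.113.10")` (a documentation-range placeholder address) yields the command each joining pod on the other clouds would run against the first node.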
14:21:31 <sparkycollier_> so what's the networking approach so that the different clusters on each cloud can talk
14:21:50 <tongli> mark and guys will talk and explain what we are doing.
14:22:03 <tongli> the key is how we connect nodes from various clouds.
14:22:13 <tongli> especially for the private cloud,
14:22:15 <mnaser> for networking, public cloud is easy mode.. the private is when things get tricky
14:22:39 <tongli> @mnaser, that is what I was worried about.
14:23:05 <tongli> my understanding is that for private cloud, each company has their own way of going through the VPN.
14:23:09 <mnaser> (also, i love to think people aren't evil but if there is some sort of "easy way" to enter the cluster during our demo when we do a multicloud thing, we'd have to worry about someone trying to do something slightly malicious)
14:23:42 <tongli> it will be difficult to use many different ways (certificates, credentials) to connect these nodes over 16 clouds.
14:23:48 <vkmc> mnaser++
14:24:03 <mnaser> and if things are going to be over public ips, it'll be tricky to "hide" things from a huge keystone (that's also most likely livestreamed?)
14:24:16 <tongli> @mnaser, that is a small issue, we can enable userid/password and other things on k8s cluster.
14:24:23 <tongli> I started working on that already.
14:24:26 <sparkycollier_> if we give enough info on screen to figure out how to access the clusters on public cloud, someone in the audience will do it
14:24:37 <tongli> the key is how to connect from public cloud to private cloud.
14:24:39 <jbryce> Can any private cloud demo-ers describe how they're accessing their demo environments from the stage network? Is everyone using different vpns?
14:24:47 <topol> sparkycollier_ +++
14:24:47 <sparkycollier_> good point @tongli that is the harder problem
14:25:40 <markvoelker> jbryce: Yes, I was planning to VPN from my laptop.
14:25:51 <tongli> @jbryce, IBM use CiscoMobility to go through ibm firewall. each individual has a certificate and I have no way to share that with anyone.
14:25:53 <ksumit> @jbryce I'd have to use the Cisco Any Connect client on my Mac to be able to connect.
14:25:56 <tongli> not even brad.
14:26:11 <topol> tongli Ha Ha So true
14:26:44 <tongli> I am pretty sure that other companies like VMWare, Huawei have their own way of doing things.
14:26:57 <mnaser> (i apologize, i have to run to something at 10:30 in a few minutes, we're a public cloud so we should be okay regardless of what choice everyone proposes, i'll catch up on the agenda and logs, really sorry everyone)
14:27:14 <tongli> the demoer can use their own credential to connect to their own clouds, but these credentials can not be shared.
14:27:18 <vkmc> shall we ask to the security council about this?
14:27:28 <vkmc> I think we could get a better understanding on how risky would be to do this on stage
14:28:04 <mnaser> (before i run, the only way i can imagine this working is if we all setup a vpn and connect all clouds together in advance)
14:28:08 <tongli> @vkmc, who is the security council? we have both jonathan and mark here. haha.
14:28:12 <jbryce> And your demo environments don't have any way to open up for just the port needed by cockroach?
14:28:14 <vkmc> I agree that the final result would be great, but it's quite risky both in the sense that the networking configuration may fail and/or we may have security vulnerabilities
14:28:24 <mnaser> that way we can run it in a secure private network which is unaccessible by anyone except the clouds
14:28:44 <mnaser> aaand it solves the inter-public/private concerns, but we need to plan it and run that vpn first
14:29:39 <vkmc> tongli, this team https://wiki.openstack.org/wiki/Security
14:29:42 <ksumit> @jbryce Pretty sure my IT org won't do it.
14:30:14 <tongli> @jbryce, I think that the issue is that private cloud runs inside company firewall, to go through that firewall, different company uses different means.
14:30:17 <markvoelker> jbryce: I'd have to get very creative. =)  I may be able to find an environment to run this on that has public addressing, or rig up a VPN+jumphost/router, but it'll be tricky given the short timeframe.
14:30:21 <skazi_> how much time do we have for the complete demo?
14:30:34 <vkmc> tongli, I trust on Mark and Jonathan criteria, but if we have a team dedicated to security that we can ask in order to get an assessment, I don't see how that would hurt
14:31:22 <topol> so what worries me is if you all have to jump through hoops to make this work how could folks in the audience view it as plausible?
14:31:23 <tongli> @vkmc, so what do we do for private cloud?
14:31:49 <vkmc> tongli, I don't follow the question
14:31:49 <sparkycollier_> We don't have a solution yet to vet with the security team, so I think that would be a good problem to have at this stage
14:31:58 <jbryce> topol: this is exactly what people in the audience are doing to run their multi-cloud workloads
14:32:16 <topol> jbryce what is the best practice?
14:32:41 * markvoelker notes that having said that, he may have just found something
14:32:53 <jbryce> alexrobinson: how much network access does cockroach need to function?
14:33:06 <tongli> should each private cloud create a new demo credential for this demo?
14:33:14 <jbryce> As in Barcelona, we won't have time to show everything from every demo
14:33:18 <spencerkimball> each cockroachdb node needs to be able to connect to every other node; that's the requirement from our side. But for the demo to make sense, we don't necessarily have to connect to every private cloud. If we could just run across 3, the point would be made. So we could use some public, some private, and limit ourselves to private clouds where there is a solution in place to allow external connections to a host/pod
14:33:38 <jbryce> So perhaps we show k8s on private clouds that can't interconnect
14:33:47 <alexrobinson> jbryce: each cockroachdb process needs to be able to reach each other cockroachdb process on a single port (which, by default, is 26257)
14:33:54 <jbryce> Then cockroach across a handful of public+ private that can interconnect
14:34:06 <jbryce> Basically what spencerkimball just said = )
14:34:33 <tongli> oh, so say we have one set of credentials/userid/password from each private cloud, and get those distributed to all clouds.
14:34:48 <markvoelker> spencerkimball: jbryce: Something that may make a difference for a lot of folks is who sends the SYN.  E.g. for many corporate envs I can establish an outbound connection from within, but having something outside establish a connection to something behind the firewall is...hard.
14:34:50 <tongli> every cloud will have to know how to use these credentials?
14:35:41 <tongli> @markvoelker, I think cockroachdb needs two way communication.
14:36:00 <tongli> in this case, all 16 clouds will have to be able to talk to each other.
14:36:03 <alexrobinson> yes, each node needs to be able to open a connection to each other node
14:36:50 <tongli> this is not a problem for IBM since we have bluebox which exposes accessible IPs.
14:37:24 <ksumit> I think the point @spencerkimball is trying to make is that not all private clouds need to be connected to the cluster. For the purpose of the demo, we only show the ones to the public that are able to connect, which can be a mix of public + those private clouds that find a solution and are able to connect to the cluster. Is my understanding correct?
14:37:24 <sparkycollier_> I think the idea of having a subset as examples (some public, some private) could work.
14:37:29 <tongli> @zhipeng, what do you think? will you run on Huawei public cloud or private cloud?
14:37:49 <jbryce> So basically all clouds can participate in the k8s deployments portion, then the k8s environments that have full access on :26257 can participate in the cockroach portion
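The :26257 requirement jbryce summarizes above could be verified with a small pre-flight probe before each cloud attempts the cockroach portion. A hypothetical sketch (the check itself is a plain TCP connect; the default port is CockroachDB's documented inter-node port):

```python
# Hypothetical pre-flight check for the :26257 access question: before
# trying to join, a cloud could verify that the first cockroachdb
# node's port is reachable from its own network.

import socket

COCKROACH_PORT = 26257  # CockroachDB's default inter-node/client port

def port_reachable(host, port=COCKROACH_PORT, timeout=3.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Note this only proves connectivity in one direction; per the discussion above, cockroachdb needs every node to reach every other node, so each participating cloud would run the probe against each peer.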
14:37:50 <skazi_> @tongli, can we add info on public/private cloud to the onstage etherpad? this would give us some idea on how many public/private clouds we'll have
14:38:09 <tongli> @skazi_, great idea.
14:38:15 <zhipeng> tongli sorry my network just went down earlier
14:38:19 <jbryce> ksumit: yes
14:38:23 <spencerkimball> @ksumit: exactly
14:38:25 <zhipeng> do you mean for nfv workload ?
14:38:27 <topol> skazi_ +++
14:38:39 <tongli> #action, identify if your cloud is public or private on the boston onstage etherpad.
14:38:46 <ksumit> @spencerkimball @jbryce I think I like this solution the most.
14:38:50 <tongli> #link https://etherpad.openstack.org/p/interop-challenge-boston-onstage
14:39:14 <tongli> we go with public clouds first, then figure out if we can add few more private clouds?
14:39:18 <tongli> will that work?
14:39:23 <sparkycollier_> I think so
14:39:23 <jbryce> #action Identify if your cloud will be able to allow access on port 26257
14:39:28 <tongli> we can start with vexxhost and IBM.
14:39:37 <skazi_> +1 for limited set of clouds for the second part of the demo
14:40:02 <jbryce> I think this makes sense. Again, we wouldn't have time to get to every single environment anyway
14:40:05 <tongli> and of course any other public clouds who participate.
14:40:05 <sparkycollier_> I can explain it from the stage
14:40:11 <topol> jbryce exactly
14:40:35 <daniela_ebert> tongli: pick me :)
14:41:05 <tongli> great. I have created two k8s clusters using our workload on vexxhost and bluebox.
14:41:25 <tongli> if you can also do the same, then send the info to spencer, that will be great
14:41:43 <tongli> that will make 3 for now.
14:41:45 <topol> Hopefully a nice balance can be made of showing off all the folks who can just do Kube (phase 1) so they get some screen time before you move to phase 2 where the Kube only folks kind of stand there with not much to do
14:41:59 <daniela_ebert> tongli: ok
14:42:00 <ksumit> @topol +1
14:42:04 <sparkycollier_> will it be possible spencerkimball to show it working next week?
14:42:11 <sparkycollier_> +1
14:42:39 <jbryce> topol: yeah. We should remember to rehearse that
14:43:29 <sparkycollier_> I can talk to tamara about how to arrange the participants so the flow works from the stage (for example we might have the ones that include cockroachdb sync on one side for second phase)
14:43:30 <topol> #action remember to identify phase 1 only folks so they get screen time as well
14:44:15 <tongli> @topol, wow, make sure everybody gets something. great leader!
14:44:31 <spencerkimball> @sparkycollier_, @alexrobinson: I believe so, though alex needs to weigh in on timing
14:45:19 <alexrobinson> tongli: a somewhat more specific question that we can take offline is how cockroachdb should learn its external IP address on the different clouds. on the ibm cluster you shared with me it doesn't appear to be configured on the machine
14:45:26 <tongli> I will be traveling next week, won't be back until 4/21 (Friday night).
14:46:20 <tongli> @alexrobinson, I suggested to develop a small app, which runs before demo starts.
14:46:25 <topol> Key issue will be is there enough time to setup the cockroach stuff after the Kube stuff or do you do that using a cooking show technique and cut over to some subset of the participants with that part ready to be viewed
14:46:28 <tongli> so that everybody knows that app IP.
14:46:33 <alexrobinson> @sparkycollier_: yup, next week shouldn't be a problem. I hope to have things working today
14:46:39 <sparkycollier_> woohoo
14:46:41 <tongli> that IP will be used by other clouds to get first node IP.
14:46:46 <tongli> and other info if we need.
14:46:49 <topol> Great!!
14:46:58 <tongli> so all this stuff is automatic.
14:47:23 <alexrobinson> I mean how can each node know its own IP address. we can follow up after the meeting though
14:47:49 <tongli> @alexrobinson, oh, that is part of the k8s service, we can talk offline.
14:48:27 <tongli> well, I do not have any other topics for today. we still have 12 minutes left.
14:48:36 <tongli> let me try to summarize what we discussed today.
14:49:06 <tongli> 1. phase 1 of the demo is to stand up kubernetes cluster on top of openstack. every cloud can participate.
14:49:46 <tongli> 2. phase 2 of the demo is to create a cockroachdb cluster on top of k8s across multiple clouds, hope all clouds can join.
14:50:34 <tongli> 3. work on the app so that the running cockroachdb cluster will have some load to show activities on the cockroachdb dashboard.
14:51:06 <tongli> 4. create a simple app for clouds to get the IP information about the first cockroachdb node so that they can join.
14:51:41 <tongli> 5. that simple app will be started before the demo so that that IP can be used in all cloud workload configuration.
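The "simple app" in items 4 and 5 could look something like the sketch below: a tiny HTTP endpoint, started before the demo, that hands every cloud the first cockroachdb node's address. This is entirely hypothetical — the real workload may distribute the information differently (e.g. via the Ansible playbooks), and the address shown is a documentation-range placeholder.

```python
# Hypothetical first-node announcement service: each cloud's workload
# GETs this endpoint and feeds host:port into its joining pods'
# configuration. The FIRST_NODE values are illustrative placeholders.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

FIRST_NODE = {"host": "203.0.113.10", "port": 26257}  # illustrative values

class FirstNodeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(FIRST_NODE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo console quiet

def make_server(port=0):
    """Bind the announcement endpoint; port=0 picks a free port."""
    return HTTPServer(("0.0.0.0", port), FirstNodeHandler)
```

Starting `make_server(8080).serve_forever()` before the demo would let every other cloud retrieve the join target with a single GET, matching item 5's "started before the demo" requirement.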
14:52:07 <tongli> let me know if I missed anything?
14:53:00 <tongli> did I lose everybody?
14:53:06 <sparkycollier_> is there any reason for the clouds that won't be in the cross-cloud phase2 piece to install cockroachdb? or should they skip that piece
14:53:07 <ksumit> @tongli Does the provisioning method that's on GitHub today change in any way? I mean I tested last Friday by following the guide based around deployment through Ansible. Does that change in any way?
14:53:38 <tongli> @ksumit, I am hoping whatever spencer creates will also be part of our workload.
14:54:05 <tongli> it will be just a new playbook. there will be changes to the workload for sure.
14:54:22 <tongli> at least I have to remove the cockroachdb cluster deployment from the current workload.
14:54:23 <ksumit> So is Spencer working on the application?
14:54:28 <tongli> since spencer is going to do that.
14:54:38 <ksumit> Ok
14:54:51 <vkmc> so we have one month exactly to work on items 2-5
14:54:52 <tongli> @ksumit, yes spencer is working on the app to generate some load.
14:54:53 <sparkycollier_> I think that answers my question
14:55:32 <tongli> I will work on an app to expose the first node IP.
14:56:08 <tongli> and add steps for workload to retrieve that IP and pass that to spencer so that his cockroachdb pods can join
14:56:11 <spencerkimball> @ksumit, the app will be a standard load generator: the "kv" in https://github.com/cockroachdb/loadgen
14:56:30 <ksumit> Got it. Thanks!
14:56:46 <spencerkimball> one question I have is whether we'll be starting multiple nodes per cloud
14:56:53 <tongli> @spencerkimball, I would like to pull that into our workload.
14:57:18 <tongli> @spencerkimball, I think we use 3 (stack size 4).
14:57:25 <tongli> so each cloud has 3 nodes to join.
14:57:29 <tongli> 3 pods actually.
14:58:03 <tongli> @sparkycollier_, you ok with what I just described?
14:58:25 <tongli> I mean the flow and the content and how we are going to accomplish that?
14:58:31 <topol> 2 mins left
14:59:12 <sparkycollier_> yes sir
14:59:18 <spencerkimball> OK, sounds good. So we'll have a ~9-15 node CockroachDB cluster depending on how many public/private clouds participate
14:59:28 <sparkycollier_> in fact I'm very excited!
14:59:30 <tongli> @sparkycollier_, that was a very long pause. haha.
14:59:39 <tongli> you got me a bit worried.
14:59:40 <topol> Awesome progress! great work everyone
14:59:44 <sparkycollier_> sorry doing too many things at once
14:59:53 <sparkycollier_> :)
14:59:59 <tongli> ok. great. thanks everybody.
15:00:14 <alexrobinson> sounds good, thanks tongli!
15:00:14 <sparkycollier_> woohoo
15:00:20 <tongli> #endmeeting