14:01:49 <tongli> #startmeeting interop_challenge
14:01:50 <openstack> Meeting started Wed Mar 15 14:01:49 2017 UTC and is due to finish in 60 minutes.  The chair is tongli. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:53 <openstack> The meeting name has been set to 'interop_challenge'
14:02:11 <zhipeng> i thought the meeting was one hour later
14:02:13 <markvoelker_> o/
14:02:14 <skazi> o/
14:02:24 <tongli> Hi, everyone, who is here for the interop challenge meeting today?
14:02:28 <feima> Hi everyone, I am Fei Ma, from China OpenSource Cloud Alliance for industry (OSCAR)
14:02:39 <tongli> welcome @feima.
14:02:40 <feima> This is my first time here.
14:02:51 <feima> thank you, tongli
14:03:14 <tongli> Brad has a conflict today , he asked me to chair this meeting.
14:03:24 <feima> ok
14:03:31 <Thor_> Hi everyone, this is Thor from inwinSTACK and this is also my first time to join this meeting
14:03:45 <tongli> welcome @Thor_
14:03:49 * topol brad about to hop on a plane. THANKS tongli for leading today!!
14:04:06 <tongli> Yes, I saw your email, glad that you are here.
14:04:42 <tongli> @topol, thanks
14:04:47 <Thor_> good to see you and everybody too
14:05:13 <tongli> ok, we are 5 minutes past the hour, let's start.
14:05:27 <tongli> #topic Review last meeting action items
14:05:40 <tongli> http://eavesdrop.openstack.org/meetings/interop_challenge/2017/interop_challenge.2017-03-08-14.00.html
14:06:16 <tongli> we had a short meeting last time. the only action was to invite Alex from the coreos community and Mark to participate in this meeting.
14:06:46 <tongli> I had a few email exchanges with Alex from coreos.
14:06:50 <tongli> on the k8s workload.
14:07:13 <tongli> he suggested we use a tool that coreos uses to stand up a k8s cluster.
14:07:46 <tongli> The tool is called Tectonic.
14:08:40 <Thor_> https://coreos.com/tectonic/
14:09:01 <tongli> The issue is that Tectonic is a tool sold by coreos.com, it is not open source software.
14:09:11 <tongli> however, 10 nodes or less is free.
14:09:45 <tongli> so I assume we are not interested in switching to that tool.
14:09:57 <tongli> any one object to that, please speak up.
14:10:25 <markvoelker_> I suppose we *could*...given the time constraint we're probably not going to spin up more than 10 nodes anyway...but I'm not sure there's any real impetus to do so.
14:11:02 <linux_> Well, using it might be a strong endorsement of this tool -- it's not clear to me whether we would want to do this, especially if it's a commercial tool.
14:11:13 <tongli> @markvoelker, I have made a lot of improvements to our ansible based k8s workload.
14:11:31 <tongli> provisioning VMs on openstack is now done in parallel.
14:11:40 <tongli> that is where we spent most of the time,
14:12:16 <tongli> with that improvement, my 8-node k8s cluster's entire run finishes within 8 minutes.
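The speed-up tongli describes comes from fanning out the VM builds instead of waiting on them one at a time. The real workload does this in Ansible; the sketch below is a plain-Python illustration of the same fan-out idea, with a made-up `provision_vm` stand-in rather than any actual OpenStack call:

```python
from concurrent.futures import ThreadPoolExecutor

def provision_vm(name):
    # Stand-in for a "create server and wait until ACTIVE" step;
    # the real workload drives this through Ansible, not this function.
    return f"{name}: ACTIVE"

names = [f"k8s-node-{i}" for i in range(8)]

# Launching all 8 builds at once instead of sequentially is the change
# that brought the whole run down to roughly 8 minutes.
with ThreadPoolExecutor(max_workers=len(names)) as pool:
    results = list(pool.map(provision_vm, names))
```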
14:12:37 <vkmc> I wouldn't use a tool that is not open source for an effort of an open source community
14:12:47 <garloff> tongli: We have like 10 -- 12 mins again for the showcase?
14:13:14 <tongli> I think we may go 15 minutes, but that will be the most time we have.
14:13:28 <garloff> vkmc: exactly, that would be my default approach as well to this question
14:13:35 <markvoelker_> tongli: fair point.  Even so though, sentiment seems to be there's not really much incentive to use Tectonic here.
14:13:43 <tongli> I think we may go with 8 nodes since Mark hinted that 3 nodes is too small.
14:14:44 <tongli> I do not think Tectonic can stand up a k8s cluster on ubuntu, but I am not sure about that.
14:15:12 <tongli> I will give it some time next week to see how it works. and will report back.
14:15:35 <tongli> for now, I think we are sticking with Ansible scripts, everyone agrees with that?
14:15:46 <markvoelker_> ++
14:16:05 <garloff> tongli: sure, that's a proven approach
14:16:06 <tongli> anyone else?
14:16:55 <tongli> if no, I think #decision, we stick with Ansible scripts for the k8s workload.
14:17:53 <tongli> this week I am in Milan for openstack operator midcycle meetup.
14:18:10 <tongli> there are quite a few sessions on containers.
14:18:49 <vkmc> ++
14:18:58 <tongli> seems to me that OpenStack operators have very little experience running k8s on openstack.
14:19:32 <tongli> this area is very hot, but not many people have actually been running k8s to serve their customers.
14:19:50 <garloff> tongli: Containers are a challenge for operators b/c of a new (and sometimes not well defined) share of responsibility b/w Devs and Ops
14:19:52 <tongli> but there are vendors actively trying to do things with k8s
14:20:07 <tongli> @garloff, agreed.
14:20:15 <tongli> still a lot of learnings to do.
14:20:24 <tongli> from developers to operators.
14:21:04 <tongli> so this group IMHO is doing something quite exciting. I mean both k8s workload and NFV workload.
14:21:34 <tongli> ok, let's move on to our Agenda.
14:21:50 <tongli> https://review.openstack.org/#/q/status:open+project:openstack/interop-workloads,n,z
14:22:01 <tongli> #topic review patch sets
14:22:48 <tongli> we have 4; one is very minor, on the readme doc at the root.
14:23:15 <tongli> someone please take a quick look https://review.openstack.org/#/c/440027/
14:23:22 <tongli> it is like one word change.
14:24:26 <tongli> then we have two nfv patch sets. we need to review these patches.
14:24:49 <tongli> and of course the k8s workload, sits at patch #29
14:25:17 <tongli> it is getting very big now. 1712 lines.
14:26:02 <tongli> but since we are deploying k8s dashboard, dns, and cockroachdb pods, we need to generate k8s spec yaml files.
14:26:34 <tongli> these files are normally pretty big; we have 3 files of this type, each around 500 lines.
14:26:59 <tongli> so we do not have many lines of code if we take away these spec files and template files.
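Spec files that large are typically rendered from templates rather than written by hand. A minimal sketch of that idea using Python's stdlib `string.Template` (the field names and values below are invented for illustration; the workload's actual templates are the ~500-line YAML files mentioned above):

```python
from string import Template

# Hypothetical fragment of a k8s deployment spec template; only the
# per-run values (app name, replica count) are substituted in.
spec = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $app_name
spec:
  replicas: $replicas
""")

rendered = spec.substitute(app_name="cockroachdb", replicas=3)
```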
14:27:17 <tongli> please review the code and run the workload.
14:27:26 <zhipeng> I think helen could give a update about the nfv workload ?
14:27:32 <tongli> it will be really nice if we can get it merged soon.
14:28:00 <tongli> @zhipeng, sure, let me finish the k8s workload.
14:28:08 <zhipeng> tongli okey :)
14:28:18 <tongli> there was one review comment regarding the license headers .
14:28:34 <tongli> my take is that we already have apache2 license file at the root of the project.
14:28:47 <tongli> do we need license header on each individual file?
14:29:18 <tongli> someone suggested adding it to the README.rst file, but most openstack projects do not place license headers on the README.rst file.
14:29:40 <tongli> so I do not think we should do that. what do you guys think?
14:29:45 <dmellado> o/
14:30:02 <tongli> @dmellado, good timing.
14:30:11 <dmellado> hey tongli, I don't really have a strong opinion on that, so disregard that change if you want to
14:30:16 <tongli> we started talking about the license thing.
14:30:26 <dmellado> that said, luzC did you want to add that license to all yaml files?
14:30:31 <dmellado> IMHO it could be a li'l overkill
14:30:46 <tongli> I do not think we have LuzC today.
14:30:52 <markvoelker_> tongli: https://wiki.openstack.org/wiki/LegalIssuesFAQ#Copyright_Headers
14:31:32 <dmellado> markvoelker_: tongli I guess that we're just fine with that
14:31:36 <markvoelker_> "In general, ..... Always keep the license in the header"
14:32:04 <markvoelker_> So probably best practice to do so (most other projects do this too IIRC)
14:32:28 <markvoelker_> See also: https://docs.openstack.org/developer/hacking/#openstack-licensing
14:32:49 <dmellado> hey markvoelker_ yep
14:32:53 <markvoelker_> "All source files should have the following header"
14:32:55 <dmellado> that's what luzC and we were speaking about
14:33:07 <dmellado> so TODO: add Apache license to source files ;)
14:33:55 <dmellado> would that be fine for you tongli?
14:34:02 <dmellado> not that it is urgent in any case, but we should address it
14:34:21 <tongli> @dmellado,@markvoelker_, can someone go through this and add copyright headers to source files?
14:34:50 <markvoelker_> Sure, I'll take a look this week
14:35:04 <dmellado> tongli: sure, I'll try to automate that as much as possible, so I'll be sending some patches
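The header text itself is the standard one quoted in the OpenStack hacking guide; the automation dmellado mentions could be sketched roughly as below (the `ensure_header` helper and its file-handling are hypothetical, not the actual patch):

```python
# Standard Apache 2.0 header from the OpenStack hacking guide.
APACHE_HEADER = '''\
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''

def ensure_header(source: str) -> str:
    # Prepend the header only when the file does not already carry it,
    # so re-running the script over the tree is safe.
    if "Licensed under the Apache License" in source:
        return source
    return APACHE_HEADER + "\n" + source

patched = ensure_header("import os\n")
```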
14:35:32 <tongli> great. I am trying to let our guys review the patch and get it through, then we fix the copyright thing.
14:35:48 <dmellado> +1
14:35:51 <tongli> otherwise, this patch set gets even bigger, and harder to review.
14:35:57 <garloff> +1
14:36:27 <tongli> please review https://review.openstack.org/#/c/433874/
14:36:32 <markvoelker_> works for me
14:37:12 <tongli> please review and let's get this going. it is not easy for folks wanting to run the workload while it is still a patch.
14:37:37 <tongli> ok. please review all the four patch sets we have outstanding.
14:38:16 <tongli> #action mark will look into the copyright headers and submit a patch to fix this requirement.
14:38:53 <tongli> @zhipeng, @hellenyao, can you update on the NFV workload?
14:39:15 <HelenYao> Sure. The deployment for OPEN-O and clearwater is done. I am working on test scripts to verify the deployment and it will be done shortly. It supports both Keystone v2 and v3. I would suggest starting to run the workload on clouds to explore potential limitations
14:39:25 <HelenYao> I will also fix the copyright header
14:39:57 <zhipeng> so in short we are able to deploy the nfv workload using ansible now
14:40:31 <zhipeng> we are working on the functional testing part
14:40:35 <tongli> @HelenYao, I looked at the patch and just wonder if it is at all possible to use the configuration file instead of env variables.
14:41:11 <HelenYao> tongli: I saw ur comment. I will try to make use of config file
14:41:21 <tongli> all other workloads we developed use configuration files; that is the place you have your cloud info and any variables.
14:41:47 <tongli> when you run it, you do not really care if you are in a different console or if you have set the variables.
14:42:02 <tongli> it also sets up an example to others.
14:42:16 <tongli> if it is at all possible, I would like to have it.
14:42:17 <garloff> except for passwords, which you might not normally have in a config file ...
14:42:36 <tongli> @garloff, except passwords, well those should be specified as a password field.
14:42:50 <tongli> if you really really want to put in that file, you can, if not, command line.
14:43:08 <garloff> good, just making sure ...
14:43:19 <HelenYao> garloff, tongli: got it. It makes sense to make the code work in a consistent way
14:43:19 <zhipeng> and another question for the interop-challenge team is that
14:43:28 <tongli> the problem with command line is that when we do demos, I would rather put the password in the conf file.
14:43:28 <zhipeng> if we were to demo it
14:43:35 <tongli> since everyone will see my command line.
14:44:02 <tongli> the idea is that you can put either in the conf file or command line.
14:44:04 <garloff> tongli: I would pass $OS_PASSWORD there, noone will see it :-)
14:44:25 <tongli> correct, so we should have options in any case.
14:44:36 <tongli> just little thing.
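The convention the group settles on here, config file first, environment variable fallback for the password, can be sketched with the stdlib `configparser` (the section and key names below are invented for illustration, not the workloads' actual config format):

```python
import configparser
import os

# Hypothetical per-cloud config; the password is deliberately left out
# of the file, matching garloff's concern.
cfg = configparser.ConfigParser()
cfg.read_string("""
[cloud]
auth_url = http://keystone.example.com:5000/v3
username = demo
""")

# Prefer a password from the config file, fall back to $OS_PASSWORD in
# the environment so a live demo never shows it on the command line.
password = cfg["cloud"].get("password") or os.environ.get("OS_PASSWORD", "")
username = cfg["cloud"]["username"]
```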
14:44:46 <zhipeng> should we demo the interoperability between different OpenStack versions that the NFV workload deploys on
14:45:16 <tongli> @HelenYao @zhipenghuang, I will try the workload next week and provide feedback.
14:45:39 <HelenYao> tongli: great
14:45:40 <dmellado> we could as well support os-client-config
14:45:43 <dmellado> shouldn't be that much work
14:45:45 <dmellado> https://docs.openstack.org/developer/os-client-config/
14:45:55 <dmellado> but yeah, we do have several options ;)
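For context, os-client-config reads named cloud credentials from a clouds.yaml file. The rough shape of one entry, written as a plain Python dict for illustration (the cloud name and values here are made up, and the real lookup is done by the os-client-config library, not this helper):

```python
# Approximate structure of a clouds.yaml entry; the real file is YAML
# parsed by os-client-config.
clouds = {
    "clouds": {
        "mycloud": {
            "auth": {
                "auth_url": "http://keystone.example.com:5000/v3",
                "username": "demo",
                "project_name": "demo",
            },
            "region_name": "RegionOne",
        }
    }
}

def get_cloud(name):
    # Analogous to asking the library for one named cloud's settings.
    return clouds["clouds"][name]

auth = get_cloud("mycloud")["auth"]
```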
14:46:44 <zhipeng> do we have an answer for my previous question ?
14:47:00 <zhipeng> tongli okey wait for you tryouts
14:47:04 <tongli> @zhipeng, in terms of different releases of openstack, in the past we have made sure that the workload runs on the 3 most recent releases.
14:47:32 <zhipeng> tongli the "make sure" is part of the interop testing ?
14:47:38 <zhipeng> or a precondition ?
14:47:50 <dmellado> zhipeng: basically it all depends on the shade library
14:47:53 <dmellado> and also on tempest
14:47:54 <tongli> I am not sure if NFV requires any new features from more recent releases.
14:48:52 <tongli> ok, we need to leave some time for @feima to update
14:48:52 <dmellado> HelenYao: which version is the cloud that you're running this against?
14:49:15 <HelenYao> mitaka and newton
14:49:21 <dmellado> ack, thanks
14:49:22 <HelenYao> both are tested
14:49:28 <tongli> I have mitaka and newton, I will run against these two.
14:49:58 <tongli> any other things before we give the floor to @feima?
14:50:12 <feima> thank you, tongli. i am alive:-)
14:50:37 <tongli> ok. it is all yours now, please proceed. @feima.
14:50:47 <feima> ok
14:51:11 <tongli> #topic Mafei from Interop Challenge China chapter
14:51:11 <feima> China OpenSource Cloud Alliance for industry (OSCAR) will hold the Global Cloud Computing summit in Beijing April 19 to 20, and we have invited Mike Perez and Tong Li to attend the summit.
14:51:33 <feima> In the summit, we will have an InterOp show similar to Barcelona.
14:51:47 <feima> Under the guidance of OSCAR, eight Chinese companies have finished the LAMP and dockerswarm testing, and they have submitted some bugs and patches to the community.
14:51:58 <feima> We are testing the k8s workload now, at the same time, we will encourage everyone to go into the community.
14:52:29 <feima> We plan to choose one workload to show; what do you think?
14:52:38 <feima> LAMP, dockerswarm or k8s?
14:52:50 <feima> that's all
14:53:02 <dmellado> feima: cool, looking forward to meeting you at Boston ;) IIRC, the chosen workload for the keynote would be k8s
14:53:05 <dmellado> wasn't it, tongli ?
14:53:24 <tongli> considering k8s has not been shown by the foundation.
14:53:44 <tongli> @dmellado, yes, foundation has committed to show k8s workload.
14:53:47 <feima> i will go to Boston,dmellado
14:54:20 <tongli> and the foundation has very specific requirements on how the workload should be done, such as what image to use and what apps to show
14:54:46 <tongli> at present, it will be k8s on coreos image to show cockroachdb cluster with cockroachdb UI.
14:55:21 <tongli> so it may not be very nice to steal foundation's thunder.
14:55:23 <feima> I wonder if it is good to show a same demo as the Boston summit
14:55:52 <tongli> showing off the lampstack may be good, especially since the lampstack workload shown in Barcelona was a plain wordpress site.
14:56:08 <tongli> the latest lampstack workload shows OpenStack superuser site.
14:56:21 <tongli> all the themes and artifacts came from foundation.
14:56:32 <feima> and, there is only 1 month left before the Beijing open source summit. I don't know if the time is enough to get K8s ready.
14:56:34 <tongli> so the content will be a bit different compared to the one in Barcelona.
14:57:13 <tongli> @feima, good point,  k8s workload is not complete, but lampstack is.
14:57:14 <zhipeng> lampstack would be a good choice
* luzC enters and sits at the back
14:57:20 <feima> Do you think it's better to show a China-specific website rather than the OpenStack superuser site? @tongli
14:57:40 <dmellado> tongli: that said, I don't think that we'll have time to show more than 1 demo
14:57:50 <dmellado> just recall at BCN, we were kinda short on time ;)
14:57:53 <tongli> @feima, if we want to do that, then we need to deploy a different wordpress app on the lampstack.
14:57:55 <feima> i agree @zhipeng
14:58:10 <feima> yes,@tongli
14:58:23 <tongli> @dmellado, yes, just one workload. not two or more.
14:58:46 <dmellado> feima: zhipeng the idea is that, way before the summit the k8s workload would be totally operational
14:58:47 <dmellado> ;)
14:59:06 <garloff> we need to ensure this, yeah
14:59:12 <feima> so we show lampstack in Beijing, but need a different app?
14:59:21 <tongli> and many companies have completed successful runs many times.
14:59:47 <tongli> that is not the current state of k8s workload.
15:00:11 <tongli> guys, we ran out of time.
15:00:12 <feima> ok
15:00:24 <feima> thank you very much
15:00:29 <feima> all of you
15:00:29 <ricolin> thx:)
15:00:30 <tongli> #decision, global summit uses lampstack workload.
15:00:37 <feima> ok
15:01:14 <dmellado> tongli: didn't we just agree on k8s
15:01:21 <dmellado> or are you speaking about the beijing one
15:01:22 <dmellado> ?
15:01:23 <tongli> great. thanks everyone. sorry for cutting this short but we ran out of time.
15:01:42 <tongli> @dmellado, that decision is for april beijing summit.
15:01:49 <dmellado> tongli: ack then ;)
15:01:51 <tongli> for boston summit, we run k8s workload.
15:02:22 <tongli> that is also part of the reason why beijing summit should not use the k8s workload since it has not been used by the foundation.
15:03:12 <tongli> all right, guys, thanks so much and please review patches and run the workload.
15:03:29 <ricolin> :)
15:03:35 <tongli> if your company has not signed up for the boston on-stage keynote demo, you should do that before the spots run out.
15:03:35 <feima> thx
15:03:54 <tongli> #endmeeting