15:00:17 <serverascode> #startmeeting operators_telco_nfv
15:00:17 <openstack> Meeting started Wed Feb 22 15:00:17 2017 UTC and is due to finish in 60 minutes.  The chair is serverascode. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:20 <openstack> The meeting name has been set to 'operators_telco_nfv'
15:00:39 <serverascode> hello do we have anyone online for the telecom/nfv meeting? :)
15:00:57 <shintaro> o/
15:01:08 <serverascode> hi shintaro :)
15:01:15 <shintaro> hi serverascode
15:01:49 <serverascode> we'll give it a couple of minutes to see if anyone else shows up
15:01:50 <ad_rien_> o/
15:02:04 <serverascode> hi ad_rien_
15:02:05 <shintaro> ok
15:02:24 <ad_rien_> Hi Guys
15:02:37 <shintaro> hi
15:03:08 <serverascode> one more minute
15:03:47 <serverascode> if you have anything you want to discuss, put it on the agenda :)
15:03:55 <serverascode> #link https://etherpad.openstack.org/p/ops-telco-nfv-meeting-agenda
15:04:19 <serverascode> ok well let's get going
15:04:27 <serverascode> #topic Generic NFV Platform document
15:04:41 <serverascode> I have been adding bits and pieces and will just continue to do that this week, I have some time
15:04:51 <serverascode> #link https://etherpad.openstack.org/p/generic-nfv-platform
15:05:08 <serverascode> feel free to have a look at the table of contents and just add things that you think are important
15:05:08 <shintaro> looks great
15:05:28 <serverascode> I'm just going to keep typing and then come back and edit later
15:05:30 <shintaro> where should I start if I want to add stuff
15:05:44 <shintaro> table of contents?
15:06:04 <serverascode> add it to the table of contents, then if you want to write more, add it into the same area below
15:06:22 <serverascode> eventually we will import it into the openstack git/gerrit system, once we have a first draft
15:06:56 <serverascode> I'm trying to keep the format like the reStructuredText that most OpenStack projects use
15:06:58 <ad_rien_> maybe if we want to progress we should try to identify milestones.
15:07:13 <ad_rien_> (i.e. at least for me I need small but concrete tasks to move forward)
15:07:17 <serverascode> probably a good idea :)
15:07:32 <serverascode> I was just thinking a "first draft" but that is a big milestone
15:07:38 <serverascode> any thoughts on what we could set as milestones?
15:08:08 <shintaro> agree on the use-case?
15:08:24 <serverascode> yeah that is a big one
15:08:29 <shintaro> so we can have the same baseline
15:09:26 <serverascode> Ok I put a "milestones" section in the document and added your use case suggestion
15:09:51 <shintaro> like NFV as in ETSI NFV (vEPC, vCPE), or as in datacenter NFV (i.e. FW, LB, VPN)
15:10:07 <serverascode> I think more like ETSI NFV
15:11:05 <shintaro> ok that makes the scope clearer for me
15:11:56 <serverascode> the problem is that some of those use cases are pretty complicated
15:12:10 <serverascode> I would like to pick a very simple use case if possible
15:12:33 <shintaro> yes and focus on the OpenStack side
15:12:43 <serverascode> exactly
15:13:33 <ad_rien_> So which one?
15:13:48 <serverascode> I'm really not sure
15:13:50 <serverascode> :)
15:13:57 <ad_rien_> I'm not familiar with NFV use-cases (I'm just discussing CloudRAN use-cases with Orange)
15:14:17 <ad_rien_> But indeed, it would be nice to have one use-case that can drive our studies.
15:14:17 <serverascode> something scale out
15:14:30 <serverascode> one example we had at work was a scale out ip address manager maybe
15:14:35 <serverascode> like infoblox kind of thing
15:14:44 <serverascode> scale up and down with demand
15:15:01 <ad_rien_> should you consider the location of the VMs/containers?
15:15:15 <serverascode> I would imagine so if there is more than one "cloud"
15:15:32 <ad_rien_> I think we should find a use-case where location (i.e. to address latency considerations for instance) is important
15:15:50 <serverascode> but that is also part of the problem b/c I don't necessarily want to get into multi-site/cloud etc in this first draft
15:16:10 <serverascode> maybe I am trying to oversimplify
15:17:08 <ad_rien_> if you have only one central cloud, it looks rather simple, doesn't it?
15:17:23 <serverascode> yeah
15:17:24 <ad_rien_> i.e. I do not see why OpenStack as it is now would not be able to satisfy all requirements
15:17:50 <ad_rien_> i.e. starting a VM in a central cloud to perform any kind of workload is something that is rather easy right now ;)
15:18:20 <serverascode> do you mean that is a good place to start for this document, or that it is too simple?
15:18:39 <ad_rien_> I think it is too simple
15:18:54 <serverascode> ok
15:18:57 <ad_rien_> i.e. what kind of obstacles can prevent such a use-case ?
15:19:13 <ad_rien_> sorry, let me reword: I do not see what is complex in such a ''simple'' use-case
15:19:24 <ad_rien_> where you have to start one VM to virtualize network functionalities
15:19:48 <serverascode> perhaps the only additional thing I was thinking was automated scale-up/down with heat
15:19:52 <ad_rien_> maybe I am missing a point (because I'm not familiar) but it looks like there is no challenge
15:19:57 <ad_rien_> yes
15:20:04 <ad_rien_> so something that is rather well done now
15:20:09 <serverascode> I think part of the problem is that even that is quite challenging for a lot of people
15:20:31 <serverascode> all the functionality is there, but lots of times organizations get stuck trying to do too much
15:20:44 <ad_rien_> it is a feedback loop (i.e. a MAPE-K loop).
15:20:45 <serverascode> and ultimately fail with OpenStack and NFV
15:20:57 <ad_rien_> ok maybe I miss something
15:21:15 <ad_rien_> you have an application, you monitor it, and you scale in/out according to your requirements?
15:21:22 <ad_rien_> Heat can help you do that, can't it?
15:21:27 <serverascode> right, yup it can
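
A minimal sketch of the Heat-driven scale-out/in loop discussed above, assuming the openstacksdk cloud layer and purely illustrative names (image, flavor, network, stack name); the telemetry alarms that would actually trigger the two scaling policies are left out for brevity.

# Minimal sketch of Heat-based autoscaling for a VNF, assuming openstacksdk
# and illustrative resource names; the alarms (e.g. Aodh) that would drive
# the two scaling policies are omitted.
import openstack

TEMPLATE = """
heat_template_version: 2016-10-14
resources:
  vnf_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 5
      resource:
        type: OS::Nova::Server
        properties:
          image: vnf-image      # illustrative image name
          flavor: m1.small      # illustrative flavor
          networks: [{network: private}]
  scale_out:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: vnf_group}
      scaling_adjustment: 1
      cooldown: 60
  scale_in:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: vnf_group}
      scaling_adjustment: -1
      cooldown: 60
"""

def launch(cloud_name="mycloud"):
    # Write the template to disk and create the stack; roughly equivalent to
    # "openstack stack create -t autoscale.yaml nfv-demo" on the CLI.
    with open("autoscale.yaml", "w") as f:
        f.write(TEMPLATE)
    conn = openstack.connect(cloud=cloud_name)  # assumes a clouds.yaml entry
    return conn.create_stack("nfv-demo", template_file="autoscale.yaml", wait=True)

if __name__ == "__main__":
    launch()
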
15:21:28 <shintaro> so there are a large deployment team and a multi-cloud team already, maybe we should focus on something different
15:22:08 <serverascode> or perhaps tie them together
15:22:21 <serverascode> I was thinking this group could do a lot in terms of bringing everyone together
15:22:29 <shintaro> that is one good point
15:22:31 <ad_rien_> So you believe there  are still challenges to address with a cloud that is deployed in only one location?
15:22:59 <serverascode> ad_rien_ I do, I see it at work with single NFV style clouds
15:23:03 <serverascode> people have tons of issues even doing that
15:23:10 <ad_rien_> such as?
15:23:24 <shintaro> NFV apps require stricter SLAs and more performance from the infra than cloud apps
15:23:34 <serverascode> such as thinking openstack is vmware with instance HA
15:23:35 <ad_rien_> (apart from being trained to OpenStack)
15:23:43 <serverascode> and getting paralyzed with requirements
15:23:49 <serverascode> not understanding what Neutron is
15:23:52 <shintaro> which is not a cloud-like approach
15:23:58 <ad_rien_> ok sorry
15:24:02 <serverascode> and also trying to implement SDN and failing :)
15:24:14 <serverascode> lots of those more organizational issues
15:24:19 <serverascode> less technical
15:24:26 <ad_rien_> yes but is it due to technical/scientific barriers?
15:24:31 <ad_rien_> yes exactly what I mean
15:24:40 <serverascode> right
15:25:01 <serverascode> people and culture issues, lots of misunderstandings about OpenStack and scale-out/elastic workloads
15:25:04 <ad_rien_> I could be wrong (i.e. in the sense that this group can support people by providing guidelines, … to run NFV applications in a central cloud)
15:25:45 <serverascode> maybe what I will do is go through some of the existing NFV use cases the working groups have put forward and list them in the document, and we can try to pick one
15:25:54 <serverascode> at the next meeting
15:25:55 <ad_rien_> but from my side (i.e. Inria) we are interested in identifying the barriers/obstacles that prevent OpenStack from being the prevalent solution for operating a geo-distributed cloud.
15:26:22 <serverascode> right and that is certainly not a solved problem, lots of work to do there
15:26:33 <ad_rien_> such a geo-distributed cloud is mandatory for several use-cases, including some NFV ones.
15:26:44 <shintaro> agree
15:27:16 <serverascode> ok, let me take an action to review some existing use cases and we can discuss again next meeting
15:27:19 <serverascode> sound good?
15:27:22 <ad_rien_> ok
15:27:23 <ad_rien_> for me
15:27:29 <shintaro> +1
15:27:39 <serverascode> #action serverascode review existing use cases and list in document for discussion next meeting
15:27:55 <serverascode> #topic Open Discussion
15:28:09 <serverascode> do either of you have another topic you'd like to discuss?
15:28:42 <shintaro> anything to bring up for the Ops Meetup? if there are people in the session
15:28:44 * ad_rien_ is looking for a link
15:29:20 <ad_rien_> ?
15:29:40 <serverascode> I don't have anything right now
15:29:47 <ad_rien_> one short piece of info from my side: the Meghdwar WG has been stopped: https://wiki.openstack.org/wiki/Meghdwar
15:29:54 <serverascode> #link https://etherpad.openstack.org/p/MIL-ops-meetup
15:29:58 <ad_rien_> they will join the Massively Distributed WG effort.
15:30:13 <serverascode> ad_rien_ there is a operators meetup in Milan
15:30:24 <ad_rien_> yes
15:30:44 <ad_rien_> please go ahead (I just wanted to share the information regarding Meghdwar)
15:30:59 <shintaro> is this like fog-computing?
15:31:29 <ad_rien_> they tried to propose a reference architecture for fog computing yes
15:31:56 <ad_rien_> but they did not find enough contributors, if I understood correctly
15:32:11 <serverascode> ok, I don't think I'd heard of that project before
15:32:49 <shintaro> me neither :)
15:32:59 <shintaro> and don't even know how to pronounce it
15:33:01 <serverascode> but it's good they went through the effort to try, and then officially closed
15:33:10 <ad_rien_> Actually, several initiatives try to address the same challenge but under a different name…
15:33:30 <ad_rien_> I spent many hours trying to identify all of them, but it seems to be a problem that will never end
15:33:54 <serverascode> like too many projects doing the same thing?
15:33:59 <ad_rien_> yes
15:34:20 <ad_rien_> It is hard to federate all the efforts and even harder to prevent reinventing the wheel each time
15:35:00 <serverascode> ok, any other stuff to mention?
15:35:18 <serverascode> if not I guess we can close off the meeting early
15:35:26 <shintaro> not from me
15:35:34 <ad_rien_> From the Massively Distributed WG side, we are starting to perform WAN-wide evaluations of OpenStack
15:35:59 <ad_rien_> any values or orders of magnitude in terms of latency/BW for NFV use-cases would be highly appreciated
15:36:09 <ad_rien_> :-)
15:36:31 <serverascode> so what does WAN-wide mean? like connecting two OpenStack cloud networks? like EVPN and such?
15:36:31 <shintaro> sounds interesting
15:36:41 <ad_rien_> no
15:36:55 <ad_rien_> like a single OpenStack instance that operates remote compute nodes.
15:37:00 <serverascode> ah ok
15:37:01 <ad_rien_> (basic scenarios)
15:37:13 <ad_rien_> then we will investigate more complex scenarios such as multi-regions
15:37:24 <ad_rien_> and finally we hope to be able to investigate Cell V2 scenarios
15:37:43 <shintaro> RabbitMQ in each AZ?
15:37:53 <ad_rien_> we want to identify technical/scientific barriers that will lead us to revise OpenStack internals
15:38:07 <ad_rien_> shintaro: this is indeed one question ;)
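
To make the "remote compute nodes over the WAN" concern concrete, here is a small sketch assuming the 2017-era oslo.messaging RPC API and an illustrative rabbit:// URL; every RPC round-trip like this from a remote compute node crosses the WAN to the central broker and back, which is the kind of latency the WG wants to quantify.

# Minimal sketch, assuming oslo.messaging; the broker URL, topic and server
# names are illustrative. In the "single OpenStack, remote computes" scenario
# every such RPC traverses the WAN to the central RabbitMQ and back.
import oslo_messaging as messaging
from oslo_config import cfg

conf = cfg.CONF
transport = messaging.get_transport(
    conf, url="rabbit://guest:guest@central-site.example:5672/")
target = messaging.Target(topic="demo-topic", server="remote-compute-1")
client = messaging.RPCClient(transport, target)
# With a matching RPC server running on the remote node, a blocking call
# would look like:
# result = client.call({}, "ping", payload="hello")
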
15:38:11 <serverascode> cool, are there some central documents you could link to?
15:38:25 <ad_rien_> for the moment we are investigating RabbitMQ but folks from Red Hat are pushing Qpid
15:38:35 <ad_rien_> and we should also evaluate ZeroMQ
15:38:39 <serverascode> ah RedHat :)
15:38:54 <ad_rien_> http://enos.readthedocs.io/en/latest/
15:39:04 <ad_rien_> http://enos.readthedocs.io/en/latest/network-emulation/index.html
15:39:24 <ad_rien_> for further information please refer to the massively distributed WG etherpad
15:39:32 <serverascode> ok
15:39:53 <ad_rien_> #link https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017 Massively Distributed WG etherpad
15:39:56 <shintaro> we upstreamed AZ support for Neutron for a similar use-case
15:40:02 <ad_rien_> that's all from my side thanks.
15:40:13 <ad_rien_> shintaro:  ?
15:40:58 <shintaro> Neutron resources did not have an AZ concept, so when you want to use distributed routers, you cannot specify an AZ
15:41:05 <ad_rien_> ok
15:41:28 <ad_rien_> that looks interesting.
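
A small sketch of the Neutron availability-zone support shintaro mentions, assuming openstacksdk and illustrative cloud/zone names; availability_zone_hints is what lets you ask for a router (and its L3 agents) to be scheduled in particular zones.

# Minimal sketch, assuming openstacksdk; the cloud and AZ names are illustrative.
import openstack

conn = openstack.connect(cloud="mycloud")  # assumes a clouds.yaml entry
router = conn.network.create_router(
    name="edge-router",
    availability_zone_hints=["az-east", "az-west"],  # request per-zone placement
)
# availability_zones is populated once the L3 agents are actually scheduled
print(router.availability_zone_hints, router.availability_zones)
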
15:41:39 <serverascode> ok well thank you both very much for attending the meeting :)
15:41:45 <serverascode> we will chat in a couple of weeks
15:41:47 <ad_rien_> thanks for chairing
15:41:48 <ad_rien_> ok
15:41:49 <shintaro> thank you
15:41:52 <ad_rien_> ++
15:41:54 <serverascode> #endmeeting