15:00:17 #startmeeting operators_telco_nfv
15:00:17 Meeting started Wed Feb 22 15:00:17 2017 UTC and is due to finish in 60 minutes. The chair is serverascode. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:20 The meeting name has been set to 'operators_telco_nfv'
15:00:39 hello, do we have anyone online for the telecom/nfv meeting? :)
15:00:57 o/
15:01:08 hi shintaro :)
15:01:15 hi serverascode
15:01:49 we'll give it a couple of minutes to see if anyone else shows up
15:01:50 o/
15:02:04 hi ad_rien_
15:02:05 ok
15:02:24 Hi guys
15:02:37 hi
15:03:08 one more minute
15:03:47 if you have anything you want to discuss, put it on the agenda :)
15:03:55 #link https://etherpad.openstack.org/p/ops-telco-nfv-meeting-agenda
15:04:19 ok well let's get going
15:04:27 #topic Generic NFV Platform document
15:04:41 I have been adding bits and pieces and will just continue to do that this week, I have some time
15:04:51 #link https://etherpad.openstack.org/p/generic-nfv-platform
15:05:08 feel free to have a look at the table of contents and just add things that you think are important
15:05:08 looks great
15:05:28 I'm just going to keep typing and then come back and edit later
15:05:30 where should I start if I want to add stuff?
15:05:44 the table of contents?
15:06:04 add it to the table of contents, then if you want to write more, add it into the same area below
15:06:22 eventually we will import it into the OpenStack git/gerrit system, once we have a first draft
15:06:56 I'm trying to keep the format like the reStructuredText that most OpenStack projects use
15:06:58 maybe if we want to make progress we should try to identify milestones
15:07:13 (i.e. at least for me I need small but concrete tasks to move forward)
15:07:17 probably a good idea :)
15:07:32 I was just thinking of a "first draft" but that is a big milestone
15:07:38 any thoughts on what we could set as milestones?
15:08:08 agreeing on the use case?
15:08:24 yeah that is a big one
15:08:29 so we can have the same baseline
15:09:26 ok, I put a "milestones" section in the document and added your use case suggestion
15:09:51 like NFV as in ETSI NFV (vEPC, vCPE), or as in datacenter NFV (i.e. FW, LB, VPN)?
15:10:07 I think more like ETSI NFV
15:11:05 ok, that makes the scope clearer to me
15:11:56 the problem is that some of those use cases are pretty complicated
15:12:10 I would like to pick a very simple use case if possible
15:12:33 yes, and focus on the OpenStack side
15:12:43 exactly
15:13:33 so which one?
15:13:48 I'm really not sure
15:13:50 :)
15:13:57 I'm not familiar with NFV use cases (I'm just discussing CloudRAN use cases with Orange)
15:14:17 but indeed, it would be nice to have one use case that can drive our studies
15:14:17 something scale-out
15:14:30 one example we had at work was a scale-out IP address manager maybe
15:14:35 like an Infoblox kind of thing
15:14:44 scaling up and down with demand
15:15:01 should you consider the locations of the VMs/containers?
15:15:15 I would imagine so, if there is more than one "cloud"
15:15:32 I think we should find a use case where location (i.e. to address latency considerations, for instance) is important
15:15:50 but that is also part of the problem, b/c I don't necessarily want to get into multi-site/multi-cloud etc. in this first draft
15:16:10 maybe I am trying to oversimplify
15:17:08 if you have only one central cloud, it looks rather simple, doesn't it?
15:17:23 yeah
15:17:24 i.e. I do not see why OpenStack as it is now would not be able to satisfy all the requirements
15:17:50 i.e. starting a VM in a central cloud to perform any kind of workload is something that is rather easy right now ;)
15:18:20 do you mean that is a good place to start for this document, or that it is too simple?
15:18:39 I think it is too simple
15:18:54 ok
15:18:57 i.e. what kind of obstacles can get in the way of such a use case?
15:19:13 sorry, let me reword: I do not see what is complex in such a ''simple'' use case
15:19:24 where you have to start one VM to virtualize network functionality
15:19:48 perhaps the only additional thing I was thinking of was automated scale-up/down with Heat
15:19:52 maybe I'm missing one point (because I'm not familiar with it) but it looks like there is no challenge there
15:19:57 yes
15:20:04 so something that is rather well supported now
15:20:09 I think part of the problem is that even that is quite challenging for a lot of people
15:20:31 all the functionality is there, but lots of times organizations get stuck trying to do too much
15:20:44 it is a feedback loop (i.e. a MAPE-K loop)
15:20:45 and they ultimately fail with OpenStack and NFV
15:20:57 ok, maybe I'm missing something
15:21:15 you have an application, you monitor it and scale in/out according to your requirements?
15:21:22 Heat can help you to do that, can't it?
15:21:27 right, yup it can
15:21:28 so there are the large deployment team and the multi-cloud team already, maybe we should focus on something different
15:22:08 or perhaps tie them together
15:22:21 I was thinking this group could do a lot in terms of bringing everyone together
15:22:29 that is one good point
15:22:31 so you believe there are still challenges to address with a cloud that is deployed in only one location?
15:22:59 ad_rien_: I do, I see it at work with single-site NFV-style clouds
15:23:03 people have tons of issues even doing that
15:23:10 such as?
15:23:24 NFV apps require stricter SLAs and more performance from the infra than typical cloud apps
15:23:34 such as thinking OpenStack is VMware with instance HA
15:23:35 (apart from being trained on OpenStack)
15:23:43 and getting paralyzed by requirements
15:23:49 not understanding what Neutron is
15:23:52 which is not a cloud-like approach
15:23:58 ok, sorry
15:24:02 and also trying to implement SDN and failing :)
15:24:14 lots of those are more organizational issues
15:24:19 less technical
15:24:26 yes, but is it due to technical/scientific barriers?
15:24:31 yes, exactly what I mean
15:24:40 right
15:25:01 people and culture issues, lots of misunderstandings about OpenStack and scale-out/elastic workloads
15:25:04 I can be wrong (i.e. in the sense that this group can support people by providing guidelines etc. to run NFV applications in a central cloud)
15:25:45 maybe what I will do is go through some of the existing NFV use cases the working groups have put forward, list them in the document, and we can try to pick one
15:25:54 at the next meeting
15:25:55 but from my side (i.e. Inria) we are interested in identifying the barriers/obstacles that prevent OpenStack from being the prevalent solution for operating a geo-distributed cloud
15:26:22 right, and that is certainly not a solved problem, lots of work to do there
15:26:33 such a geo-distributed cloud is mandatory for several use cases, including some NFV ones
15:26:44 agree
15:27:16 ok, let me take an action to review some existing use cases and we can discuss again next meeting
15:27:19 sound good?
15:27:22 ok
15:27:23 for me
15:27:29 +1
15:27:39 #action serverascode review existing use cases and list in document for discussion next meeting
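The Heat-driven scale-out/in loop discussed above (monitor the application, then grow or shrink it with demand) can be sketched as a small template. This is only an illustrative sketch, not something produced in the meeting: the image, flavor, network and group sizes are placeholders, and it assumes a monitoring alarm (Ceilometer/Aodh, for example) is pointed at the two webhook URLs to close the loop.

heat_template_version: 2016-04-08

description: >
  Illustrative autoscaling group for a stateless VNF worker.
  All names and sizes below are placeholders.

resources:
  vnf_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 4
      resource:
        type: OS::Nova::Server
        properties:
          image: vnf-image      # placeholder image
          flavor: m1.small      # placeholder flavor
          networks:
            - network: vnf-net  # placeholder network

  scale_out_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: vnf_group }
      scaling_adjustment: 1
      cooldown: 60

  scale_in_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: vnf_group }
      scaling_adjustment: -1
      cooldown: 60

outputs:
  scale_out_url:
    description: webhook an alarm can call to add one instance
    value: { get_attr: [scale_out_policy, alarm_url] }
  scale_in_url:
    description: webhook an alarm can call to remove one instance
    value: { get_attr: [scale_in_policy, alarm_url] }

An alarm on CPU or connection rate firing against those webhooks gives the feedback (MAPE-K) loop mentioned above: monitor, analyze against a threshold, plan an adjustment, and let Heat execute it.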
15:27:55 #topic Open Discussion
15:28:09 do either of you have another topic you'd like to discuss?
15:28:42 anything to bring up for the OpsMeetup, if there are people in the session?
15:28:44 * ad_rien_ is looking for a link
15:29:20 ?
15:29:40 I don't have anything right now
15:29:47 one short piece of info from my side: the Meghdwar WG has been stopped: https://wiki.openstack.org/wiki/Meghdwar
15:29:54 #link https://etherpad.openstack.org/p/MIL-ops-meetup
15:29:58 they will join the Massively Distributed WG effort
15:30:13 ad_rien_: there is an operators meetup in Milan
15:30:24 yes
15:30:44 please go ahead (I just wanted to share the information regarding Meghdwar)
15:30:59 is this like fog computing?
15:31:29 they tried to propose a reference architecture for fog computing, yes
15:31:56 but they did not find enough contributors, if I understood correctly
15:32:11 ok, I don't think I'd heard of that project before
15:32:49 me neither :)
15:32:59 and I don't even know how to pronounce it
15:33:01 but it's good they went through the effort to try, and then officially closed
15:33:10 actually, several efforts try to address the same challenge but under different names…
15:33:30 I spent many hours trying to identify all of them, but it seems to be a problem that will never end
15:33:54 like too many projects doing the same thing?
15:33:59 yes
15:34:20 it is hard to federate all the efforts, and even harder to prevent reinventing the wheel each time
15:35:00 ok, anything else to mention?
15:35:18 if not I guess we can close off the meeting early
15:35:26 not from me
15:35:34 from the Massively Distributed WG side, we are starting to perform WAN-wide evaluations of OpenStack
15:35:59 any values or orders of magnitude in terms of latency/bandwidth for NFV use cases would be highly appreciated
15:36:09 :-)
15:36:31 so what does WAN-wide mean? like connecting two OpenStack cloud networks, like EVPN and such?
15:36:31 sounds interesting
15:36:41 no
15:36:55 like a single OpenStack instance that operates remote compute nodes
15:37:00 ah ok
15:37:01 (basic scenarios)
15:37:13 then we will investigate more complex scenarios such as multi-region
15:37:24 and finally we hope to be able to investigate Cells v2 scenarios
15:37:43 RabbitMQ in each AZ?
15:37:53 we want to identify technical/scientific barriers that would lead to revising OpenStack internals
15:38:07 shintaro: this is indeed one of the questions ;)
15:38:11 cool, are there some central documents you could link to?
15:38:25 for the moment we are investigating RabbitMQ, but folks from Red Hat are pushing Qpid
15:38:35 and we should also evaluate ZeroMQ
15:38:39 ah, Red Hat :)
15:38:54 http://enos.readthedocs.io/en/latest/
15:39:04 http://enos.readthedocs.io/en/latest/network-emulation/index.html
15:39:24 for further information please refer to the Massively Distributed WG etherpad
15:39:32 ok
15:39:53 #link https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017 Massively Distributed WG etherpad
15:39:56 we upstreamed AZ support in Neutron for a similar use case
15:40:02 that's all from my side, thanks
15:40:13 shintaro: ?
15:40:58 Neutron resources did not have an AZ concept, so when you wanted to use distributed routers you could not specify an AZ
15:41:05 ok
15:41:28 that looks interesting.
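As an illustration of the Neutron availability-zone support shintaro mentions: networks and routers can now carry AZ hints so that their DHCP and L3 agents are scheduled in a given zone. A rough sketch, assuming the network AZ extensions are enabled; the zone and resource names below are made up, not taken from the meeting:

# list the availability zones Neutron knows about
openstack availability zone list --network

# ask for the network's DHCP agents and the router's L3 agents
# to be scheduled in a specific zone (names are illustrative)
openstack network create --availability-zone-hint az1 vnf-net
openstack router create --availability-zone-hint az1 vnf-router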
15:41:39 ok well thank you both very much for attending the meeting :)
15:41:45 we will chat in a couple of weeks
15:41:47 thanks for chairing
15:41:48 ok
15:41:49 thank you
15:41:52 ++
15:41:54 #endmeeting