15:01:09 #startmeeting fog-edge-massively-distributed-clouds
15:01:10 Meeting started Wed Jan 31 15:01:09 2018 UTC and is due to finish in 60 minutes. The chair is rcherrueau. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:13 The meeting name has been set to 'fog_edge_massively_distributed_clouds'
15:01:15 #chair ad_ri3n_
15:01:19 #topic roll call
15:01:22 Hi guys
15:01:43 let's see who will be there today
15:01:54 #chair ad_ri3n_1
15:01:55 Current chairs: ad_ri3n_1 rcherrueau
15:01:59 parus and a few folks already apologized
15:02:06 #topic roll call
15:02:19 o/
15:02:26 and actually I'm also travelling, so the meeting should be rather short ;)
15:02:31 o/
15:02:31 Hi kgiusti
15:02:42 hello everyone!
15:02:46 hi ansmith
15:03:08 msimonin has some constraints and we (Alex, me, …) are travelling
15:03:24 So I don't think we will have a long meeting today ;)
15:03:33 anyone else?
15:03:44 #info agenda
15:04:00 #link https://etherpad.openstack.org/p/massivfbely_distributed_ircmeetings_2018 (line 140)
15:05:12 So maybe the most important one: we started a gap analysis of the different OpenStack deployment scenarios according to the features end-users/developers of edge computing infrastructures may expect.
15:05:28 I will come back to it later, as I would like to ask you to go and comment on it.
15:05:55 Regarding the edge sessions that are held simultaneously, yesterday's meeting was cancelled
15:05:58 so nothing new
15:06:23 finally, regarding the presentations, I listed the ones I'm aware of
15:07:04 I'm not sure there will be other ones
15:07:21 if you are aware of others, please add them.
15:07:39 kgiusti: ansmith: maybe some edge presentations from Red Hat have already been planned for submission?
15:08:18 I'm not specifically aware of any, but we'll ping the right people - ansmith?
15:08:20 I am not aware but will look into it
15:08:39 ok thanks
15:09:23 any news from your side?
15:09:32 Hi all :)
15:09:40 #topic ongoing-action-amqp
15:09:49 kgiusti: I saw you put a lot of text.
15:10:04 please
15:10:08 Yeah, spent some time working on a plan for dragonflow integration
15:10:10 can you give us an overview?
15:10:33 Yes - there's interest in adding oslo.messaging support for the dragonflow project
15:10:40 Hi. Sorry I'm late. I thought this was next week.
15:10:44 url: https://wiki.openstack.org/wiki/Dragonflow
15:10:46 Hi oanson, no worries
15:10:47 thanks
15:10:53 hi oanson!
15:11:02 (BTW the meeting today will be a bit shorter as most of us are travelling today)
15:11:03 we're talking dragonflow btw
15:11:12 kgiusti: please go ahead ;)
15:11:13 Good timing oanson :)
15:11:15 Great! I'll catch up now :)
15:11:17 tl;dr
15:11:33 oslo.messaging's current API is not exactly a good fit
15:12:12 emulating the API used by dragonflow's pub/sub clients in o.m. would be very heavyweight (in terms of resource use - queues, connections)
15:12:41 now investigating a different approach - possibly a bespoke AMQP 1.0 driver for dragonflow instead
15:12:52 +1
15:13:07 or a quick 'n' dirty POC over oslo.messaging to get an accurate sense of cost
15:13:15 that's about it
15:13:41 My understanding was that rabbitmq can't handle the distributed load we need?
15:13:47 Or did I miss something?
15:14:10 kgiusti: you know I'm in for the new oslo.messaging API
15:14:33 I really think other projects can benefit from it
15:14:39 oanson: amqp + router may be able to scale higher
15:14:59 Sure. That makes sense
15:15:08 oanson: for a distributed use case, but msimonin et al. are testing that ATM
15:15:30 msimonin: yeah I think it would be very handy too
15:15:35 I see. That's the ombt project?
15:15:36 rabbitmq is probably not a good fit in a massively distributed environment
15:15:41 ok, what are the actions to conduct?
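[Editor's note: a minimal toy model of the resource cost kgiusti describes above. Emulating a dragonflow-style pub/sub on a conventional broker typically means one dedicated queue (and often one connection) per subscriber per topic, so broker-side resources grow multiplicatively with subscribers and topics. All names and numbers here are illustrative; this is not oslo.messaging or dragonflow code.]

```python
# Toy topic-based pub/sub where every subscription gets its own queue,
# mimicking the per-subscriber fan-out a broker emulation would need.
from collections import defaultdict
from queue import Queue


class ToyBroker:
    """Minimal topic pub/sub with one broker-side queue per subscription."""

    def __init__(self):
        self._subs = defaultdict(list)  # topic -> [Queue, ...]

    def subscribe(self, topic):
        q = Queue()
        self._subs[topic].append(q)
        return q

    def publish(self, topic, msg):
        for q in self._subs[topic]:  # fan out: one copy per subscriber queue
            q.put(msg)

    def queue_count(self):
        return sum(len(qs) for qs in self._subs.values())


broker = ToyBroker()
# 3 hosts each subscribing to 2 topics -> 3 x 2 = 6 broker-side queues
queues = [broker.subscribe(t) for t in ("ports", "routers") for _ in range(3)]
broker.publish("ports", "port-update")
print(broker.queue_count())                 # 6
print(sum(not q.empty() for q in queues))   # 3 subscribers received the update
```

The multiplicative growth of `queue_count()` with hosts and topics is the "very heavyweight" cost kgiusti mentions, and the motivation for considering a bespoke AMQP 1.0 driver instead.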
I guess kgiusti you will discuss this point during the PTG?
15:16:18 ad_ri3n_1: 2 things: propose the new API at the oslo PTG, and build a POC for dragonflow to kick the tires a bit
15:16:34 kgiusti: +1
15:16:51 ok
15:16:51 sadly next week I'm stuck in meetings so I probably won't get much real work started until the week after :(
15:17:13 yes, it seems we are all busy with meetings and project submissions :(
15:17:15 ok
15:17:21 kgiusti: thanks for the effort you have put into this!
15:17:28 so maybe a new update next time?
15:17:43 ad_ri3n_1: yep I think that's reasonable
15:17:50 msimonin: my pleasure!
15:17:54 (thanks msimonin ;) a bit stressed today, especially because I have to catch my train in 10 min max)
15:18:15 I can back you up ad_ri3n_1 if needed
15:18:41 cool
15:18:42 ;)
15:18:51 I was not sure you could attend today.
15:19:16 OK, maybe I can say a few words regarding the gap analysis, especially because I added a todo for myself regarding dragonflow
15:19:31 kgiusti: anything else? Can I move to the next topic?
15:19:43 take the floor, sir!
15:19:49 Thanks
15:19:50 #topic ongoing-action-gap-analysis
15:20:15 so we spent a couple of hours with Inria and Ericsson folks to produce the following etherpad
15:20:23 #link https://etherpad.openstack.org/p/edge-gap-analysis
15:21:08 our goal was to conduct a gap analysis of the different deployment scenarios administrators can leverage to operate a middle-edge infrastructure
15:21:41 in the first part we defined the topology we consider
15:22:08 in the second part we defined the expected features (i.e. operations you would want to perform either as a developer/user of the edge infrastructure or as an administrator)
15:22:15 We defined different levels, 7 actually
15:22:35 The first level includes the simplest operations, such as provisioning a VM locally
15:22:47 The last level addresses interoperability challenges
15:23:22 in the third part, we analysed different scenarios (i.e.
openstack vanilla, cells, regions, federations) according to the aforementioned expected features
15:23:41 It would be great if you could go through the analysis and give us comments/feedback
15:24:15 especially oanson
15:24:30 Sure. Will do.
15:24:30 it would be nice to integrate the dragonflow proposal
15:24:40 into the different deployment scenarios
15:24:44 Would be great for us too :)
15:24:48 for instance dragonflow and regions
15:24:53 how does this behave?
15:24:58 so either you can do that offline
15:25:08 or you can take part in our next discussion, which is scheduled for Friday
15:25:20 Feb 9th
15:25:29 at 15:30 Paris time
15:25:35 that's all from my side ;)
15:25:51 Sure. I'll let you know one way or the other.
15:25:58 What you mean ad_ri3n_1 is to get a tenant network spanning different regions?
15:26:00 great
15:26:07 msimonin: yes
15:26:09 Level 3
15:26:11 features
15:26:53 sorry, Level 4 features
15:27:00 it would be great actually if oanson could give us some insight on "distributed" tenant networks: a tenant network over several regions/sites
15:27:06 oanson: adrien.lebre@inria.fr ;)
15:27:13 msimonin: +1
15:27:22 * ad_ri3n_1 must leave :(
15:27:27 @msimonin can you please chair
15:27:31 #chair msimonin
15:27:32 Current chairs: ad_ri3n_1 msimonin rcherrueau
15:27:39 Ok :)
15:27:41 I apologize but I really have to go
15:27:46 thanks @msimonin
15:27:48 msimonin, are we talking one OpenStack distribution across sites, or multiple distributions with e.g. l2gw?
15:27:56 ++
15:28:18 I would say the pros/cons of both solutions
15:28:31 to address the feature levels of the document
15:28:49 btw oanson, are you going to the PTG?
15:28:57 I probably missed this information
15:29:04 yes on the PTG
15:29:20 I'll check if I registered in the etherpad after the meeting
15:29:27 ok
15:30:02 From the Dragonflow PoV, a single deployment is easiest. Easier to configure and manage, and the distributed DB (e.g.
etcd, redis) and pub/sub take care of synchronizing the controllers and hosts
15:30:02 Any questions regarding this topic?
15:30:33 oanson: ack
15:31:31 I think we can move on to the next topic
15:31:36 # open discussions
15:31:44 #topic open discussions
15:32:29 Anything to share, kgiusti, ansmith, oanson, rcherrueau (or other people)?
15:32:38 Nothing from me
15:32:47 nothing at this point
15:32:52 I'm done
15:33:26 me neither
15:34:17 Ok let's end this :)
15:34:43 Have a nice day/evening everyone!
15:34:47 #end meeting
15:35:00 #endmeeting
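[Editor's note: a minimal sketch of the synchronization pattern oanson describes above, where each site's controller mirrors a shared distributed DB (etcd, redis, …) and a pub/sub "watch" keeps every local cache in sync. The in-memory store below stands in for the real distributed DB; all class and key names are illustrative, not Dragonflow code.]

```python
# Controllers register a watch callback on a shared store; every write
# is fanned out to all registered watchers, so local caches converge.


class WatchedStore:
    """Key-value store that pushes every write to registered watchers."""

    def __init__(self):
        self._data = {}
        self._watchers = []

    def register(self, callback):
        self._watchers.append(callback)

    def put(self, key, value):
        self._data[key] = value
        for notify in self._watchers:  # pub/sub fan-out of the update
            notify(key, value)


class Controller:
    """Edge-site controller mirroring the shared store locally."""

    def __init__(self, store):
        self.cache = {}
        store.register(self._on_update)

    def _on_update(self, key, value):
        self.cache[key] = value


store = WatchedStore()
site_a, site_b = Controller(store), Controller(store)
store.put("port/42", {"host": "edge-1"})
print(site_a.cache == site_b.cache)  # True: both sites converged
```

This is why a single deployment with a shared DB is the easiest case from the Dragonflow point of view: consistency across sites falls out of the watch mechanism rather than requiring cross-deployment glue such as l2gw.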