13:00:40 #startmeeting Review of Dublin edge notes 03
13:00:41 Meeting started Fri May 11 13:00:40 2018 UTC and is due to finish in 60 minutes. The chair is csatari. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:42 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:45 The meeting name has been set to 'review_of_dublin_edge_notes_03'
13:00:52 Hi
13:00:56 Mornin'
13:01:10 #topic Roll Call
13:01:19 #info Eric Sarault - eric.sarault@kontron.com
13:01:45 #info Gergely Csatari - gergely.csatari@nokia.com
13:02:37 #info Vishnu Ram - vishnu.n@ieee.org
13:02:39 csatari, hello! I am just interested here and watching your meeting :)
13:02:56 welcome
13:03:01 Thank u :)
13:04:30 Let's wait 2 more mins. I have ~5 accepted responses to the meeting invite.
13:04:53 * esarault runs to grab a coffee
13:05:51 #topic Vancouver forum sessions
13:06:26 * esarault is back
13:06:36 We will have two forum sessions in Vancouver related to edge requirements vs OpenStack projects.
13:07:14 Are the times and dates set?
13:07:46 #info Possible edge architectures for Keystone: Tuesday, May 22, 11:00am-11:40am - Room 224
13:08:02 #link https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21737/possible-edge-architectures-for-keystone
13:08:18 #link https://etherpad.openstack.org/p/YVR-edge-keystone-brainstorming
13:08:39 Please review the discussion topics in the etherpad and add your own.
13:09:08 #action All to check the forum etherpads.
13:09:21 And the other one is
13:10:08 #info Image handling in an edge cloud infrastructure: Tuesday, May 22, 9:00am-10:30am - Room 221-222
13:10:20 #link https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure
13:10:29 This one has no etherpad yet.
13:11:29 #action csatari to add an etherpad to the Image handling session and/or discuss with Erno Kuvaja
13:12:02 These are the two Forum sessions I remember on the topic.
13:12:33 #topic Correction of previous comments
13:12:54 #info I've corrected the comments received in the previous meetings.
13:13:03 Okay
13:13:10 Let's jump to the review part.
13:14:12 #topic Review of 5.2.2.7 Administration features
13:14:23 #link https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG#Administration_features
13:14:56 Does anyone know what an rsc is?
13:15:12 I copied this from an etherpad, but I have no idea what it is.
13:16:16 #action (5.2.2.7) csatari to figure out what an rsc is.
13:16:58 The only thing I can think of is rack scale compute or something around that
13:17:13 a bare metal is a bare metal, can't get more basic than that
13:17:19 :)
13:17:22 Agreed
13:17:58 Any other comments on this section?
13:18:06 Negative
13:18:22 remote site controller I think
13:18:44 #topic Review of 5.2.2.8 Multiple cloud stacks
13:18:53 Thanks vrv_
13:19:08 I will run a Google search to double check it :)
13:19:17 #link https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG#Multiple_cloud_stacks
13:20:18 #action (5.2.2.8) csatari to correct typo "instancesand"
13:20:32 Any comments on this section?
13:20:39 Any targeted versions?
13:20:49 1.10 and Queens?
13:21:11 It would be too optimistic to talk about targeted versions I think :)
13:21:14 Here's assuming there's a line in the sand that needs to be drawn with the K8S 3-month release cycle
13:21:37 Given we're looking at a 12 month update window at best for some of the use cases
13:22:57 Should we build this limitation into the solution?
13:23:32 Not sure
13:23:40 Just wondering how that'll impact the upgrade process
13:23:48 I think it should be the edge cloud infrastructure administrator's decision how closely they would like to follow the new releases of their components.
13:24:00 True, it shouldn't be built in
13:24:14 * ildikov is sneaking into the room and lurking :)
13:24:15 but then how do you ensure that capability remains viable when upgrading
13:24:41 Last thing they'll want to hear is "Process update and good luck mate!"
13:24:43 The how we do not know yet.
13:24:58 Gotcha
13:25:29 Here the requirement is that the edge cloud infrastructure should work with non-homogeneous versions in the edge cloud instances.
13:25:48 The how we will figure out later ;)
13:25:53 And that's perfect. Dug into the details, my mistake :)
13:26:02 This part is about what we want to solve and what not.
13:26:53 Okay
13:27:00 moving on then
13:27:20 #topic Review of 5.2.2.9 Multi operator scenarios
13:27:34 #link https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG#Multi_operator_scenarios
13:28:27 In general I have a feeling that this section is a bit raw.
13:29:01 Yeah, the content is light for the heavy implications it has
13:29:27 Connectivity should be a separate sub-section and not under security IMO
13:29:37 Connectivity is a rabbit hole all by itself
13:29:43 :)
13:30:05 This is about securing the communication of the VMs/containers of one app.
13:30:21 newbie question: is federation for auth in scope of this?
13:31:12 vrv_: You mean that the user data is synched, so the same credentials can be used in all edge cloud instances?
13:32:09 I didn't think that far, I was wondering whether such mechanisms could be discussed under this or not.
13:32:23 This is implicitly part of 5.2.2.2 Use of a remote site
13:32:38 Where we say this: "Credentials are only present on the original site"
13:33:09 Okay, is that true for multi-operator scenarios too?
13:33:36 I think yes.
13:33:44 Are we saying each edge site has its own controllers for OpenStack and K8s?
What if an operator wants to control multiple edge locations nearby out of one region, and have a backup site within that "geo" to cut down on orchestration cost and have more compute resources available? You might have a site that has K8S and another site within the same "geo" that has OpenStack?
13:35:10 esarault: Yes, the idea is every edge cloud instance is a full OpenStack or Kubernetes instance.
13:35:35 The 2nd part of the question I could not get.
13:35:46 Train of thought is
13:35:53 and this comes from dealing with customers building 5G networks now
13:36:04 The issue at the base of radio towers and such is cabinet space
13:36:18 Some don't have more than 4-8U
13:36:27 So if you take half of that for orchestrators
13:36:41 that significantly reduces the amount of VNFs you can run there
13:36:49 We should not use half of that for the control plane.
13:37:01 Depends on the hardware they have
13:37:11 if you go with COTS, it'll be hard
13:37:20 This means you'll need tailored hardware to run in those sites
13:37:24 I agree with that train of thought.
13:37:33 Or
13:37:54 We should be able to scale down OpenStack and Kubernetes to a reasonable scale.
13:38:05 Oh that would be lovely
13:38:23 Best example I can give is Wind River Titanium Cloud
13:38:32 For it to be considered at every site
13:38:44 It would need to shrink down to Titanium Cloud Edge/Edge SX type of scale
13:38:50 1 or 2 servers max to run OpenStack
13:38:59 So we should definitely look at what'll be in Akraino Release A
13:39:19 Because that's the use case I'm seeing asked for by 5G deployers right now
13:39:45 Shrinking that footprint for the orchestration is definitely the ideal solution here
13:40:11 But then how do you monitor all those sites in one single pane of glass?
13:40:17 Aren't there single server instances of OpenStack running today?
13:41:05 Well, I've sent you some Nokia marketing privately.
13:41:53 Back to business. I feel that we should formulate some non-functional requirements.
13:42:29 I'll add an open question to the wiki about this, as this needs more discussion.
13:43:43 #action csatari to add a question to the wiki: "What should be the size of all the OpenStack components in the different edge deployment scenarios in terms of CPU, memory, disk and hardware units?"
13:43:47 Is this ok?
13:44:04 Looks good
13:44:19 Cool
13:44:19 My point is, not all those sites have 36" depth capability
13:45:18 Should we add this as a note to the Small edge chapter?
13:46:00 Yes, that would be a good idea
13:46:13 I've seen requirements as small as 12" depth
13:46:55 #action (4.1) csatari to add a note that not all those sites have 36" depth capability, some even have only 12" depth
13:47:18 How can we relate depth to compute/storage resources?
13:47:53 I suspect 12" means you would have one third of the capacity compared to 36", right?
13:48:12 It's more the type of processor and the amount of drives you can put there
13:48:25 With 12", you'll get two ARM or Xeon-D type processors
13:48:30 4 DIMM memory at best
13:48:35 with 2x 2.5" if lucky
13:48:41 probably 2x M.2 SSD instead
13:49:01 Whereas with 36" you'll be able to get Xeon Scalable or AMD EPYCs with crazy storage and I/O
13:50:05 That is great info... Why don't we express it this way? I hear there is a lot of interest in ARM today... especially at the edge.
13:50:25 Problem is, with "edge washing" they get blended in the same smoothie and this means you'll always have a piece of the edge story that won't be covered
13:51:29 The challenge with ARM is the ecosystem
13:51:35 #action (4.1) csatari to add the hardware related info to the Deployment Scenarios.
13:51:38 Any (more) comments on 5.2.2.9?
13:51:55 No, all good, we're diverging from the meeting purpose here I believe :)
13:52:05 I would like to add a note that the section is still under work or something, but that is true for the whole wiki...
13:52:14 Yes, a bit.
13:52:40 But it is an interesting discussion ;)
13:52:59 I agree :)
13:54:04 Is there a need to include the operator name in the descriptor of the edge node?
13:54:17 I will add some text to 5.2.2.9 saying that it is even more draft than the other parts of the page.
13:54:37 parus: I do not think so.
13:55:11 What we need is a unique ID for every edge cloud instance in the edge cloud infra and a unique ID for every node in the edge cloud instance.
13:55:56 Then somewhere central, we would need to map ID to operator.
13:56:07 #action (5.2.2.9) csatari to add a text to the chapter saying that it is even more draft than the other parts of the page.
13:56:42 #topic Vancouver edge computing group gathering
13:56:54 csatari: can we make a note that operator to Edge ID mapping needs consideration?
13:57:00 Do you have any willingness to have an ad hoc meeting in Vancouver?
13:57:11 parus: Sure, we can.
13:57:14 Sure thing
13:57:15 Which chapter?
13:57:25 5.2.2.9
13:57:34 I will create a Doodle for us.
13:58:00 csatari: I would like to join.
13:58:07 #action (5.2.2.9) csatari to add a note that operator to Edge ID mapping needs consideration.
13:58:55 #action csatari to create a Doodle for a (not that) ad hoc edge computing group gathering in Vancouver
13:59:08 #topic Next steps
13:59:24 #info we will continue from 5.3 Requirements
13:59:40 #action csatari to organize the next meeting.
13:59:55 csatari: Thanks for leading this discussion!!! Great job!
13:59:55 Thanks for the discussion.
14:00:01 Thanks for organizing this csatari!
14:00:09 Thanks parus
14:00:10 thanks
14:00:15 Looking forward to seeing you guys in Vancouver
14:00:23 #endmeeting
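
Editor's note: the ID scheme discussed under 5.2.2.9 (a unique ID per edge cloud instance, a unique ID per node within an instance, and a central ID-to-operator mapping) could be sketched as below. This is a minimal illustration only, not a design agreed in the meeting; all names (`EdgeRegistry`, `register_instance`, etc.) are hypothetical, and the "how" was explicitly left open.

```python
import uuid


class EdgeRegistry:
    """Hypothetical central registry for the mapping raised by parus:
    edge cloud instance ID -> operator, plus per-instance node IDs."""

    def __init__(self):
        # instance_id -> {"operator": name, "nodes": set of node IDs}
        self._instances = {}

    def register_instance(self, operator):
        """Assign a globally unique ID to a new edge cloud instance."""
        instance_id = str(uuid.uuid4())
        self._instances[instance_id] = {"operator": operator, "nodes": set()}
        return instance_id

    def register_node(self, instance_id):
        """Assign a unique ID to a node inside an existing instance."""
        node_id = str(uuid.uuid4())
        self._instances[instance_id]["nodes"].add(node_id)
        return node_id

    def operator_of(self, instance_id):
        """Resolve an instance ID to its operator, centrally."""
        return self._instances[instance_id]["operator"]


if __name__ == "__main__":
    registry = EdgeRegistry()
    site = registry.register_instance(operator="operator-a")
    node = registry.register_node(site)
    print(registry.operator_of(site))  # prints "operator-a"
```

Note that the operator name lives only in the central mapping, not in the edge node descriptor itself, matching csatari's position at 13:54:37.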