13:00:40 <csatari> #startmeeting Review of Dublin edge notes 03
13:00:41 <openstack> Meeting started Fri May 11 13:00:40 2018 UTC and is due to finish in 60 minutes.  The chair is csatari. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:42 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:45 <openstack> The meeting name has been set to 'review_of_dublin_edge_notes_03'
13:00:52 <csatari> Hi
13:00:56 <esarault> Mornin'
13:01:10 <csatari> #topic Roll Call
13:01:19 <esarault> #info Eric Sarault - eric.sarault@kontron.com
13:01:45 <csatari> #info Gergely Csatari - gergely.csatari@nokia.com
13:02:37 <vrv_> #info Vishnu Ram - vishnu.n@ieee.org
13:02:39 <ianychoi> csatari, hello! I am just interested in here and watching your meeting :)
13:02:56 <csatari> welcome
13:03:01 <ianychoi> Thank u :)
13:04:30 <csatari> Let's wait 2 more mins. I have ~5 accepted responses to the meeting invite.
13:04:53 * esarault runs to grab a coffee
13:05:51 <csatari> #topic Vancouver forum sessions
13:06:26 * esarault is back
13:06:36 <csatari> We will have two forum sessions in Vancouver related to edge requirements vs OpenStack projects.
13:07:14 <esarault> Are the times and dates set?
13:07:46 <csatari> #info Possible edge architectures for Keystone: Tuesday, May 22, 11:00am-11:40am - Room 224
13:08:02 <csatari> #link https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21737/possible-edge-architectures-for-keystone
13:08:18 <csatari> #link https://etherpad.openstack.org/p/YVR-edge-keystone-brainstorming
13:08:39 <csatari> Please review the discussion topics in the etherpad and add your own.
13:09:08 <csatari> #action All to check the forum etherpads.
13:09:21 <csatari> And the other one is
13:10:08 <csatari> #info Image handling in an edge cloud infrastructure: Tuesday, May 22, 9:00am-10:30am - Room 221-222
13:10:20 <csatari> #link https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure
13:10:29 <csatari> This one has no etherpad yet.
13:11:29 <csatari> #action csatari to add an etherpad to the Image handling session and/or discuss with Erno Kuvaja
13:12:02 <csatari> These are the two Forum sessions I remember on the topic.
13:12:33 <csatari> #topic Correction of previous comments
13:12:54 <csatari> #info I've corrected the comments received in the previous meetings.
13:13:03 <csatari> Okay
13:13:10 <csatari> Let's jump to the review part.
13:14:12 <csatari> #topic Review of Administration features
13:14:23 <csatari> #link https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG#Administration_features
13:14:56 <csatari> Does anyone know what an rsc is?
13:15:12 <csatari> I copied this from an etherpad, but I have no idea what it is.
13:16:16 <csatari> #action csatari to figure out what an rsc is.
13:16:58 <esarault> The only thing I can think of is rack scale compute or something around that
13:17:13 <esarault> a bare metal is a bare metal, can't get more basic than that
13:17:19 <csatari> :)
13:17:22 <csatari> Agreed
13:17:58 <csatari> Any other comments to this section?
13:18:06 <esarault> Negative
13:18:22 <vrv_> remote site controller i think
13:18:44 <csatari> #topic Review of Multiple cloud stacks
13:18:53 <csatari> Thanks vrv_
13:19:08 <csatari> I will run a google search to double check it :)
13:19:17 <csatari> #link https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG#Multiple_cloud_stacks
13:20:18 <csatari> #action csatari to correct typo "instancesand"
13:20:32 <csatari> Any comments to this section?
13:20:39 <esarault> Any targeted versions?
13:20:49 <esarault> 1.10 and Queens?
13:21:11 <csatari> It would be too optimistic to talk about targeted versions I think :)
13:21:14 <esarault> Here's assuming there's a line in the sand that needs to be drawn with the K8S 3-month release cycle
13:21:37 <esarault> Given we're looking at a 12-month update window at best for some of the use cases
13:22:57 <csatari> Should we build this limitation into the solution?
13:23:32 <esarault> Not sure
13:23:40 <esarault> Just wondering how that'll impact the upgrade process
13:23:48 <csatari> I think it should be the edge cloud infrastructure administrator's decision how closely they would like to follow the new releases of their components.
13:24:00 <esarault> True, it shouldn't be built in
13:24:14 * ildikov is sneaking in to the room and lurking :)
13:24:15 <esarault> but then how do you ensure that capability remains viable when upgrading
13:24:41 <esarault> Last thing they'll want to hear is "Process update and good luck mate!"
13:24:43 <csatari> The how we do not know yet.
13:24:58 <esarault> Gotcha
13:25:29 <csatari> Here the requirement is that the edge cloud infrastructure should work with non-homogeneous versions in the edge cloud instances.
13:25:48 <csatari> The how we will figure out later ;)
13:25:53 <esarault> And that's perfect. Dug into the details, my mistake :)
13:26:02 <csatari> This part is about what do we want to solve and what not.
13:26:53 <csatari> Okay
13:27:00 <csatari> moving on then
13:27:20 <csatari> #topic Review of Multi operator scenarios
13:27:34 <csatari> #link https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG#Multi_operator_scenarios
13:28:27 <csatari> In general I have a feeling that this section is a bit raw.
13:29:01 <esarault> Yeah, the content is light for the heavy implications it has
13:29:27 <esarault> Connectivity should be a separate sub-section and not under security IMO
13:29:37 <esarault> Connectivity is a rabbit hole all by itself
13:29:43 <csatari> :)
13:30:05 <csatari> This is about securing the communication of the VMs/containers of one app.
13:30:21 <vrv_> newbie question: is federation for auth in scope of this?
13:31:12 <csatari> vrv_: You mean that the user data is synched, so the same credentials can be used in all edge cloud instances?
13:32:09 <vrv_> I didn't think that far; I was wondering whether such mechanisms could be discussed under this or not.
13:32:23 <csatari> This is implicitly part of Use of a remote site
13:32:38 <csatari> Where we say this "Credentials are only present on the original site "
13:33:09 <vrv_> Okay, is that true for multi-operator scenarios too?
13:33:36 <csatari> I think yes.
13:33:44 <esarault> Are we saying each edge site has its own controllers for OpenStack and K8s? What if an operator wants to control multiple edge locations nearby out of one region and have a backup site within that "geo" to cut down on orchestration cost and have more compute resources available. You might have a site that has K8S and another site within the same "geo" that has OpenStack?
13:35:10 <csatari> esarault: Yes, the idea is every edge cloud instance is a full OpenStack or Kubernetes instance.
13:35:35 <csatari> I could not get the second part of the question.
13:35:46 <esarault> Train of thought is
13:35:53 <esarault> and this comes from dealing with customers building 5G networks now
13:36:04 <esarault> The issue at the base of radio towers and such is cabinet space
13:36:18 <esarault> Some don't have more than 4-8U
13:36:27 <esarault> So if you take half of that for orchestrators
13:36:41 <esarault> that significantly reduces the number of VNFs you can run there
13:36:49 <csatari> We should not use half of that for the control plane.
13:37:01 <esarault> Depends on the hardware they have
13:37:11 <esarault> if you go with COTS, it'll be hard
13:37:20 <esarault> This means you'll need tailored hardware to run in those sites
13:37:24 <vrv_> I agree with that train of thought.
13:37:33 <csatari> Or
13:37:54 <csatari> We should be able to scale down OpenStack and Kubernetes to a reasonable scale.
13:38:05 <esarault> Oh that would be lovely
13:38:23 <esarault> Best example I can give is Wind River Titanium Cloud
13:38:32 <esarault> For it to be considered at every site
13:38:44 <esarault> It would need to shrink down to Titanium Cloud Edge/Edge SX type of scale
13:38:50 <esarault> 1 or 2 servers max to run OpenStack
13:38:59 <esarault> So we should definitely look at what'll be in Akraino Release A
13:39:19 <esarault> Because that's the use case I'm seeing asked from 5G deployers right now
13:39:45 <esarault> Shrinking that footprint for the orchestration is definitely the ideal solution here
13:40:11 <esarault> But then how do you monitor all those sites in one single pane of glass?
13:40:17 <parus> Aren't there single-server instances of OpenStack running today?
13:41:05 <csatari> Well I've sent you some Nokia marketing privately.
13:41:53 <csatari> Back to business. I feel that we should formulate some non-functional requirements.
13:42:29 <csatari> I'll add an open question to the wiki about this, as this needs more discussion.
13:43:43 <csatari> #action csatari Add a question to the wiki "What should be the size of all the OpenStack components in the different edge deployment scenarios in terms of CPU, memory, disk and hardware units".
13:43:47 <csatari> Is this ok?
13:44:04 <esarault> Looks good
13:44:19 <csatari> Cool
13:44:19 <esarault> My point is, not all those sites have 36" depth capability
13:45:18 <csatari> Should we add this as a note to the Small edge chapter?
13:46:00 <esarault> Yes, that would be a good idea
13:46:13 <esarault> I've seen requirements as small as 12" depth
13:46:55 <csatari> #action (4.1) csatari to add a note that not all those sites have 36" depth capability; some even have only 12" depth
13:47:18 <parus> How can we relate depth to compute/storage resources?
13:47:53 <parus> I suspect 12" means you would have one third of the capacity compared to 36". right?
13:48:12 <esarault> It's more the type of processor and the amount of drives you can put there
13:48:25 <esarault> With 12", you'll get two ARM or Xeon-D type processors
13:48:30 <esarault> 4 DIMMs of memory at best
13:48:35 <esarault> with 2x 2.5" if lucky
13:48:41 <esarault> probably 2x M.2 SSD instead
13:49:01 <esarault> Whereas with 36" you'll be able to get Xeon Scalable or AMD EPYCs with crazy storage and I/O
13:50:05 <parus> That is great info... Why don't we express it this way? I hear there is a lot of interest in ARM today ... especially at the edge.
13:50:25 <esarault> Problem is, with "Edge washing" they get blended in the same smoothie, and this means you'll always have a piece of the edge story that won't be covered
13:51:29 <esarault> The challenge with ARM is the ecosystem
13:51:35 <csatari> #action (4.1) csatari to add the hardware related info to the Deployment Scenarios.
13:51:38 <csatari> Any (more) comments to
13:51:55 <esarault> No, all good, we're diverging from the meeting purpose here I believe :)
13:52:05 <csatari> I would like to add a note that the section is still under work or something, but that is true of the whole wiki....
13:52:14 <csatari> Yes, a bit.
13:52:40 <csatari> But it is an interesting discussion ;)
13:52:59 <esarault> I agree :)
13:54:04 <parus> Is there a need to include the operator name in the descriptor of the edge node?
13:54:17 <csatari> I will add some text saying that it is even more of a draft than the other parts of the page.
13:54:37 <csatari> parus: I do not think so.
13:55:11 <csatari> What we need is a unique ID for every edge cloud instance in the edge cloud infra and a unique ID for every node in the edge cloud instance.
13:55:56 <parus> Then somewhere central, we would need to map ID to operator.
13:56:07 <csatari> #action csatari to add text to the chapter saying that it is even more of a draft than the other parts of the page.
13:56:42 <csatari> #topic Vancouver edge computing group gathering
13:56:54 <parus> csatari: can we make a note that operator to Edge ID mapping needs consideration?
13:57:00 <csatari> Do you have any willingness to have an ad hoc meeting in Vancouver?
13:57:11 <csatari> parus: Sure, we can.
13:57:14 <esarault> Sure thing
13:57:15 <csatari> Which chapter?
13:57:25 <parus>
13:57:34 <csatari> I will create a doodle for us.
13:58:00 <parus> csatari: I would like to join.
13:58:07 <csatari> #action csatari to add a note that operator to Edge ID mapping needs consideration.
13:58:55 <csatari> #action csatari to create a Doodle for a (not that) ad hoc edge computing group gathering in Vancouver
13:59:08 <csatari> #topic Next steps
13:59:24 <csatari> #info we will continue from 5.3 Requirements
13:59:40 <csatari> #action csatari to organize the next meeting.
13:59:55 <parus> csatari: Thanks for leading this discussion !!! Great Job !
13:59:55 <csatari> Thanks for the discussion.
14:00:01 <esarault> Thanks for organizing this csatari!
14:00:09 <csatari> Thanks parus
14:00:10 <vrv_> thanks
14:00:15 <esarault> Looking forward to seeing you guys in Vancouver
14:00:23 <csatari> #endmeeting