14:00:21 <mestery> #startmeeting networking_ml2
14:00:22 <openstack> Meeting started Wed Oct  2 14:00:21 2013 UTC and is due to finish in 60 minutes.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:23 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:26 <openstack> The meeting name has been set to 'networking_ml2'
14:00:33 <mestery> #link https://wiki.openstack.org/wiki/Meetings/ML2 Agenda
14:01:06 <mestery> So, I'd like to spend 15-20 minutes on bugs and docs, and the remainder of the meeting on Icehouse planning.
14:01:18 <mestery> #topic Bugs and Docs
14:01:30 <mestery> rkukura: Can you update us on docs?
14:01:43 <rkukura> OK
14:02:14 <rkukura> emagana is doing a 1st pass at updating the cloud admin guide
14:02:23 <rkukura> review is at https://review.openstack.org/#/c/49308/
14:02:45 <rkukura> Does anyone have progress to report on the other guides?
14:03:03 <Sukhdev> I have not been able to start yet
14:04:13 <rkukura> I don't see reviews attached to any of the other doc bugs
14:04:34 <mestery> OK. We'll have to keep an eye on those reviews this week to ensure some progress is made on at least some of them.
14:04:36 <rkukura> I guess that's it for now, but we need to make progress
14:04:53 <mestery> rkukura: Thanks and I agree.
14:05:11 <mestery> On the bug front, are there any ML2 bugs we should discuss in the meeting today?
14:05:12 <Sukhdev> I will start on mine either later this week or early next week
14:05:19 <mestery> Sukhdev: Thanks!
14:06:03 <mestery> So I guess on the bug front, ML2 is looking good then!
14:06:03 <Sukhdev> I have a patch out for the Arista cleanup bug - will be pushing an updated patch later today
14:06:07 <rkukura> I just filed https://bugs.launchpad.net/neutron/+bug/1234195 and will propose a fix similar to that for https://bugs.launchpad.net/neutron/+bug/1230330
14:06:09 <uvirtbot> Launchpad bug 1234195 in neutron "ML2 mechanism drivers not called for ports auto-deleted when subnet deleted" [Undecided,New]
14:06:11 <mestery> Sukhdev: Thanks!
14:06:27 <matrohon> I also updated my patch about live-migration
14:06:30 <mestery> rkukura: Awesome, thanks for cleaning both of those up!
14:06:41 <matrohon> https://review.openstack.org/#/c/48261/
14:06:43 <mestery> matrohon: I saw that, thanks for adding similar support to OVS and LB plugins.
14:07:11 <matrohon> as I understood, mark wants it for RC1?
14:07:58 <rkukura> I haven't reviewed it yet, but am slightly concerned its scope may be too big for RC1 or RC2
14:08:04 <mestery> matrohon: I think RC1 was cut already or will be soon.
14:08:10 <mestery> But there will be an RC2 likely.
14:08:43 <rkukura> But I think we should try to get it in
14:08:48 <mestery> markmcclain says that bug is the last blocker for RC.
14:08:58 <mestery> rkukura: Lets review that one ASAP once the meeting is done.
14:09:10 <markmcclain> yes it is the last one I've been waiting on for RC1
14:09:14 <mestery> markmcclain: We're discussing https://review.openstack.org/#/c/48261/
14:09:27 <mestery> OK, lets review this one diligently after the ML2 meeting.
14:09:47 <rkukura> markmcclain: Is this the one rc1 is waiting for?
14:09:52 <mestery> #action rkukura and mestery to review https://review.openstack.org/#/c/48261/ as it's the last RC1 blocker
14:10:02 <markmcclain> yeah mainly because there's a change to the RPC interface
14:10:02 <rkukura> I'll review it ASAP
14:10:12 <markmcclain> ideally we don't want to change those between RC releases
14:10:18 <rkukura> makes sense
14:10:48 <mestery> OK, thanks for clarifying markmcclain.
14:10:55 <mestery> Any other bugs to discuss?
14:10:59 <Sukhdev> what is the timeline for RC2?
14:11:32 <markmcclain> Sukhdev: there is not a definitive date for RC2
14:11:49 <Sukhdev> OK
14:11:55 <markmcclain> it depends on any issues we find in the first RC
14:12:35 <Sukhdev> I will be putting out an updated patch today - wanted to check when it will make it in
14:13:19 <mestery> OK, just a note for everyone in case you missed it: ML2 is now the default plugin for devstack when running with Neutron.
14:13:37 <Sukhdev> saw that - thanks
14:13:55 <mestery> Thanks to rkukura for working with me last week for 2.5 days to track down the tempest failures!
14:14:15 <pcm_> Are there published notes for users wanting to migrate from OVS-> ML2?
14:14:31 <mestery> pcm_: Not yet, we're working on that.
14:14:38 <mestery> And in fact, that leads us to the next topic ...
14:14:44 <mestery> #topic ML2 Icehouse Planning
14:15:10 <mestery> If you look at the agenda, we have a list of ML2 Icehouse items listed there, collected from anyone who wanted to put an idea down.
14:15:18 <rkukura> do we want to consider a conversion tool that populates ML2's tables based on existing OVS tables?
14:15:32 <mestery> rkukura: That would be interesting actually.
14:15:48 <roaet> hello. I am back.
14:16:18 <mestery> roaet: Hi, we're discussing Icehouse planning now.
14:16:42 <mestery> So, the thinking is that we'd like to see if we could group all the ML2 items into maybe two Icehouse sessions.
14:16:44 <roaet> Thank you, I apologize for lateness. Flu season hitting my family hard. Have we done doc-project status?
14:16:48 <rkukura> would such a conversion tool be required at the point when icehouse drops the deprecated plugins?
14:17:03 <mestery> rkukura: That's when it would make most sense, yes.
14:18:06 <mestery> rkukura: I was hoping to walk down the list and discuss the items now, make sense?
14:18:15 <rkukura> sure
14:18:34 <rkukura> I'm not sure the conversion tool would need a session, but maybe be part of one
14:18:48 <mestery> Agreed
14:18:58 <mestery> So, the first item which is on the list is RPC handling in ML2.
14:19:13 <mestery> asomya has looked at this a bit, and we even talked about this a bit in May/June of this year.
14:19:31 <mestery> But deferred doing anything until Icehouse.
14:19:52 <mestery> asomya: Here? Want to elaborate on this one?
14:20:00 <asomya> @mestery sure
14:20:34 <asomya> In a nutshell, we want RPC messages to be sent all the way to the type and mechanism managers instead of the ml2 plugin answering them.
14:21:33 <rkukura> asomya: what about when several mechanism drivers (all the l2 agent drivers) receive the same RPC?
14:21:36 <asomya> This does raise some issues with multiple drivers loaded, so I was thinking we could either implement multiple topics and send all messages to all drivers to be consumed as seen fit, or implement an RPC manager that collects replies from all drivers and sends them down to the sender
14:22:24 <rkukura> asomya: What is motivating this?
14:22:51 <Sukhdev> asomya: what is the goal for doing this>?
14:23:14 <asomya> @rkukura A couple of ML2 drivers that we were trying to implement, including OpenDaylight.. where we need custom RPC calls to the mechanism manager that talks to the controller
14:23:22 <rkukura> For "common" RPC methods like get_device_details(), could common plugin code handle the RPC, find the bound mech driver, and call it?
14:23:45 <mestery> rkukura: For common RPC methods, that makes sense to me.
14:24:03 <asomya> We could take the common code and make a class out of it so that if any mech manager just wants the generic replies.. it can simply inherit that otherwise override calls as needed
14:24:18 <asomya> *mech driver.. not mech manager
14:24:29 <rkukura> does anything prevent mech drivers currently from registering additional RPC methods they handle, or do we need to provide an interface for this?
14:25:08 <matrohon> this is done with type_tunnel, which handle tunnel_sync
14:25:10 <asomya> There shouldn't be anything blocking additional registration, this is for the case where we want to override the common methods as well
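The inheritance approach asomya sketches above could look roughly like this. This is purely illustrative - the class and method names are hypothetical, not actual Neutron ML2 code (only `get_device_details` is a real RPC method name mentioned by rkukura):

```python
# Hypothetical sketch: common RPC handlers live in a shared base class,
# and a mechanism driver that needs custom behavior (e.g. one talking to
# an external controller) overrides only the calls it cares about.

class CommonRpcCallbacks:
    """Generic RPC replies shared by all mechanism drivers."""

    def get_device_details(self, device_id):
        # Default behavior: answer from the plugin's own database.
        return {'device': device_id, 'handled_by': 'common'}


class ControllerMechanismDriver(CommonRpcCallbacks):
    """A driver that needs to answer some RPCs itself."""

    def get_device_details(self, device_id):
        # Override: delegate to the external controller instead,
        # reusing the common reply as a starting point.
        details = super().get_device_details(device_id)
        details['handled_by'] = 'controller'
        return details
```

Drivers that are happy with the generic replies inherit the base class unchanged; only drivers like the OpenDaylight one override specific calls.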
14:26:29 <rkukura> sounds like something we could work out at the summit as part of a session on ML2 driver APIs or something
14:26:53 <asomya> @rkukura yup, i was thinking the same
14:26:55 <mestery> Makes sense to me
14:27:03 <mestery> Lets move on, considering the time and the rest of the list.
14:27:16 <mestery> #action RPC handling in ML2 to be added to an ML2 sessions for Icehouse Summit
14:27:27 <mestery> Next item on the list: More extensible TypeDrivers.
14:27:38 <mestery> This came out of controller-based MechanismDrivers, specifically OpenDaylight.
14:27:55 <mestery> asomya and I hit this, and I know rkukura has talked to cdub about this as well.
14:28:21 <rkukura> is the crux of this that some controllers want network_type to be opaque?
14:28:24 <mestery> Specifically, if a controller wants to control segment allocation, it's possible a TypeDriver may need to make a REST call to a controller.
14:28:29 <mestery> rkukura: Yes.
14:28:45 <rkukura> Really opaque, or just selected by the controller?
14:29:18 <mestery> Both, to some extent.
14:29:56 <asomya> Currently the segmentation management is distributed across the plugin and the type drivers are just selectors. We should move everything segmentation related to the type drivers and the ML2 plugin could just call the type drivers for segmentation ids
14:29:59 <rkukura> A completely opaque network_type should just be a new driver in the existing framework, but the latter might take some work
14:30:09 <Sukhdev> Will this mean the segmentation ID allocation be pushed to controller as well?
14:30:29 <mestery> Sukhdev: That was part of the motivation in the case of ODL, yes.
14:30:32 <rkukura> asomya: Isn't that how it works now?
14:31:23 <asomya> @rkukura from what i saw, the plugin expects an integer segmentation id and stores it in the DB itself.. the network_id is not sent to type drivers.. at least in the case of the vlan type driver that i saw.. it was just selecting an available vlan id and sending it back to the ml2 plugin
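A minimal sketch of the direction asomya proposes - the type driver owning the whole segment lifecycle, with the plugin only storing what the driver returns. All names here are hypothetical and the pool logic is deliberately simplified; the real ml2_type_vlan driver works against the database, not an in-memory set:

```python
# Illustrative only: a type driver that fully owns segment allocation.
# The ML2 plugin would call allocate_segment()/release_segment() and
# never touch the ID pool itself, so a controller-backed type driver
# could replace this logic with a REST call transparently.

class VlanTypeDriver:
    def __init__(self, vlan_min=100, vlan_max=199):
        self._available = set(range(vlan_min, vlan_max + 1))
        self._allocated = {}  # network_id -> vlan id

    def allocate_segment(self, network_id):
        # Pick the lowest free VLAN; the plugin just records the result.
        vlan = min(self._available)
        self._available.remove(vlan)
        self._allocated[network_id] = vlan
        return {'network_type': 'vlan', 'segmentation_id': vlan}

    def release_segment(self, network_id):
        # Return the VLAN to the pool when the network is deleted.
        self._available.add(self._allocated.pop(network_id))
```

The point of the refactor is the interface boundary, not this particular pool implementation: an OpenDaylight-style driver could implement the same two methods by delegating allocation to the controller.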
14:31:45 <rkukura> clearly this is something to discuss at the summit - should it be part of the same "ML2 driver API" session as above?
14:32:11 <rkukura> asomya: I think https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info already addresses at least part of this
14:32:14 <mestery> rkukura: I think that makes sense.
14:32:29 <mestery> rkukura: I forgot about that one, thanks for reminding me!
14:32:37 <mestery> OK, lets add this one to the Summit list as well.
14:32:56 <mestery> #action TypeDriver extension in the context of controller MechanismDrivers to be part of Icehouse Summit sessions.
14:33:14 <rkukura> I'm thinking that BP and the two items we've discussed all relate to each other
14:33:14 <mestery> Next on the list: Monolithic plugins vs. ML2 MechanismDrivers: current and future plans
14:33:24 <mestery> rkukura: Agreed.
14:33:31 <mestery> rkukura: You put this on on the agenda, right?
14:33:34 <mestery> *one
14:33:35 <rkukura> yes
14:34:30 <rkukura> The idea was to get the maintainers of existing monolithic controller-based plugins, and those considering writing new ones, into a room to discuss tradeoffs/obstacles/etc regarding ML2 driver vs. monolithic
14:35:01 <mestery> So, more of a discussion to see if ML2 is missing something to support their needs?
14:35:12 <amotoki> hi, i am interested in this topic too. If ML2 allows additional extension support by mechanism drivers, migration to ML2 becomes easier.
14:35:35 <rkukura> so the issues we discussed already are deep technical details, but I'm looking more at the ecosystem and community aspects of it
14:35:36 <mestery> amotoki: What do you mean by additional extension support? Extension APIs?
14:35:53 <mestery> rkukura: I think this aspect is perfect to start off the ML2 sessions with at the Summit.
14:36:08 <rkukura> and also whether, for example, the L3 and other service plugability need to evolve for this
14:36:14 <amotoki> Some monolithic plugins support extra extensions and that can be a blocker to migration to ML2.
14:36:34 <amotoki> service plugin mechanism may help it of course.
14:36:40 <rkukura> amotoki: that's a great item for that session
14:37:04 <amotoki> that sounds nice though i haven't broken down the problem yet.
14:37:15 <mestery> #action Add Ecosystem and community aspects of ML2 to the Icehouse list
14:37:27 <mestery> I think that one is a no-brainer to add to the Summit list rkukura.
14:37:28 <rkukura> My suggestion is this one is not a BP, but more of a debate
14:37:36 <mestery> rkukura: Agreed.
14:37:38 <rkukura> OK, I'll submit it
14:37:45 <mestery> Cool.
14:37:57 <amotoki> agree.
14:38:28 <mestery> OK, next item on the list: Future directions for ML2 (orchestration, deployment, management, ...)
14:38:36 <mestery> rkukura: Looks like this one is more of a discussion as well, correct?
14:39:14 <rkukura> yes, various ideas were in my ML2 proposals at the last two summits, and others may have ideas too
14:39:23 <rkukura> modular agent is another
14:39:54 <mestery> Yes, I agree.
14:40:05 <mestery> Also, Modular Layer 3 as well, though that may be outside the scope of ML2 perhaps.
14:40:48 <rkukura> so not sure if a free-form session on futures makes sense, or a set of BPs to discuss
14:41:07 <roaet> mestery: I've been interested in working with pluggable/configurable IPAM. Would that be Modular Layer 3?
14:41:17 <Sukhdev> Do we have plans to cover ML3 in Icehouse or only limit it to ML2?
14:41:31 <rkukura> is this where discussion of https://blueprints.launchpad.net/neutron/+spec/campus-network fits?
14:41:46 <mestery> roaet: I think it's separate, modular layer 3 would be allowing multiple disparate L3 agents to work together, similar to how ML2 does it for MechanismDrivers.
14:41:54 <roaet> Ah got it.
14:41:54 <mestery> rkukura: I think so, yes.
14:42:20 <mestery> Sukhdev: I'm throwing ML3 out there, since in Havana L3 was made pluggable, it only makes sense to make it more extensible in the same way as ML2 IMHO.
14:43:14 <rkukura> I was touching on this under the monolithic-vs-driver session as well, but a specific BP to discuss for ML3 would be great
14:43:34 <mestery> #action File blueprint for Modular Layer 3 (ML3), good Icehouse Summit session.
14:43:47 <mestery> OK, moving on to the next item.
14:43:47 <Sukhdev> So, shall we add a session to discuss ML3 extensibility as well?
14:43:56 <mestery> Sukhdev: Yes, just noted that. :)
14:44:04 <mestery> Next item: Migration from deprecated plugins.
14:44:07 <mestery> We touched on this a bit already.
14:44:22 <mestery> I think it's worth talking about this at the Summit as well, perhaps in a combined session.
14:44:26 <rkukura> Seems we might have some ideas how to combine topics into a set of sessions, but cannot finalize that until we have them all
14:44:37 <mestery> rkukura: Yes.
14:45:33 <mestery> So, regarding migration, rkukura, something to add to the Summit list?
14:45:47 <rkukura> not sure how much there is to discuss about it
14:46:11 <mestery> rkukura: Fair point.
14:46:12 <rkukura> We could file a BP for a tool and discuss it
14:46:17 <mestery> true.
14:46:24 <rkukura> but something like 10 minutes might be enough
14:46:28 <mestery> #action File BP for migration from deprecated plugins to ML2.
14:46:30 <mestery> yes
14:46:30 <rkukura> unless there is controversy
14:46:39 <mestery> Heh :)
14:47:04 <mestery> OK, next item, and something matrohon may be interested in:
14:47:18 <mestery> Adding VXLAN multicast into the OVS agent.
14:47:31 <mestery> This is somewhat related to ML2, and was brought up on #openstack-neutron this week, so I thought I'd put it in here.
14:48:18 <mestery> Anyways, this is something which is possible with newer versions of OVS, Linux kernel, and iproute2 packages.
14:48:39 <mestery> And would make LB and OVS with VXLAN very close to the same, as both would use VXLAN ports from the Linux kernel.
14:49:01 <rkukura> Is using it just a matter of turning off the endpoint management, or is there more to it?
14:49:06 <mestery> #action File BP for VXLAN multicast support in OVS plugin using upstream Linux kernel VXLAN ports.
14:49:30 <mestery> rkukura: That may be it, but there may be a bit of underlying configuration required, I need to look into it a bit to verify.
14:50:03 <rkukura> trying to get a sense of which are 10 minute topics vs. needing 40 minutes (or more)
14:50:17 <matrohon> mestery: sounds great, we need to have the same behavior between ovs and lb with vxlan
14:50:18 <Sukhdev> I thought OVS plugin was being deprecated, am I missing something?
14:50:32 <mestery> Sukhdev: I mean OVS agent.
14:50:36 <mestery> With the OVS MD
14:50:40 <Sukhdev> ah thanks
14:50:48 <mestery> matrohon: Yes, I think that's a good goal to have, and it's achievable.
14:51:00 <mestery> rkukura: I think this is a 10 minute topic, or maybe not even worth discussing at the summit.
14:51:04 <mestery> BP may be all that is needed.
14:51:34 <matrohon> agreed, this is just an alignment between the two agents
14:51:51 <mestery> OK, the last topic: Multisegment provider network implementations in MechanismDrivers and OVS.
14:52:11 <mestery> This came up from a discussion amongst apech rcurran myself and a few others yesterday.
14:52:27 <rkukura> mestery: Can you explain?
14:52:33 <mestery> Yes.
14:52:46 <apech> i can try too :)
14:52:56 <mestery> The thinking was that we could modify the OVS agent to support "bridging" provider networks, if needed.
14:53:01 <mestery> apech: Please, jump in here. :)
14:53:14 <mestery> But there was slightly more to it as well, I'm missing some nuance perhaps.
14:53:34 <rkukura> interesting
14:53:53 <apech> so i'm not sure if this needs to be discussed at the summit, but one area of interest is how neutron can define VXLAN gateways (basically a translation of VXLAN to VLAN). This seems to be something a bunch of different plugins solve
14:53:55 <rkukura> this kind of fits in with the futures and orchestration
14:54:14 <rkukura> bridging service?
14:54:29 <apech> yeah
14:54:31 <mestery> rkukura apech: Yes to both.
14:54:32 <mestery> :)
14:54:36 <apech> looking at what's out there, there seem to be a couple possible ways of doing this - provider networks (and multi provider networks), and the Nicira plugin defines a NetworkGateway class
14:54:41 <mestery> So, I think this may be something to discuss at the summit.
14:54:46 <matrohon> you can use l2-pop too
14:54:50 <apech> so I'm not sure what the right way to push this is, but wondering if others have thoughts, or if it is worthwhile discussing
14:54:54 <apech> and what that means for the mech drivers
14:54:56 <mestery> matrohon: How so?
14:55:46 <matrohon> with l2-pop you can tell what next-hop you will use for your vxlan traffic
14:56:02 <matrohon> what tunnel, and so what gateway to use
14:56:16 <apech> sure, but you need some way to define that the gateway exists, and what attributes it has (ie VTEP IP) - does l2-pop provide this?
14:56:22 <mestery> matrohon: Interesting. I think this will be worth discussing in Hong Kong.
14:56:25 <rkukura> seems like a good topic to discuss at the summit - maybe a focus on multi-segment networks
14:56:29 <apech> then you can bind an existing network to that gateway
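The l2-pop idea matrohon describes boils down to a per-network forwarding table mapping remote MACs to VXLAN tunnel endpoints (VTEPs), where a gateway is just one more entry. A rough illustration - the data layout and function names here are invented for clarity, not the actual l2-pop RPC payload, and apech's open question (how the gateway and its VTEP IP get defined in the first place) is exactly what the sketch leaves unanswered:

```python
# Illustrative only: a forwarding table in the spirit of l2-pop.
# Each entry says which VTEP carries traffic for a given remote MAC
# on a given network; a VXLAN->VLAN gateway appears as just another
# VTEP entry once something has told us its IP.

fdb = {}  # (network_id, mac) -> vtep ip

def add_fdb_entry(network_id, mac, vtep_ip):
    # Populated via RPC as ports (or a gateway) come up.
    fdb[(network_id, mac)] = vtep_ip

def next_hop(network_id, mac):
    # Known MACs go straight to their VTEP; unknown traffic would be
    # flooded or sent toward a default gateway VTEP (policy choice).
    return fdb.get((network_id, mac), 'flood-or-gateway')
```

This shows why l2-pop can steer traffic through a gateway, but also why apech's point stands: the mechanism consumes gateway entries, it doesn't define the gateway resource or its attributes.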
14:56:41 <mestery> rkukura: Agreed.
14:56:45 <mestery> Lets add this one to the agenda.
14:56:46 <apech> okay sounds good
14:56:49 <matrohon> mestery: agreed
14:57:01 <mestery> #action Add multi-segment networks in ML2 to the Icehouse Summit list.
14:57:04 <mestery> OK, that was the last one I had.
14:57:07 <rkukura> we should see if others in the meeting have additional topics to suggest
14:57:13 <amotoki> apech: i think a gateway is for communicating between the outside and the neutron network.
14:57:15 <mestery> IF people have more, please add them to the meeting page for now.
14:57:21 <amotoki> Do we have a plan to support multiple provider networks on a compute node? if i remember correctly rkukura presented a similar topic at the last summit.
14:57:42 <mestery> amotoki: That is what we'll discuss in Hong Kong.
14:57:50 <mestery> We could do it using L2 Population it seems.
14:58:00 <amotoki> mestery: nice
14:58:13 <mestery> OK, so please list more ML2 ideas on the meeting page.
14:58:19 <matrohon> amotoki : we also need gateway to communicate between segment
14:58:29 <mestery> Next week, rkukura and I will see which ones we can combine and which ones need more time.
14:58:35 <amotoki> it seems better to create a wiki page such as Neutron/ML2/Havana.
14:58:38 <rkukura> amotoki: Not sure I understand the question - even grizzly supports multiple provider networks
14:58:58 <amotoki> can we use multiple provider networks on a single node?
14:58:58 <mestery> amotoki: This is very temporary for now I think. :)
14:59:08 <mestery> amotoki: Yes.
14:59:33 <amotoki> but we need a single br-int.
15:00:01 <matrohon> one more subject : do we need a new agent with drivers for ovs and lb?
15:00:04 <mestery> We may need to modify the agent to do this or use L2 Pop, we'll discuss more in Hong Kong it seems.
15:00:14 <rkukura> that br-int is managed by a single L2 agent, and can support multiple provider and tenant nets
15:00:18 <mestery> matrohon: Good one! rkukura and I were thinking a combined agent perhaps?
15:00:40 <rkukura> amotoki: Are you looking for multiple mechanisms (agents, etc) on same compute node?
15:00:42 <mestery> #action Discuss a combined Linuxbridge and OVS agent for ML2 as the two converge more and more.
15:00:47 <matrohon> mestery :sounds great!
15:00:50 <mestery> OK, thanks again everyone!
15:00:52 <mestery> #endmeeting