14:00:38 <russellb> #startmeeting nfv
14:00:39 <smazziotta> hi
14:00:39 <openstack> Meeting started Wed Jun 25 14:00:38 2014 UTC and is due to finish in 60 minutes.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:40 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:43 <openstack> The meeting name has been set to 'nfv'
14:00:44 <cloudon> hi
14:00:46 <bauzas> o/
14:00:53 <nijaba> bonour
14:01:07 <russellb> #chair sgordon
14:01:07 <nijaba> bonjour too (sorry for my bad french)
14:01:07 <openstack> Current chairs: russellb sgordon
14:01:09 <russellb> #chair nijaba
14:01:10 <openstack> Current chairs: nijaba russellb sgordon
14:01:16 <russellb> so a couple more people can do meetbot commands
14:01:20 <yamahata> hi
14:01:24 <russellb> #link https://etherpad.openstack.org/p/nfv-meeting-agenda
14:01:30 <nbouthors> hi
14:01:36 <russellb> #topic review actions
14:01:37 <sgordon> needs more shades of pink/purple
14:01:44 <russellb> sgordon: ha, i know ..
14:01:57 <sgordon> so bauzas appears to have made some progress on the nfv dash
14:02:06 <sgordon> #link https://github.com/sbauza/gerrit-dashboard-nfv
14:02:07 <bauzas> yey
14:02:08 <russellb> yeah want to share what you have?
14:02:13 <sgordon> #link http://bit.ly/1iFdldx
14:02:13 <bauzas> sure
14:02:31 <russellb> awesome progress :)
14:02:37 <bauzas> so, I just wrote a quick project for gathering blueprints from the wikipage
14:02:45 <russellb> that's going to be very helpful to me when i go to review
14:02:48 <bauzas> asking launchpad to get the gerrit urls
14:03:01 <nijaba> bauzas: awesome!
14:03:02 <bauzas> and then doing some fancy stuff around it
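A minimal sketch of the flow bauzas describes (not the actual gerrit-dashboard-nfv code): scrape blueprint names from the team wiki page, look up each blueprint's open Gerrit reviews, and print them. Where the real tool goes through Launchpad for the Gerrit URLs, this sketch searches Gerrit commit messages directly; the wiki URL, the +spec link pattern, and the output format are all assumptions.

    import json
    import re

    import requests

    WIKI_URL = 'https://wiki.openstack.org/wiki/Teams/NFV'  # assumed source page
    GERRIT = 'https://review.openstack.org'

    def blueprint_names(html):
        # Launchpad blueprint links end in /+spec/<name>; treat each match
        # as one tracked blueprint (assumed structure of the wiki page).
        return set(re.findall(r'/\+spec/([a-z0-9-]+)', html))

    def open_reviews(bp):
        # Gerrit's REST API prefixes its JSON with ")]}'" to defeat
        # cross-site script inclusion, so strip it before parsing.
        resp = requests.get('%s/changes/?q=status:open+message:%s' % (GERRIT, bp))
        return json.loads(resp.text[4:])

    for bp in sorted(blueprint_names(requests.get(WIKI_URL).text)):
        for change in open_reviews(bp):
            print('%s: %s %s' % (bp, change['_number'], change['subject']))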
14:03:16 <cloudon> +1
14:03:44 <russellb> so i suppose the next step is some hosted instance of the script that keeps it updated + a redirect?
14:03:48 <bauzas> so, another nice-to-have feature is requesting the URL-shortening service
14:03:49 <sgordon> so the main thing now is making it run on a regular basis and update a tiny url right?
14:03:59 <bauzas> and host it indeed
14:04:09 <bauzas> sgordon: russellb: +1
14:04:09 <sgordon> v. handy given today is the nova review day though
14:04:14 <russellb> have a place to put it?
14:04:23 <bauzas> frankly not
14:04:26 <russellb> ok
14:04:31 <russellb> i can give you a small vm on rackspace if you want
14:04:39 <bauzas> would be awesome
14:04:41 <russellb> OK
14:04:55 <russellb> #action russellb to spin up a cloud VM for bauzas to host NFV dashboard script
14:05:11 <bauzas> anyone can get the project and test it
14:05:12 <russellb> bauzas: i'll get with you about this outside of the meeting
14:05:15 <bauzas> bugs are welcome
14:05:22 <bauzas> russellb: sure
14:05:29 <sgordon> a few people drifting in late, here is the output we are discussing: http://bit.ly/1iFdldx
14:05:40 <russellb> and it's awesome :)
14:05:49 <nijaba> bauzas: where is the code?
14:05:56 <sgordon> i havent dug into it enough to comment on the arrangement of the dash, but looks pretty good to me
14:06:05 <sgordon> nijaba, link to github up there somewhere ^^
14:06:08 <bauzas> nijaba: https://github.com/sbauza/gerrit-dashboard-nfv
14:06:08 <russellb> https://github.com/sbauza/gerrit-dashboard-nfv
14:06:12 <smazziotta> +1
14:06:20 <nijaba> sorry, missed it
14:06:25 <nijaba> thanks
14:06:25 <bauzas> no pb
14:06:28 <russellb> bauzas: https://xkcd.com/208/ ?
14:06:29 <russellb> :)
14:06:40 <russellb> that's basically the summary of the code? heh
14:06:46 <russellb> except hopefully not perl.
14:06:56 <bauzas> russellb: ^^
14:07:09 <russellb> Python, of course :)
14:07:12 <sgordon> ok
14:07:14 <russellb> next item!
14:07:16 <bauzas> russellb: and some DOM parser :)
14:07:19 <sgordon> so that is probably covered for now?
14:07:31 <russellb> cloudon posted another use case to the ML
14:07:36 <sgordon> the next action from last week was cloudon to provide a use case
14:07:36 <sgordon> yes
14:07:41 <russellb> #link http://lists.openstack.org/pipermail/openstack-dev/2014-June/038556.html
14:07:45 <cloudon> Any feedback?
14:08:24 <sgordon> cloudon, just looking at the HA requirements
14:08:30 <sgordon> seems to be the main gap?
14:08:31 <russellb> so the affinity requirement
14:08:36 <russellb> i think that's already covered?
14:08:49 <russellb> you can boot VMs into an (anti-)affinity group
14:08:52 <sgordon> sort of
14:08:57 <sgordon> this looks like having a second layer
14:09:03 <sgordon> where you would have two groups with affinity
14:09:10 <sgordon> but then placement of those groups with anti-affinity
14:09:14 <russellb> ohhh..
14:09:16 <sgordon> so all VMs from group A on one host
14:09:22 <sgordon> and all VMs from group B on another
14:09:27 <russellb> but anti-affinity between A and B groups
14:09:28 <sgordon> and have that be enforced by the scheduler
14:09:35 <russellb> right.
14:09:38 <cloudon> Not quite - groups wouldn't necessarily have affinity within them
14:09:49 <sgordon> cloudon, hmm ok
14:09:52 <russellb> i see
14:09:54 <cloudon> Key thing is to be able to say a single host failure won't knock out more than x% of my VMs
14:09:54 <sgordon> cloudon, so just not allowed to overlap?
14:09:58 <russellb> but at least anti-affinity between 2 groups ...
14:10:18 <imendel> assume it's not just %
14:10:19 <cloudon> Don't want to go as far as saying all VMs must be on different hosts - limits scale for one thing...
14:10:25 <russellb> right
14:10:31 <russellb> but i understand now
14:10:37 <russellb> that's good input
14:10:43 <cloudon> Think it's a fairly generic req for N+k pool architectures
14:10:50 <sgordon> cloudon, right - so the soft anti-affinity proposal helps here i think
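For reference, the single-group case russellb mentions is already expressible with the existing server group API; the image, flavor, and UUID below are placeholders:

    nova server-group-create ha-pool anti-affinity
    nova boot --image <image> --flavor <flavor> \
        --hint group=<server-group-uuid> vm-1

The gap described here is a pool-wide policy, e.g. "no single host may carry more than x% of the group's VMs", which none of these primitives express.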
14:10:50 <adrian-hoban> cloudon: Do you include any latency considerations for this application?
14:10:51 <imendel> but within one "tier" you have the need for anti-affinity?
14:10:56 * russellb also got sidetracked by project clearwater ... heh
14:10:59 <sgordon> cloudon, doesnt handle the buckets approach though
14:11:08 <russellb> interesting stuff (i like voip stuff, too)
14:11:24 <cloudon> Latency not vital so long as not silly
14:11:39 <bauzas> cloudon: I think you can make use of aggregates isolation with tenants
14:12:01 <cloudon> V open to other approaches, particularly if it can be done with existing function
14:12:08 <nijaba> should the scheduler understand the group anti-affinity semantic, or should it just be possible to say for each VM start that it should not be on the same host as this list of VMs?
14:12:10 <russellb> cloudon: for control plane application you mean, right?
14:12:29 <cloudon> russellb: yes, for control plane; clearly not for data plane...
14:12:33 <adrian-hoban> cloudon: I think the CPU pinning/NUMA type work helps to ensure the latency is predictable
14:12:33 <russellb> right :)
14:12:46 <russellb> cloudon: not sure how we'd implement, but it's definitely a gap
14:12:58 <bauzas> IMHO, what we call group anti-affinity is managed thru aggregates, i.e. don't spin that VM on that aggregate
14:13:07 <sgordon> adrian-hoban, right - it's not QoS as such but optimizing everything in the infrastructure to provide deterministic performance
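The NUMA work being referred to proposes expressing guest topology through flavor extra specs, roughly as below. The syntax comes from the Juno spec proposals and may still change, and the flavor name is a placeholder:

    nova flavor-key m1.nfv set hw:numa_nodes=2
    nova flavor-key m1.nfv set hw:numa_cpus.0=0,1 hw:numa_cpus.1=2,3
    nova flavor-key m1.nfv set hw:numa_mem.0=2048 hw:numa_mem.1=2048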
14:13:18 <nijaba> bauzas: makes sense
14:13:27 <cloudon> We had some thoughts on a blueprint but were waiting for scheduler re-factoring...
14:13:44 <sgordon> yeah that is a good use case
14:14:09 <smazziotta> when do we expect the scheduler re-factoring to be completed?
14:14:14 <russellb> that's true, could do something with aggregates
14:14:16 <adrian-hoban> sgordon: Exactly, so even if scale out is working well, we typically need deterministic behaviours.
14:14:20 <russellb> a bit more static than i'd like though
14:14:43 <bauzas> well, I was saying that because the notion of host grouping is aggregates
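A sketch of the aggregate-based approach bauzas is referring to, using the existing AggregateMultiTenancyIsolation scheduler filter; aggregate, host, and tenant identifiers are placeholders:

    # nova.conf on the scheduler: add the filter to your existing list
    scheduler_default_filters = ...,AggregateMultiTenancyIsolation

    nova aggregate-create rack-a
    nova aggregate-add-host rack-a compute-01
    nova aggregate-set-metadata rack-a filter_tenant_id=<tenant-uuid>

This pins a tenant's instances to the hosts in the aggregate, which is also why it is "more static" than a per-boot policy: the grouping is defined up front by the operator.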
14:14:45 <russellb> on a related note, I shared our wiki page in the nova meeting last week, seemed to get a good response
14:14:57 <sgordon> bauzas, re. smazziotta's question? ^^
14:14:59 <russellb> devs really appreciated the organized info -- overview, usecases, and list of work
14:15:09 <russellb> so this stuff is helping get stuff done
14:15:11 <bauzas> well, good question, no clear response there
14:15:16 <bauzas> the sooner the better
14:15:38 <bauzas> I mean, we have some blueprints in progress, we expect to do good progress by end of Juno
14:15:57 <cloudon> Any suggestions for good nova folk to discuss this particular HA use case with?
14:16:00 <bauzas> but I'm unsure if we can just spin out Gantt by beginning of K
14:16:09 <russellb> cloudon: bauzas :)
14:16:15 <cloudon> cool
14:16:22 <ian_ott> what do people think about server groups, could be considered for anti-affinity support - needs work
14:16:33 <russellb> ian_ott: yeah, works for some basic cases
14:16:36 <smazziotta> we have a lot of BPs that depend on the scheduler re-factoring being completed. What is the NFV sub-team's recommendation? Do we wait for the refactoring or do something else?
14:16:50 <russellb> #topic blueprints
14:16:56 <russellb> we're moving to that topic anyway
14:16:58 <sgordon> ian_ott, the problem is there is a significant jump required to get to what people want on top of the basic functionality
14:16:59 <russellb> smazziotta: that's a good question
14:17:09 <russellb> IMO, the breakout has to come first
14:17:17 <sgordon> ian_ott, which is to be able to change the group membership on a running vm and have the policy apply straight away
14:17:18 <bauzas> smazziotta: reviews on the scheduler breakout are worth doing then :)
14:17:19 <russellb> so, asking how you can help speed it up is the best next step
14:17:22 <sgordon> ian_ott, using migration etc
14:17:33 <russellb> bauzas <-- best contact to the status of that work
14:17:43 <sgordon> ian_ott, also some concerns about the way the API is exposed
14:18:02 <bauzas> scheduler team is doing weekly meetings each Tues @3pm UTC
14:18:16 <sgordon> (jay started a thread on this a while back but we never really got to it in the relevant summit sessions)
14:18:44 <bauzas> if we consider here that sched breakout is high priority (which I agree with), then I would love to see people helping us review patches :)
14:18:56 <russellb> yeah, that's my opinion at least
14:19:00 <bauzas> or contributing if people are willing :)
14:19:00 <russellb> it's not the sexy feature work
14:19:06 <russellb> but a necessary pre-requisite
14:19:18 <russellb> so other blueprint news ...
14:19:22 <russellb> some specs have been approved!
14:19:26 <russellb> SR-IOV for Nova was approved
14:19:34 <russellb> a couple of the NUMA related blueprints were approved
14:19:47 <nijaba> \o/
14:19:52 <russellb> today is also a nova spec review day, so should be a lot of movement on others today
14:20:01 <russellb> hop in #openstack-nova if you'd like to participate / discuss
14:20:31 <russellb> some organization on this etherpad ... https://etherpad.openstack.org/p/nova-juno-spec-priorities
14:20:37 <russellb> i added links to NFV stuff on there
14:20:44 <smazziotta> bauzas: let's discuss off-line
14:20:45 <russellb> might break that out in more detail
14:21:19 <russellb> any specific blueprints people here would like to discuss?
14:21:20 <bauzas> russellb: oh thanks for the link, totally missed it
14:21:35 <russellb> really my priority is to dive into more of the nova ones and review ...
14:21:47 <sgordon> ah yes that reminds me
14:21:50 <russellb> if you look at the NFV dashboard, most open specs have been reviewed actually
14:21:53 <russellb> and need revision
14:22:07 <sgordon> there are some proposed dates to be aware of in Nova
14:22:16 <sgordon> Jun 25 (-10): Spec review day - TODAY! (https://etherpad.openstack.org/p/nova-juno-spec-priorities)
14:22:16 <sgordon> Jul  3 (-9): Spec proposal freeze
14:22:16 <sgordon> Jul 10 (-8): Spec approval freeze
14:22:16 <sgordon> Jul 24 (-6): Juno-2
14:22:43 <sgordon> have any of the other relevant projects started laying out similar dates?
14:22:53 <bauzas> smazziotta: sure
14:23:00 <sgordon> #link http://lists.openstack.org/pipermail/openstack-dev/2014-June/038475.html
14:23:04 <bauzas> smazziotta: (about discussing off-meeting)
14:23:14 <sgordon> russellb, that's a good point
14:23:17 <russellb> sgordon: i would expect similar dates for the other projects
14:23:22 <russellb> there's an attempt to coordinate on dates like this
14:23:24 <sgordon> russellb, maybe we should go through some of those if the owners are here
14:23:27 <russellb> was discussed last night in cross project meeting
14:23:31 <sgordon> russellb, and see if they need assistance revising
14:23:35 <russellb> sgordon: sure
14:24:23 <sgordon> where to start is the question...
14:24:33 <sgordon> #link https://review.openstack.org/97716
14:24:39 <sgordon> 2 interfaces, 1 net
14:24:46 <russellb> sgordon: will let you drive the list :)
14:24:54 <sgordon> looks like ijw is not here today unfortunately
14:25:06 <russellb> yeah, but at least got some review from john
14:25:08 <sgordon> i picked this one as it was one of the 3-4 on that top list
14:25:10 <russellb> and he seems good with it in general
14:25:13 <sgordon> right
14:25:19 <sgordon> just needs a new iteration
14:25:23 <russellb> yep
14:25:28 <russellb> i see ijw responded to the comments ..
14:25:51 <sgordon> yes
14:26:04 <sgordon> he has a couple of others that are in similar state, i think we can skip those
14:26:13 <russellb> ok
14:26:18 <russellb> really, most are in that state ...
14:26:25 <sgordon> let's talk about the -2s...
14:26:28 <russellb> so in general, if you own a spec, check the feedback and iterate :)
14:26:30 <russellb> sgordon: ah, good idea
14:26:50 * bauzas so loves iterating on -1s... :)
14:26:56 <sgordon> so https://review.openstack.org/#/c/87978/
14:27:01 <russellb> heh, that's how we work
14:27:14 <sgordon> nic state aware scheduling, there was an attempt at reworking this based on the initial rejection
14:27:19 <sgordon> which is still the -2 on it
14:27:35 <sgordon> bauzas, looks like you had some comments/questions about it
14:27:41 <sgordon> since it was last uploaded
14:27:59 <sgordon> no updates from the author in response though
14:28:04 <russellb> sgordon: not sure the update addresses the fundamental rejection though
14:28:05 <bauzas> indeed
14:28:28 <sgordon> alanm doesn't appear to be here, not sure what balazs' nick is
14:28:43 <russellb> wasn't the rejection ... "we don't want this in nova at all" ?
14:29:01 <adrian-hoban> Not sure I understand why up/down state is not appropriate for Nova
14:29:04 <bauzas> sgordon: no clear disagreement with my review, just minor details to explain more
14:29:24 <adrian-hoban> ...up/down NIC state...
14:29:25 <russellb> to be clear, understand the importance of the use case, just think it should be handled outside of nova by your system monitoring tool of choice (nagios or whatever)
14:29:32 <russellb> dansmith: around?
14:29:32 <sgordon> adrian-hoban, the suggestion on mailing list was that this is the domain of existing monitoring tools
14:29:44 <dansmith> russellb: yep
14:29:52 <russellb> dansmith: discussing https://review.openstack.org/#/c/87978/3/specs/juno/nic-state-aware-scheduling.rst
14:29:59 <sgordon> adrian-hoban, i personally am not convinced that openstack couldnt/shouldnt handle this but it may not necessarily be *within* nova
14:30:03 <sgordon> so that's a little unclear
14:30:16 <sgordon> the other thing is there are differing preferences on what to actually do if the NIC is down
14:30:22 <russellb> sgordon: the ML suggestion you refer to is my preference
14:30:30 <sgordon> disable the host, or just remove ports from the pool for passthrough
14:30:35 <sgordon> depending on which NIC it is of course
14:30:43 <dansmith> IMHO, if the monitoring is out of nova,
14:30:52 <dansmith> then it's easier for us to allow the deployer to do what they want if the nic link goes down
14:30:55 <bauzas> dansmith: +1
14:30:57 <russellb> right, and having it be logic outside of nova gives you that flexibility, would rather not build all this policy into nova
14:31:01 <sgordon> from a libvirt perspective there are patches posted upstream in libvirt to at least expose this information there
14:31:06 <sgordon> but you still need to collect and act on it
14:31:15 <dansmith> if we provide tools to extract it from the network pool, then you can have the response script do what you want
14:31:18 <dansmith> or just report it
14:31:48 <russellb> right
14:31:53 <russellb> or disable the host entirely if you want
14:31:55 <russellb> or whatever.
14:31:57 <dansmith> right
14:32:00 <dansmith> or make it blue
14:32:10 <russellb> or go ahead and STONITH that sucker
14:32:18 <dansmith> I can see us reporting the link state in some host details blob or something
14:32:26 <dansmith> if we want that to avoid the need for a nagios infra,
14:32:29 <sgordon> dansmith, so that may be the question i have actually
14:32:37 <dansmith> but in reality, something like nagios is required anyway for a real deployment
14:32:41 <russellb> dansmith: that's what the spec is proposing now
14:32:44 <sgordon> dansmith, because i dont think there currently is a plan to easily be able to remove them from the pool
14:32:50 <dansmith> russellb: reporting it but not doing anything?
14:32:56 <russellb> let me check ...
14:33:03 <dansmith> I haven't read the updated one yet
14:33:10 <russellb> doing something
14:33:14 <russellb> scheduler filter
14:33:32 <russellb> so, report it via servicegroup junk
14:33:34 <dansmith> I don't really think we should even report it
14:33:37 <russellb> right
14:33:48 <russellb> i think we're crossing into system monitoring territory
14:33:49 <dansmith> but it's less offensive than encoding policy based on the status to me
14:33:52 <dansmith> yeah
14:34:03 <dansmith> nagios has flapping detection and all kinds of stuff
14:34:12 <russellb> yeah..
14:34:16 <dansmith> that are necessary for hysteresis that we don't want to have to implement, IMHO
14:34:18 <russellb> so i think i'm going to add a -2 here as well
14:34:25 <dansmith> and if we don't, we'll suck compared to it
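As a concrete example of the division of labour being argued for, a Nagios-style event handler could watch NIC state and react with existing Nova commands. Everything below (the check wiring, the argument order) is hypothetical glue, not anything from the spec:

    #!/bin/sh
    # Hypothetical event handler: arguments are the service state, the
    # state type, and the compute host, per the usual Nagios macros
    # ($SERVICESTATE$ $SERVICESTATETYPE$ $HOSTNAME$). Assumes admin
    # credentials for novaclient are set in the environment.
    STATE=$1; STATE_TYPE=$2; COMPUTE_HOST=$3

    # Only act on confirmed failures, so Nagios' own retry and flap
    # detection logic provides the hysteresis dansmith mentions.
    if [ "$STATE" = "CRITICAL" ] && [ "$STATE_TYPE" = "HARD" ]; then
        # Pull the host out of scheduling; an operator (or another
        # handler) re-enables it with "nova service-enable" later.
        nova service-disable "$COMPUTE_HOST" nova-compute
    fi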
14:34:53 <russellb> sgordon: not sure that's the outcome you wanted, heh ...
14:35:13 <russellb> clear direction forward should be top priority :)
14:35:20 <bauzas> well, I'm thinking about this usecase as a possible good fit for a big Scheduler like Gantt
14:35:25 <bauzas> but not Nova
14:35:32 <russellb> bauzas: maybe ...
14:35:39 <bauzas> I mean, notifications have to be done maybe thru Nagios indeed
14:35:41 <sgordon> russellb, not exactly but i really said lets look at the -2s
14:35:47 <sgordon> russellb, and then that was the only one
14:36:20 <sgordon> russellb, i did try to bump the thread about this a week or two ago to discuss nagios-based approaches to try and flesh out what we would need from the API but didn't get much interest
14:36:57 <sgordon> (again mainly talking about managing the pool of devices for passthrough)
14:38:14 <russellb> yeah
14:38:17 <russellb> that's all config based
14:38:19 <russellb> so kind of messy
14:38:42 <russellb> but we need an API for it anyway
14:38:50 <russellb> this would be another use case for the API
14:38:56 <bauzas> hence Gantt :)
14:38:58 <russellb> not just initial provisioning, but ongoing management
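The config in question is the compute node's PCI passthrough whitelist, which is why ongoing management is messy: dropping a failed port from the pool means editing nova.conf and restarting nova-compute. A Juno-era example, with illustrative device IDs:

    # nova.conf on the compute node
    pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10fb"}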
14:39:18 <russellb> let's move on, can discuss more blueprints in open discussion if someone wants to
14:39:23 <russellb> #topic Paris mud-cycle spring next week
14:39:38 <russellb> looks like there is a small group of people interested in NFV attending this meetup
14:39:45 <dansmith> mud cycle?
14:39:50 <dansmith> sounds cool
14:39:51 <bauzas> lol
14:39:53 <russellb> yeah!
14:39:57 <russellb> #undo
14:39:58 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x2ddc350>
14:40:05 <russellb> #topic Paris mid-cycle spring next week
14:40:13 <bauzas> spring ? :)
14:40:15 <lukego> I’m planning to be in Paris for this and it would be neat to talk with more OpenStack/NFV hackers there :-)
14:40:15 <russellb> #link https://wiki.openstack.org/wiki/Sprints/ParisJuno2014
14:40:25 <lukego> Looks like 5 people planning to go so far (not all confirmed)
14:40:29 <russellb> bauzas: dangit
14:40:30 <russellb> SPRINT
14:40:32 <russellb> :)
14:40:48 <russellb> #topic open discussion
14:40:51 <russellb> anything else today?
14:41:01 <russellb> as usual, please help review specs and code
14:41:08 <russellb> and if you own active specs or code, please iterate based on feedback
14:41:14 <russellb> most specs are waiting for updates from submitter right now
14:41:18 <sgordon> right
14:41:23 <sgordon> v. important to get that done asap
14:41:32 <sgordon> as we are quickly approaching crunch time for this cycle
14:41:43 <sgordon> as far as approval goes anyway
14:41:47 <russellb> right
14:41:56 <russellb> if specs aren't approved in the next week or two, they will be deferred to K
14:42:02 <MichaelB> anything worthwhile to communicate to ETSI NFV folks?
14:42:08 <cloudon> Do we need more example use cases, or are the detailed control & data plane app examples we now have sufficiently representative?
14:42:15 <lukego> Looks like we have two proposals for userspace vhost at the moment: vhost-user (the new QEMU feature) and also DPDK’s own solution. Got to harmonize these somehow
14:42:24 <sgordon> MichaelB, there was a request previously for access to drafts of the ETSI gap analysis work
14:42:37 <sgordon> MichaelB, particularly around OpenStack gaps
14:42:49 <sgordon> MichaelB, obviously some of the members are here and filtering info through
14:42:50 <MichaelB> gap analysis is incomplete, not ready for publishing yet in ETSI
14:42:59 <sgordon> MichaelB, but would be good for everyone to be on the same page
14:43:05 <sgordon> MichaelB, not even via the draft documents page?
14:43:07 <MichaelB> we are now in consistency review, followed by WG approval
14:43:18 <MichaelB> then working on tightening the gap analysis
14:43:38 <sgordon> lukego, do you have the links handy
14:43:44 <sgordon> lukego, i think i only saw one of them
14:43:44 <lukego> I have renamed VIF_SNABB to VIF_VHOSTUSER. Hope it will suit everybody for userspace vhost going forward
14:43:53 <MichaelB> so far, what we have is WG-level, partial/incomplete analysis
14:43:57 <adrian-hoban> Multiple ETSI-NFV working groups will be looking at OpenStack from different perspectives.
14:44:13 <MichaelB> there is something against OpenStack...but rudimentary
14:44:14 <lukego> sgordon: QEMU vhost-user VIF (previously VIF_SNABB) is here: https://review.openstack.org/#/c/96138/
14:44:33 <sgordon> #link https://review.openstack.org/#/c/96138/
14:44:46 <MichaelB> I can check with the MANO chairs if we want to share this at this stage, or wait till it is in better shape
14:44:59 <sgordon> MichaelB, that seemed to be the main request i am aware of
14:45:10 <sgordon> MichaelB, concern is that we're duplicating each others efforts on that front somewhat
14:45:51 <lukego> sgordon: The DPDK one is here: https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost
14:45:53 <MichaelB> sgordon - what do u mean? Is the gap analysis happening somewhere else other than ETSI NFV?
14:45:59 <sgordon> #link https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost
14:46:09 <lukego> noted #link for future :)
14:46:27 <sgordon> MichaelB, much of the discussion in this group has been around identifying NFV use cases and the gaps they expose
14:46:41 <sgordon> MichaelB, driven in part by the existing ETSI NFV publications
14:46:52 <lukego> My perspective is that vhost-user is the standard feature for the future (now upstream in QEMU) that everybody will use (?). but the DPDK-OVS people may want to support the other one anyway in the short term? need to get in sync
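For context on what vhost-user looks like one level down: libvirt (as of 1.2.7) models it as its own interface type, with QEMU acting as a client on a UNIX socket served by the userspace switch. The socket path below is made up:

    <interface type='vhostuser'>
      <source type='unix' path='/var/run/vhostuser/vm-1.sock' mode='client'/>
      <model type='virtio'/>
    </interface>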
14:46:56 <MichaelB> that is fine. I don't think that's a duplication with the gap analysis I am talking about
14:47:09 <MichaelB> we should continue to do what u started HERE
14:47:12 <adrian-hoban> From what I have seen in drafts, there are new items to be added to the list we are tracking
14:47:37 <sgordon> adrian-hoban, right that is my expectation
14:47:46 <MichaelB> adrian-hoban ... yes, I think we should bring into here any deltas
14:47:55 <adrian-hoban> lukego: The concern relates to supporting folks that are using earlier versions of qemu
14:48:10 <sgordon> adrian-hoban, obviously the earlier we get and iterate on deltas the sooner we can do something about them
14:48:20 <sgordon> adrian-hoban, but the plate for juno is getting full anyway :)
14:48:34 <lukego> adrian-hoban: That makes sense. Question in my mind is how important that is? (I noticed that other OVS-DPDK related features, like security groups, were already deferred until the K cycle, so do we still need this VIF in Juno?)
14:50:25 <lukego> adrian-hoban: Should we treat the two VIFs separately or would there be a practical way to combine them?
14:52:26 <lukego> adrian-hoban: more to the point — I want to push forward and get VIF_VHOSTUSER into Juno, because it will enable Deutsche Telekom to avoid deploying a fork. Does this effort step on you guys’ toes somehow or is this a positive thing for you too (for future use)?
14:52:39 <adrian-hoban> lukego: It would be preferred to have it in Juno so that we can address the time-to-market concerns of waiting for qemu 2.1 support to get into the distros etc...
14:52:43 <pczesno> lukego: we want to use your work to support vhost user for ovs
14:53:00 <adrian-hoban> lukego: I think that is positive too
14:53:02 <pczesno> lukego: positive thing
14:53:11 <adrian-hoban> Go for it
14:53:24 <lukego> ok good. I am happy to present both blueprints as best we can and ideally get both of them in. I don’t immediately see a way to combine them, but obviously that would be better.
14:53:35 <adrian-hoban> Need to figure out how to support older versions of qemu
14:53:55 <russellb> i think the 2 could reference each other
14:54:17 <russellb> sounds like if there are separate VIFs, they should stay as separate specs
14:54:56 <lukego> btw I have just run through and renamed VIF_SNABB to VIF_VHOSTUSER everywhere (based on sensible feedback) — please let me know if I missed some references somewhere
14:55:01 <lukego> russellb: cool
14:55:42 <russellb> will try to take a look in more detail though
14:55:53 <sgordon> adrian-hoban, i think there is a wider discussion there that we've never really nailed down
14:56:07 <sgordon> wrt support for a wide range of libvirt/qemu versions and what is actually tested in the gate
14:56:26 <adrian-hoban> lukego: We will contribute some updates to extend your work to support the DPDK use case
14:56:47 <adrian-hoban> for qemu 2.1
14:56:51 <lukego> adrian-hoban: any chance you guys fancy a visit to Paris next week btw? :-)
14:57:06 <lukego> adrian-hoban: (you’re based in the UK?)
14:57:30 <lukego> adrian-hoban: awesome
14:57:34 <adrian-hoban> Close, I'm in Ireland
14:58:00 <adrian-hoban> Don't think I can make it
14:58:52 <lukego> I’d love to understand more about how OVS-DPDK relates to OVS plugin and OpenDaylight and so on. (Like: do you plan to implement Neutron APIs directly in Python code in OpenStack or always use an external controller?) if there’s a link please send :)
14:59:35 <lukego> adrian-hoban: I’m guessing external SDN controllers will be the big winners of the OVS-DPDK work?
14:59:46 <lukego> (but I digress.)
14:59:48 <russellb> looks like we're out of time
14:59:56 <russellb> discussion can continue in #openstack-nfv if you'd like
15:00:00 <russellb> thanks everyone!
15:00:01 <nijaba> Thanks for hosting russellb!
15:00:07 <russellb> #endmeeting