18:03:18 <SumitNaiksatam> #startmeeting networking_policy
18:03:19 <openstack> Meeting started Thu Jul 30 18:03:18 2015 UTC and is due to finish in 60 minutes.  The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:03:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:03:22 <openstack> The meeting name has been set to 'networking_policy'
18:03:38 <SumitNaiksatam> #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#July_30th.2C_23rd_2015
18:04:00 <SumitNaiksatam> #topic Bugs
18:04:00 <Yi> Hi
18:04:11 <rkukura> hi
18:04:35 <igordcard> hi rkukura Yi
18:04:40 <SumitNaiksatam> a few new ones that need immediate attention
18:04:44 <SumitNaiksatam> #link https://bugs.launchpad.net/group-based-policy/+bug/1479169
18:04:46 <openstack> Launchpad bug 1479169 in Group Based Policy "GBP default security group allows outbound access to the Internet" [Undecided,In progress] - Assigned to Ivar Lazzaro (mmaleckk)
18:05:13 <SumitNaiksatam> #link https://review.openstack.org/207250
18:05:20 <SumitNaiksatam> ivar-lazzaro: has posted the patch ^^^
18:05:25 <SumitNaiksatam> ivar-lazzaro: there?
18:06:15 <ransari> hi
18:06:43 <igordcard> hi ransari
18:06:45 <SumitNaiksatam> so the issue we were seeing is that we were opening up the SG rules for all egress traffic by default
18:07:03 <SumitNaiksatam> this meant that you didn't have to consume an external policy to get connectivity to the external world
18:07:15 <SumitNaiksatam> ivar-lazzaro: is trying to address that in his patch
18:07:18 <SumitNaiksatam> ransari: hi
18:07:38 <SumitNaiksatam> i was hoping to discuss the solution he has proposed in the patch
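[Editor's note: the following is a hypothetical sketch, not ivar-lazzaro's actual patch. It illustrates the kind of change under discussion: stripping the unrestricted egress rules from a default security-group rule set so that egress is only opened once an external policy is consumed. The rule dictionaries mimic Neutron security-group rule fields, but the data and function names are illustrative only.]

```python
# Hypothetical sketch of removing allow-all egress from a default SG rule
# set; rule dicts loosely follow Neutron's security-group rule fields.

DEFAULT_RULES = [
    {"direction": "ingress", "protocol": "tcp", "remote_ip_prefix": "10.0.0.0/24"},
    {"direction": "egress", "protocol": None, "remote_ip_prefix": "0.0.0.0/0"},
    {"direction": "egress", "protocol": None, "remote_ip_prefix": "::/0"},
]

def drop_default_egress(rules):
    """Remove unrestricted egress rules; keep everything else."""
    def is_open_egress(rule):
        return (rule["direction"] == "egress"
                and rule.get("remote_ip_prefix") in ("0.0.0.0/0", "::/0"))
    return [r for r in rules if not is_open_egress(r)]

restricted = drop_default_egress(DEFAULT_RULES)
print(len(restricted))  # only the ingress rule survives
```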
18:07:47 <SumitNaiksatam> there was another bug which was posted today:
18:07:58 <SumitNaiksatam> #link https://bugs.launchpad.net/group-based-policy/+bug/1479706
18:07:59 <openstack> Launchpad bug 1479706 in Group Based Policy "GBP Service chaining errors are not visible to the user because of not having a status attribute" [Undecided,New]
18:08:06 <SumitNaiksatam> i dont see magesh around
18:08:13 <SumitNaiksatam> but i believe ransari is here
18:08:39 <SumitNaiksatam> some of us discussed adding the status field for the resources earlier
18:09:33 <SumitNaiksatam> the bug seems to suggest that you want to present an async interface to the user with this status field
18:09:56 <SumitNaiksatam> ransari: does the service chain API become difficult to use without this addition?
18:10:24 <ransari> yes  to some extent
18:10:42 <ransari> I think it has an impact in terms of usability
18:11:25 <SumitNaiksatam> ransari: is it the case that with service chain API the backend operation is almost always going to be a long running operation?
18:12:16 <ransari> yes
18:12:22 <SumitNaiksatam> ransari: okay
18:12:23 <igordcard> SumitNaiksatam: even more so when real VMs are used as nodes
18:12:30 <SumitNaiksatam> igordcard: true
18:12:58 <SumitNaiksatam> however I believe rkukura has concerns with the usability of completely async API too
18:13:36 <ransari> can we review what those concerns are?
18:14:20 <rkukura> my concern is just to keep simple things simple
18:14:54 <SumitNaiksatam> ransari: mainly that forcing the user to poll or check the status for a resource’s availability makes the system more difficult to use
18:15:32 <rkukura> many client apps don’t need/want concurrency - they can’t do the next step until the prior one completes, so a sync API serves them well
18:16:28 <ransari> so is the current expectation that the service node driver will poll for status and fail an operation if there are errors - for example when launching a VM?
18:17:50 <rkukura> I’m not arguing we need 100% guarantee that everything in the data plane is synchronized before calls return, just that the needed results can be returned
18:18:45 <SumitNaiksatam> rkukura: but currently there is no way for the user to know, from what is returned, that the service chain instantiation has actually failed
18:20:19 <SumitNaiksatam> i think this is probably a longer discussion
18:20:38 <rkukura> Even if the VM gets created, it could fail later, so status attributes seem necessary
18:20:56 <SumitNaiksatam> rkukura: right
18:21:11 <rkukura> And we should always try to detect user errors before returning
18:21:33 <SumitNaiksatam> rkukura: ransari: can we set some time aside to have this discussion offline on #openstack-gbp?
18:21:40 <ransari> sure
18:21:40 <rkukura> I’m not so sure about resource exhaustion errors
18:22:14 <SumitNaiksatam> okay i will try to coordinate with magesh (who posted the bug), will let others know
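[Editor's note: a hypothetical sketch of the client-side cost being weighed above. With a fully async API plus a status attribute, a caller that cannot proceed until the resource is ready has to poll until it leaves its transient state. All names (`wait_for_status`, the `ACTIVE`/`ERROR` values) are illustrative, not from GBP's actual API.]

```python
# Hypothetical polling helper a client of a fully async API would need.
import time

def wait_for_status(get_status, target="ACTIVE", failed="ERROR",
                    timeout=60, interval=0.01):
    """Poll get_status() until it returns target, raises on failed/timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == target:
            return status
        if status == failed:
            raise RuntimeError("resource entered %s state" % failed)
        time.sleep(interval)
    raise TimeoutError("resource never became %s" % target)

# Simulated backend that becomes ACTIVE after a couple of polls:
states = iter(["PENDING", "PENDING", "ACTIVE"])
print(wait_for_status(lambda: next(states)))  # ACTIVE
```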
18:22:56 <SumitNaiksatam> there is another user request: #link https://bugs.launchpad.net/group-based-policy/+bug/1479460
18:22:57 <openstack> Launchpad bug 1479460 in Group Based Policy "GBP create group API does not allow specification of DNS nameservers" [High,In progress] - Assigned to Magesh GV (magesh-gv)
18:23:24 <SumitNaiksatam> and the patch has been posted: #link https://review.openstack.org/207383
18:23:53 <SumitNaiksatam> rkukura: please eyeball the above when you have time
18:23:58 <rkukura> sure
18:24:34 <SumitNaiksatam> rkukura: we took the conf approach to solve this as a shorter term solution
18:24:52 <SumitNaiksatam> since the user is fine having a global setting
18:25:07 <rkukura> ok
18:25:23 <SumitNaiksatam> we can later evolve this to allow tenant or group-specific granularity
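[Editor's note: a hypothetical sketch of the shorter-term conf approach just discussed: one global nameserver setting rather than a per-tenant or per-group API attribute. The section and option names here are illustrative, not the actual patch's; stdlib configparser stands in for oslo.config.]

```python
# Hypothetical global dns_nameservers conf option and its parsing.
import configparser

CONF_TEXT = """
[resource_mapping]
dns_nameservers = 8.8.8.8, 8.8.4.4
"""

def global_nameservers(conf_text):
    """Return the globally configured nameserver list (possibly empty)."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    raw = parser.get("resource_mapping", "dns_nameservers", fallback="")
    return [s.strip() for s in raw.split(",") if s.strip()]

print(global_nameservers(CONF_TEXT))  # ['8.8.8.8', '8.8.4.4']
```

Evolving this toward tenant or group granularity would mean moving the setting from conf into an API attribute, which is the longer-term direction mentioned above.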
18:25:31 <SumitNaiksatam> any other bugs that we need to discuss?
18:26:07 <SumitNaiksatam> #topic Testing
18:26:26 <SumitNaiksatam> update on the rally testing - i have added a new job for the rally tests
18:27:17 <SumitNaiksatam> so for example if you see: #link https://review.openstack.org/#/c/207250/
18:27:42 <SumitNaiksatam> you will see a new non-voting job: gate-group-based-policy-dsvm-rally
18:28:08 <SumitNaiksatam> i need to fix a small issue in the post-hook script
18:28:19 <SumitNaiksatam> after that you should hopefully see it running properly
18:28:32 <SumitNaiksatam> with this new job, we can start planning to move the tests in-tree
18:28:53 <SumitNaiksatam> #topic Packaging
18:29:06 <SumitNaiksatam> rkukura: anything to discuss or update at your end?
18:29:19 <rkukura> nothing new yet
18:30:08 <SumitNaiksatam> rkukura: okay
18:31:09 <SumitNaiksatam> #topic Client stable branch
18:31:32 <SumitNaiksatam> rkukura: we have this one pending - Service Profile backport
18:32:03 <rkukura> Forgot about that - I can do it.
18:32:34 <SumitNaiksatam> rkukura: thanks
18:32:56 <SumitNaiksatam> #topic New features
18:33:22 <SumitNaiksatam> please review #link: https://review.openstack.org/#/c/203226/
18:33:38 <SumitNaiksatam> “Plumbing Terminology” ^^^
18:33:47 <SumitNaiksatam> i know a few people already have
18:34:13 <SumitNaiksatam> igordcard: have your comments been adequately addressed?
18:34:49 <igordcard> SumitNaiksatam: yes
18:35:04 <SumitNaiksatam> igordcard: great
18:35:20 <SumitNaiksatam> igordcard: anything you want to discuss regarding: https://review.openstack.org/#/c/168733/ (Traffic Steering NCP Plumber)
18:35:25 <igordcard> SumitNaiksatam: for now I am putting the transparent plumbing type inside the traffic steering plumber, and then we'll see how it merges into the TScP
18:35:38 <SumitNaiksatam> igordcard: okay
18:36:00 <igordcard> SumitNaiksatam: yes, I have made a simple node driver called manual node driver, which is like a nova driver but is manually fed with an existing neutron port already attached to a VM
18:36:12 <SumitNaiksatam> igordcard: okay
18:36:19 <SumitNaiksatam> igordcard: are you planning to post a new spec document for the traffic steering plumber?
18:36:24 <igordcard> I will use that to interact with the traffic steering plumber I have just started so that I can exercise traffic steering in practice
18:36:41 <SumitNaiksatam> igordcard: can be very minimal but will help people to understand what approach you have taken
18:36:43 <igordcard> I will have some code up by the next meeting
18:36:51 <SumitNaiksatam> igordcard: sounds good
18:36:56 <igordcard> SumitNaiksatam: yes, I can do that
18:37:00 <SumitNaiksatam> igordcard: thanks
18:37:15 <SumitNaiksatam> igordcard: dont have to spend too much time, just high level to begin with
18:37:30 <igordcard> and I'm already following the general idea around ivar's plumbing spec
18:37:38 <igordcard> SumitNaiksatam: okay
18:37:41 <SumitNaiksatam> igordcard: okay cool!
18:38:02 <SumitNaiksatam> rkukura: anything you want to discuss on the nova driver?
18:38:57 <rkukura> finally got to where VMs get launched connected to the PTGs from the plumber
18:39:09 <SumitNaiksatam> rkukura: sweet!!
18:39:14 <rkukura> So mainly just cleanup work needed now to be able to review
18:39:29 <SumitNaiksatam> rkukura: okay
18:39:29 <igordcard> rkukura: nice!
18:39:49 <rkukura> This does depend on ivar-lazzaro’s patch https://review.openstack.org/#/c/166424/ so that the neutron ports are owned by the same tenant as the nova VMs.
18:40:36 <SumitNaiksatam> rkukura: okay
18:41:08 <SumitNaiksatam> #topic Open Discussion
18:41:23 <SumitNaiksatam> anything more to discuss?
18:41:27 <rkukura> It's also a bit inconvenient to have to configure this tenant by UUID rather than name, so I'm wondering if ivar's patch should convert tenant names to UUIDs.
18:42:15 <SumitNaiksatam> rkukura: okay, so the implementation would get the tenant ID from keystone?
18:42:33 <rkukura> I think it would need to
18:42:59 <rkukura> otherwise, with devstack you need to edit neutron.conf manually and restart the server once you get the UUID from keystone
18:43:05 <SumitNaiksatam> rkukura: is the tenant name updatable in keystone?
18:43:18 <SumitNaiksatam> rkukura: that makes sense
18:43:44 <rkukura> I know ivar-lazzaro has concerns about name duplication and change, but I’d think we could support specifying name or UUID
18:43:49 <SumitNaiksatam> rkukura: i meant to ask if the tenant name can be updated in keystone
18:44:29 <rkukura> If keystone reported a duplicate name, we could log an error and terminate
18:44:36 <SumitNaiksatam> rkukura: okay
18:46:16 <rkukura> Of course, we don’t want a denial of service attack where any user can create a new tenant in keystone stealing the admin or service name, so I’d hope keystone didn’t allow duplicates. Authenticating by name+password to keystone would also seem broken if duplicate names are allowed.
18:46:50 <SumitNaiksatam> rkukura: okay, i was wondering if the name is changed for the same tenant
18:47:04 <SumitNaiksatam> and if a different tenant is given that name
18:47:48 <rkukura> that could be a problem, but admins should know not to change the name of the ‘admin’ or ‘service’ users, or a new user specifically used for this.
18:48:04 <SumitNaiksatam> rkukura: :-)
18:48:43 <SumitNaiksatam> rkukura: from a usability perspective, there is no arguing that having the name is better
18:48:56 <SumitNaiksatam> rkukura: especially the devstack workflow that you point out
18:49:09 <rkukura> I don't see any real risk, especially if either name or UUID is accepted.
18:49:54 <SumitNaiksatam> rkukura: okay
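[Editor's note: a hypothetical sketch of the name-or-UUID resolution just discussed: accept either form in conf, look names up, and fail loudly on duplicates as rkukura suggests. The keystone project listing is faked as a list of (name, id) pairs; all names and UUIDs are illustrative.]

```python
# Hypothetical resolver: accept a tenant UUID directly, otherwise resolve
# the name against (faked) keystone data, erroring out on duplicates.
import uuid

def resolve_tenant(value, tenants):
    """Resolve a configured tenant (name or UUID) to a UUID string.

    tenants: list of (name, uuid_str) pairs standing in for keystone.
    """
    try:
        return str(uuid.UUID(value))  # already a UUID; normalize and accept
    except ValueError:
        pass
    matches = [tid for name, tid in tenants if name == value]
    if not matches:
        raise LookupError("no tenant named %r" % value)
    if len(matches) > 1:
        # the log-an-error-and-terminate case from the discussion
        raise LookupError("duplicate tenant name %r" % value)
    return matches[0]

TENANTS = [("service", "11111111-2222-3333-4444-555555555555")]
print(resolve_tenant("service", TENANTS))
```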
18:50:09 <SumitNaiksatam> anything else anyone wants to discuss today?
18:50:35 <SumitNaiksatam> else we get 10 mins back
18:50:42 <SumitNaiksatam> thanks everyone for your time today
18:50:58 <SumitNaiksatam> bye
18:51:29 <SumitNaiksatam> #endmeeting