18:02:14 <SumitNaiksatam> #startmeeting networking_policy
18:02:15 <openstack> Meeting started Thu Aug 20 18:02:14 2015 UTC and is due to finish in 60 minutes.  The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:19 <openstack> The meeting name has been set to 'networking_policy'
18:02:36 <SumitNaiksatam> #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Aug_20th_2015
18:03:12 <SumitNaiksatam> we have released Kilo now!
18:03:36 <SumitNaiksatam> thanks to the team for the fantastic effort
18:03:56 <SumitNaiksatam> i think we accomplished quite a lot
18:04:08 <ivar-la__> SumitNaiksatam: ++
18:04:12 <SumitNaiksatam> details of which will be posted in the release notes page:
18:04:21 <Yi> great
18:04:22 <SumitNaiksatam> #link https://wiki.openstack.org/wiki/GroupBasedPolicy/ReleaseNotes/Kilo
18:04:54 <SumitNaiksatam> ransari: hi
18:05:01 <ransari> hi
18:05:07 <SumitNaiksatam> ransari: just discussing that we released kilo
18:05:20 <ransari> Good milestone!!
18:05:27 <SumitNaiksatam> ransari: thanks
18:05:33 <SumitNaiksatam> tarballs have been posted in launchpad
18:06:11 <SumitNaiksatam> as before, we will continue to fix bugs as and when we find them, and backport the fixes
18:06:18 <ransari> ok
18:06:55 <SumitNaiksatam> any thoughts or questions about the release (or concerns if you tested it)?
18:07:31 <SumitNaiksatam> ok
18:07:33 <Yi> Any plan on distribution package? :-)
18:07:50 <SumitNaiksatam> Yi: okay, good segue
18:08:03 <SumitNaiksatam> #topic Packaging update
18:08:07 <rkukura> I plan to update the Fedora/RDO packages to the kilo and stable/juno releases, but am not sure when
18:08:42 <SumitNaiksatam> rkukura: okay :-)
18:09:42 <SumitNaiksatam> rkukura: any other update on the packaging/distro front?
18:09:55 <rkukura> no
18:10:16 <SumitNaiksatam> rkukura: okay thanks
18:10:51 <SumitNaiksatam> #topic Integration testing
18:10:56 <SumitNaiksatam> I dont have much of an update on the integration tests
18:11:14 <SumitNaiksatam> except that the rally job needs to be tweaked a little more
18:11:32 <SumitNaiksatam> currently it votes +1 even if we have less than 100% success on a test
18:12:02 <SumitNaiksatam> ideally all concurrent tests in this suite should return 100% success
18:12:10 <SumitNaiksatam> however i have seen at times that they dont
18:12:26 <SumitNaiksatam> so we need to decide what is acceptable and what is not, and where the issue is
18:12:55 <SumitNaiksatam> currently i am running 10 concurrent operations for every test
18:12:59 <ivar-la__> SumitNaiksatam: +1, I think this is of vital importance
18:13:08 <ivar-la__> SumitNaiksatam: how many servers/workers?
18:13:11 <SumitNaiksatam> ivar-la__: yeah
18:13:39 <SumitNaiksatam> ivar-la__: good question, i believe i am using the default
18:14:00 <ivar-la__> SumitNaiksatam: so one server with 0 workers I think
18:14:16 <SumitNaiksatam> ivar-la__: definitely one server
18:14:18 <ivar-la__> that is important, one server with 0 workers is basically serial IIUC
18:14:37 <SumitNaiksatam> ivar-la__: i agree that if there is only one worker it should be serial
18:14:41 <ivar-la__> you either need more than 1 server or more than one worker
18:15:01 <SumitNaiksatam> ivar-la__: yes, we cannot do more than one server in that gate environment
18:15:13 <ivar-la__> the difference is that external locking (oslo_concurrency) won't work across servers, but it will across workers
18:15:48 <ivar-la__> SumitNaiksatam: seems legit
18:15:59 <SumitNaiksatam> ivar-la__: right
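[Editor's note: the external-locking semantics ivar-la__ describes — locks that coordinate worker processes on one server, but not across servers — can be illustrated with a stdlib-only sketch. `FileLock` below is a hypothetical analogue of what oslo_concurrency's `lockutils` external locks do, not the actual oslo API:]

```python
import fcntl
import os
import tempfile


class FileLock:
    """Host-local external lock, sketched with a flock()'d file.

    Because the lock lives on the local filesystem, it serializes
    worker processes on the same server, but separate servers never
    see each other's locks -- the limitation discussed above.
    """

    def __init__(self, name, lock_dir=None):
        lock_dir = lock_dir or tempfile.gettempdir()
        self.path = os.path.join(lock_dir, name + '.lock')
        self._fd = None

    def __enter__(self):
        self._fd = open(self.path, 'w')
        fcntl.flock(self._fd, fcntl.LOCK_EX)  # blocks until exclusive
        return self

    def __exit__(self, *exc):
        fcntl.flock(self._fd, fcntl.LOCK_UN)
        self._fd.close()
```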
18:16:29 <SumitNaiksatam> assuming it's “0” workers (which is one thread), we are still seeing issues sometimes
18:16:57 <SumitNaiksatam> we will need to investigate the result of those tests more carefully
18:17:11 <SumitNaiksatam> #topic Bugs
18:17:25 <ivar-la__> SumitNaiksatam: default number of API workers seem to be the number of CPUs
18:17:30 <ivar-la__> SumitNaiksatam: as for Kilo/Liberty
18:17:40 <SumitNaiksatam> ivar-la__: ah okay
18:18:09 <ivar-la__> zero in Juno
18:18:10 <SumitNaiksatam> so we had some pending reviews sitting in the queue for a while now
18:18:43 <SumitNaiksatam> ivar-la__: okay, need to check what that particular devstack is doing since i did not override the conf
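[Editor's note: the defaults ivar-la__ cites — api_workers falling back to the CPU count in Kilo/Liberty versus 0 in Juno — amount to something like the sketch below. `get_default_api_workers` is a hypothetical helper for illustration, not Neutron's actual code:]

```python
import multiprocessing


def get_default_api_workers(configured=None, juno=False):
    """Approximate the api_workers default discussed above.

    Juno defaulted to 0 (the parent process serves requests itself,
    effectively serializing them); Kilo/Liberty default to the number
    of CPUs when the option is left unset in the conf.
    """
    if configured is not None:
        return configured
    return 0 if juno else multiprocessing.cpu_count()
```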
18:18:54 <SumitNaiksatam> there are a few pending reviews in the review queue
18:19:12 <SumitNaiksatam> and we should decide how we want to make progress
18:19:48 <SumitNaiksatam> #link https://review.openstack.org/166424 (Admin or Provider tenant owns service chain instances)
18:20:09 <SumitNaiksatam> ransari: i believe you mentioned that you prefer the default to be admin?
18:20:23 <ransari> One clarification I have w.r.t. some of the patches under review. Some introduce a DB migration. If this is backported to Juno, on production deployments, will it require a re-install of the rpm, or will an rpm upgrade work?
18:20:35 <ransari> My mistake. prefer default to be provider
18:20:49 <SumitNaiksatam> ransari: ah good, so the current patch will work for you?
18:20:58 <ransari> yes, I believe so.
18:21:09 <ivar-la__> the provider is already the default
18:21:21 <ransari> the DB migration reference was this: https://review.openstack.org/#/c/170235/
18:21:37 <SumitNaiksatam> ransari: nice, not sure if magesh is around, but it will help to get him to review this and make progress
18:21:41 <ivar-la__> I need to do two things there... First finish the rebase (was a pretty ugly one and it's unfinished)
18:21:43 <SumitNaiksatam> ransari: very good point to bring up
18:21:51 <SumitNaiksatam> ivar-la__: yes sure
18:21:54 <rkukura> ivar-la__: Any update on allowing the tenant to be specified by name?
18:22:06 <ivar-la__> and then address rkukura's comment about using the admin name... Although it doesn't look very straightforward
18:22:24 <SumitNaiksatam> rkukura: thanks for bringing that up
18:22:32 <ivar-la__> We need an extra user (probably neutron user on Keystone) that can ask keystone info about this particular tenant
18:22:40 <rkukura> In my nova testing, I’ve been using the UUID of the “service” tenant.
18:23:02 <ransari> sumit: we will review  https://review.openstack.org/166424 and confirm
18:23:05 <ivar-la__> But by default, I'm not sure the Neutron service tenant can retrieve information about other tenants
18:23:20 <SumitNaiksatam> ransari: okay, thanks
18:24:14 <ransari> RH default I believe does have the admin user added to the services tenant
18:24:26 <rkukura> ivar-la__: That’s the tenant I’ve been using to own the nova VMs, etc.. I didn’t mean to imply it could query keystone, but maybe it can.
18:25:31 <SumitNaiksatam> rkukura: what user are you using?
18:26:00 <rkukura> SumitNaiksatam: “service”
18:26:37 <ivar-la__> rkukura: so the Nova VMs aren't owned by the service_admin?
18:26:39 <ransari> rkukura: Does the "service" user have the admin role?
18:26:46 <SumitNaiksatam> rkukura: okay
18:26:55 <ivar-la__> rkukura: that makes sense... I think we don't have any way to authenticate against keystone with that
18:27:12 <ivar-la__> rkukura: would it work if we used service_tenant_name and service_tenant_password?
18:28:01 <rkukura> ivar-la__: I think that would let us authenticate with keystone
18:28:18 <ivar-la__> yes, so service_tenant could own the VMs and do the "external calls"
18:28:29 <ivar-la__> and still we should be able to get the UUID for internal ones
18:29:46 <mageshgv> ivar-la__: did not quite understand the internal vs external calls, do you mean neutron vs other component calls ?
18:30:33 <ivar-la__> mageshgv: yes... In Neutron we just need a context with the proper tenant_id, not a full fledged Keystone token
18:30:41 <ivar-la__> mageshgv: at least for now
18:30:46 <mageshgv> ivar-la__: okay
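[Editor's note: the approach being discussed — authenticating as the service tenant with a tenant name and password — boils down to a Keystone v2.0 token request. This sketch only builds the request body (no network call); the credential values are illustrative, and the option names `service_tenant_name`/`service_tenant_password` come from the discussion above:]

```python
def build_v2_token_request(tenant_name, username, password):
    """Build a Keystone v2.0 token-request body.

    The service tenant credentials (e.g. the service_tenant_name and
    password from the Neutron config) would be substituted here, and
    the dict POSTed to Keystone's /v2.0/tokens endpoint to obtain a
    token for "external" calls made on behalf of the service tenant.
    """
    return {
        'auth': {
            'tenantName': tenant_name,
            'passwordCredentials': {
                'username': username,
                'password': password,
            },
        },
    }
```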
18:32:30 <SumitNaiksatam> mageshgv: i just noticed you posted the following #link https://bugs.launchpad.net/bugs/1487156
18:32:30 <openstack> Launchpad bug 1487156 in Group Based Policy "Fetching instance metadata fails for Policy Target VMs" [Undecided,New]
18:32:47 <SumitNaiksatam> i believe this would be a high priority to fix
18:33:11 <mageshgv> SumitNaiksatam: yes, this is a high priority one now
18:33:19 <SumitNaiksatam> i think this would have crept in when we started applying the egress rules
18:33:45 <mageshgv> SumitNaiksatam: right
18:34:05 <SumitNaiksatam> mageshgv: i am wondering what else we need to open up
18:34:51 <mageshgv> SumitNaiksatam: I think we should take a look at the rules added by the L3 agent/Firewall. I noticed that this IP is opened up there
18:35:42 <SumitNaiksatam> mageshgv: which IP?
18:36:21 <mageshgv> SumitNaiksatam: The metadata server IP. I meant the ports/IPs opened up by openstack fwaas
18:36:33 <mageshgv> should give us an idea
18:36:54 <ransari> mageshv: That is a good suggestion
18:37:16 <SumitNaiksatam> mageshgv: hmmm, i dont recall opening up anything by default in fwaas
18:37:37 <SumitNaiksatam> mageshgv: since its applied at the perimeter
18:37:59 <SumitNaiksatam> anyway, we can investigate
18:38:17 <mageshgv> SumitNaiksatam: okay, I will take a look once more
18:39:09 <SumitNaiksatam> ivar-la__: so the metadata IP can be added to the default SG?
18:39:34 <ivar-la__> sure
18:39:50 <SumitNaiksatam> ivar-la__: okay
18:39:54 <ivar-la__> will be needed for outgoing traffic
18:40:27 <SumitNaiksatam> ivar-la__: right
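[Editor's note: the fix agreed on here — allowing outgoing traffic to the metadata service in the default security group — would look roughly like the Neutron security-group-rule payload below. 169.254.169.254 is the well-known metadata address; the helper name is hypothetical and port 80 assumes the default metadata proxy port:]

```python
# Well-known link-local address of the Nova metadata service.
METADATA_IP = '169.254.169.254'


def metadata_egress_rule(security_group_id):
    """Sketch of a security-group rule opening egress HTTP to the
    metadata service, so Policy Target VMs can fetch instance metadata.
    """
    return {
        'security_group_rule': {
            'security_group_id': security_group_id,
            'direction': 'egress',
            'ethertype': 'IPv4',
            'protocol': 'tcp',
            'port_range_min': 80,
            'port_range_max': 80,
            'remote_ip_prefix': METADATA_IP + '/32',
        },
    }
```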
18:40:44 <SumitNaiksatam> #link https://bugs.launchpad.net/group-based-policy/+bug/1484425
18:40:44 <openstack> Launchpad bug 1484425 in Group Based Policy "GBP Allows the same PTG to be the provider and consumer of a Contract" [Undecided,Opinion]
18:40:52 <SumitNaiksatam> mageshgv: you have a patch for this as well
18:41:00 <SumitNaiksatam> #link https://review.openstack.org/#/c/212676/
18:41:12 <SumitNaiksatam> however this is currently working as designed
18:41:40 <SumitNaiksatam> i meant from an API perspective
18:42:07 <rkukura> I’ve wondered about the design on this
18:42:33 <rkukura> Are PRSes supposed to be needed for PTs in the same PTG to talk to each other?
18:42:44 <SumitNaiksatam> rkukura: no
18:42:47 <mageshgv> SumitNaiksatam: yes, we do not restrict it at the api, but what does it mean when we say a group both provides and consumes the same contract
18:42:57 <SumitNaiksatam> mageshgv: yes
18:43:14 <SumitNaiksatam> mageshgv: a particular policy driver can choose to not support this
18:44:04 <mageshgv> SumitNaiksatam: I am trying to understand what would be the use case of such a scenario
18:45:53 <SumitNaiksatam> mageshgv: there are certain contracts like infrastructure services’ contracts
18:46:26 <SumitNaiksatam> mageshgv: which the providing PTG also needs, but might not necessarily be getting serviced by its own members
18:47:18 <SumitNaiksatam> mageshgv: was this creating an issue in the resource mapping driver?
18:47:37 <rkukura> SumitNaiksatam: This seems to contradict your answer to my question above.
18:48:23 <mageshgv> SumitNaiksatam: This needs more thought since we do not have a selector which can choose from a list of providers
18:48:27 <SumitNaiksatam> rkukura: a PRS is not needed for communicating within the same PTG
18:48:49 <SumitNaiksatam> mageshgv: that is correct, we had the scope notion but havent implemented it
18:48:56 <mageshgv> SumitNaiksatam: yes, this was causing issues when we have a chain
18:49:23 <SumitNaiksatam> mageshgv: so i agree with you that this can result in ambiguity with our current implementation
18:49:53 <SumitNaiksatam> mageshgv: but i am suggesting that we do the validation and raise the error in the driver, as opposed to in the API itself
18:50:14 <rkukura> SumitNaiksatam: So is the PRS needed only to insert a service chain between PTs in a PTG, or am I really confused?
18:51:04 <SumitNaiksatam> rkukura: no no, by "services" I did not mean the L4-7 services and their chains
18:51:12 <ivar-la__> providing/consuming at the same time is a peer relationship
18:51:28 <ivar-la__> so how do you establish the PRSs between two peer groups?
18:52:32 <SumitNaiksatam> we have 8 mins, so perhaps this is a slightly longer discussion requiring more concrete examples
18:52:39 <SumitNaiksatam> i will send out an email on this
18:52:41 <mageshgv> SumitNaiksatam, rkukura, ivar-la__: I think we should first clear this ambiguity in the model
18:53:06 <rkukura> SumitNaiksatam, mageshgv: I’d appreciate that
18:53:32 <SumitNaiksatam> #link https://bugs.launchpad.net/group-based-policy/+bug/1460831
18:53:32 <openstack> Launchpad bug 1460831 in Group Based Policy "API for group update is not clear" [High,Confirmed] - Assigned to Sumit Naiksatam (snaiksat)
18:53:37 <ivar-la__> mageshgv: what ambiguity?
18:53:45 <SumitNaiksatam> i was trying to address this in a backward compatible way
18:54:29 <ivar-la__> mageshgv: I don't think that is ambiguous... I am a provider of a service, but at the same time I need to consume it from someone else
18:55:01 <SumitNaiksatam> ivar-la__: correct
18:55:18 <SumitNaiksatam> so in #link https://review.openstack.org/#/c/209409/ i was trying to allow both, the old format, and the new format
18:55:20 <mageshgv> ivar-la__: right, but we do not have a way of specifying from which provider we will be consuming
18:55:38 <SumitNaiksatam> however i realized that this creates an issue with the dict that we return
18:55:54 <ivar-la__> mageshgv: that's the point of having contracts, the "destination" is not specified there
18:56:02 <ivar-la__> mageshgv: otherwise they would be firewalls
18:56:10 <SumitNaiksatam> should we return a dict with provided/consumed PRS in the new format or the old format
18:56:26 <SumitNaiksatam> since we can specify only one
18:56:36 <SumitNaiksatam> hence i did not push for this patch to get into kilo
18:56:54 <SumitNaiksatam> would like to hear your suggestions on this (since we want to make this a non-disruptive transition in the API)
18:57:16 <SumitNaiksatam> similar concerns might apply to other API changes that we might want to make
18:57:35 <SumitNaiksatam> i guess if we introduce microversioning then this is not a concern
18:57:45 <mageshgv> ivar-la__: for me, it looks really confusing when I say a particular group provides as well as consumes the same contract say X
18:57:54 <ivar-la__> SumitNaiksatam: can you make an example of the new format?
18:58:13 <SumitNaiksatam> ivar-la__: its posted in the review commit message
18:58:50 <SumitNaiksatam> ok we have a couple of minutes
18:59:10 <SumitNaiksatam> any other high priority reviews, that you want to request the attention of the rest of the team?
18:59:41 <ivar-la__> SumitNaiksatam: what about UUID:scope, but with scope implicitly set to None or '' when not provided?
18:59:56 <ivar-la__> so you can POST {'policy_rule_set': UUID1}
19:00:07 <ivar-la__> and will become {'policy_rule_set': UUID1:''}
19:00:11 <SumitNaiksatam> ivar-la__: thats an option, but in the future we might want to add “status” as well to that relationship
19:00:17 <ivar-la__> that's ugly, but compatible I think
19:00:28 <ivar-la__> SumitNaiksatam: oh right
19:00:35 <SumitNaiksatam> ivar-la__: yes, i did think of what you say, that is definitely compatible
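[Editor's note: ivar-la__'s backward-compatible format — a bare UUID implicitly carrying an empty scope — could be parsed as below. `parse_prs_ref` is a hypothetical helper illustrating the proposal, not code from the patch under review:]

```python
def parse_prs_ref(ref):
    """Split a policy-rule-set reference of the form 'UUID' or 'UUID:scope'.

    A bare UUID is treated as having an empty scope, so old-format
    values like {'policy_rule_set': UUID1} keep working while the new
    format {'policy_rule_set': 'UUID1:scope'} is also accepted.
    """
    uuid, _sep, scope = ref.partition(':')
    return uuid, scope
```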
19:00:55 <SumitNaiksatam> #topic Open Discussion
19:01:13 <SumitNaiksatam> ransari: sorry we did not get time to address your upgrade point
19:01:24 <SumitNaiksatam> lets go to #openstack-gbp for that
19:01:57 <SumitNaiksatam> one suggestion - i think a bunch of folks are going to be on vacation next week, so we wont meet next week
19:02:20 <SumitNaiksatam> okay i think we are 2 mins over
19:02:27 <SumitNaiksatam> thanks all for joining today
19:02:28 <ivar-la__> mageshgv: let's say you have a ICMP PRS, and you want all your groups to be able to ping each other... who do you choose as the provider?
19:02:39 <SumitNaiksatam> bye!
19:02:42 <ivar-la__> bye!
19:02:45 <mageshgv> bye
19:02:45 <ransari> bye
19:02:46 <SumitNaiksatam> ivar-la__: yes simple example
19:02:48 <rkukura> bye
19:02:53 <SumitNaiksatam> #endmeeting