14:00:06 #startmeeting neutron_drivers
14:00:07 Meeting started Fri Oct 27 14:00:06 2017 UTC and is due to finish in 60 minutes. The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:10 The meeting name has been set to 'neutron_drivers'
14:00:15 hi
14:00:24 hello
14:00:25 hi yamamoto
14:00:38 hi slaweq_
14:01:51 amotoki: ping
14:02:03 hi
14:02:08 I don't see ihrachys or armax around
14:02:16 let's give them a couple of minutes
14:02:46 amotoki, yamamoto: you going to Sydney?
14:02:54 slaweq_: you?
14:02:54 no
14:02:56 yes, i will be there
14:02:59 mlavalle: no
14:04:56 it seems most folks focus on PTGs now
14:05:29 yeah, if you have to choose between the 2 and you are a dev, the PTG makes more sense
14:05:42 ok, let's get going
14:06:17 RFEs to review today are: https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged&field.tag=rfe
14:06:43 First one is https://bugs.launchpad.net/neutron/+bug/1690425
14:06:44 Launchpad bug 1690425 in neutron "[RFE] neutron cells aware" [Wishlist,Triaged]
14:07:42 For this one I did a little bit of research in terms of the Tricircle option
14:08:28 It hasn't been deployed in production yet. But it has a non-open-source predecessor it is based on, Cascading
14:08:33 me neither, but I can say nova and neutron are the two main projects which heavily use the MQ and DB
14:09:16 which is in production in 5 public clouds, including Huawei's
14:09:41 I think we can talk about Tricircle as a separate topic as it provides a different focus. AFAIK, it is an API proxy in front of multiple openstack clouds, right?
14:10:01 Joe Huang of Huawei is going to get me more data in terms of the scale achieved in those deployments
14:11:00 yes, although there is an effort to align it with Nova Cells. I think one of the links I posted refers to that
14:11:25 https://docs.openstack.org/tricircle/latest/install/installation-guide.html#work-with-nova-cell-v2-experiment
14:11:32 it is experimental as well
14:13:34 while it sounds like an interesting thing, I wonder what a good start would be.
14:14:10 amotoki: do you have a suggestion for a good start in mind?
14:14:19 IMHO we can start cells support in one OpenStack cloud. it means we can start from what nova does.
14:14:45 yeah, I agree
14:15:13 IIUC, nova faced some difficulties by spreading their DB out into multiple parts
14:15:51 yes, that is the reason I also posted a pointer to a Cells V2 document
14:16:08 https://docs.openstack.org/nova/pike/user/cellsv2_layout.html
14:16:36 here they have the concept of a superconductor for the entire deployment and a conductor for each cell
14:16:52 and also an API DB (for the whole deployment) and per-cell DBs
14:17:26 There is a nice graphic in that doc that illustrates the entire thing
14:18:29 perhaps we can start to explore cells support by studying how we can support the same model as nova.
14:18:53 for example, how do the parent MQ and DB affect neutron?
14:19:01 meaning an API DB and cell DBs?
14:19:07 exactly
14:19:44 armax made the point last time we discussed this that the DB is not a problem in our case
14:20:06 that our bottleneck is the messaging bus
14:20:27 good point
14:20:30 that is why https://www.youtube.com/watch?v=R0fwHr8XC1I
14:20:38 is an alternative
14:21:29 So I guess to move ahead the first step would be to decide whether we agree that in our case the DB is not the problem
14:22:36 if the answer to that is yes, then the messaging alternative pointed out ^^^ might be a good approach
14:22:42 i think a balance of the load pressures from nova and neutron is one of the key points.
14:23:07 would it be better to ask operators about their experiences with performance?
14:23:43 yeah.
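To make the Cells V2 layout discussed above a little more concrete: the API-level database holds a mapping from each cell to that cell's own database and message-queue endpoints, and the (super)conductor uses the mapping to route work. The toy Python sketch below illustrates only that lookup; every name and URL is invented for illustration, and none of this is Nova code.

```python
# Toy model of the Cells V2 layout: the API DB stores, per cell, the
# endpoints of that cell's own database and message queue. A conductor
# (or the superconductor) looks the mapping up to know where to send work.
# All names and URLs below are placeholders, not real Nova data.

CELL_MAPPINGS = {
    "cell1": {"db": "mysql://cell1-db/nova", "mq": "rabbit://cell1-mq/"},
    "cell2": {"db": "mysql://cell2-db/nova", "mq": "rabbit://cell2-mq/"},
}

def targets_for(cell_name):
    """Return the (database, message-queue) endpoints for one cell."""
    mapping = CELL_MAPPINGS[cell_name]
    return mapping["db"], mapping["mq"]

db, mq = targets_for("cell1")
print(db)  # mysql://cell1-db/nova
print(mq)  # rabbit://cell1-mq/
```

The open question raised in the discussion is whether neutron would mirror this split (one API DB plus per-cell DBs and MQs) or, since the DB may not be neutron's bottleneck, split only the messaging side.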
Actually that is a conversation we might want to try to have in Sydney
14:24:22 the number of compute nodes is an important factor
14:25:05 personally i have no good data on this. our cloud with 500 nodes (one region) works well so far.
14:25:34 there is also a Cells V2 session scheduled in Sydney: http://forumtopics.openstack.org/cfp/details/60
14:26:39 ok, I will update the RFE with two next steps if you all agree: 1) seek feedback from operators
14:27:00 2) catch up with the Nova team in Sydney about the status of Cells V2
14:27:11 agree
14:27:13 makes sense?
14:27:28 yamamoto: what do you think?
14:27:30 yes
14:27:45 sounds reasonable
14:27:54 cool :-)
14:28:42 Next one is https://bugs.launchpad.net/neutron/+bug/1692490
14:28:43 Launchpad bug 1692490 in neutron "[RFE] Ability to migrate a non-Segment subnet to a Segment" [Wishlist,Triaged]
14:29:48 The way I read this originally made me think it was not possible
14:30:34 moving from an arbitrary number of segments in a network to associating segments to subnets
14:31:10 i am not sure this is a common case.
14:32:48 but if all they want to do is to move from one segment / subnet to a situation where the segment is associated to the subnet, it is a pretty easy thing to do, isn't it?
14:33:00 it is an update to the API
14:33:27 is that your reading, yamamoto?
14:33:32 yes
14:34:09 perhaps we first need to clarify what we assumed when network segment support was implemented.
14:34:55 you mean clarify in the RFE?
14:35:07 yeah
14:35:20 yeah, I was going to propose that
14:35:24 i wonder whether we can avoid the situation by the order of operations
14:35:41 or whether it happens easily
14:36:22 i haven't checked the details of this RFE yet. I need to think more
14:36:47 we can ask the submitter to clarify exactly what he wants to do and go from there
14:36:50 makes sense?
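The "update to the API" mentioned above would presumably boil down to a subnet update that sets segment_id. A minimal sketch of how such a request could be built, assuming the Networking v2.0 subnet-update call were extended to accept segment_id; whether the field is writable on update is exactly the open question of the RFE, and the IDs here are placeholders:

```python
import json

def build_segment_update(subnet_id, segment_id):
    """Build the (path, body) pair for a hypothetical subnet update that
    associates an existing subnet with a segment."""
    path = f"/v2.0/subnets/{subnet_id}"
    body = {"subnet": {"segment_id": segment_id}}
    return path, json.dumps(body)

path, body = build_segment_update("SUBNET_ID", "SEGMENT_ID")
print(path)  # /v2.0/subnets/SUBNET_ID
print(body)  # {"subnet": {"segment_id": "SEGMENT_ID"}}
```

The discussion's caveat applies: this only makes sense for the simple one-segment/one-subnet case, and the submitter's actual scenario still needs clarification.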
14:36:54 mlavalle: +1
14:36:56 yes
14:37:13 ok, I will update the RFE with a request for clarification
14:37:22 clarifying the operation scenario will really help us
14:37:23 feel free to add questions of your own
14:37:46 amotoki: yeah, go ahead and add a question there if you want
14:37:53 sure
14:38:23 cool
14:39:45 Next one is https://bugs.launchpad.net/neutron/+bug/1705084
14:39:47 Launchpad bug 1705084 in neutron "[RFE] Allow automatic sub-port configuration on the per-trunk basis in the trunk API" [Wishlist,Triaged]
14:42:59 regarding this, I tend to agree with yamamoto that this is not a job for neutron in general.
14:43:18 yes I also lean towards that
14:43:43 do we support similar things via the metadata API? anyway this is also what the nova metadata API returns
14:44:51 any thoughts yamamoto? should we decline this one?
14:45:56 i think so. it's basically metadata which neutron itself doesn't use.
14:46:10 amotoki: agree?
14:46:37 i agree to decline it
14:46:43 me too
14:47:01 if we need something like this, it should be designed to be more generic.
14:47:33 there are three ways to provide configuration to a guest: config drive, the metadata API and information retrieved via the API
14:48:03 if our API does not provide enough information, we need to improve it, but I don't think this is the case.
14:49:33 ok, moving on
14:49:45 Next one is https://bugs.launchpad.net/neutron/+bug/1705467
14:49:46 Launchpad bug 1705467 in neutron "[RFE] project ID is not verified when creating neutron resources" [Wishlist,Triaged]
14:51:03 yamamoto: do you know if that nova spec ever got implemented?
14:51:26 it has been implemented.
14:51:43 partially or fully, i don't know
14:52:34 do you think we should follow suit?
14:53:56 i'm not sure. it depends on how often it needs to be executed, i.e. whether the load increase on keystone is acceptable or not
14:54:17 that's a good point
14:54:30 this is really tricky.
I tend to agree to follow it in general, as admin operations would be predictable
14:54:47 in the case of nova those operations are not frequently used, i guess
14:55:00 yamamoto: yes
14:55:41 on the other hand, there are some cases where we create resources in GET operations. this is a corner case, like SG
14:55:46 if the load on Keystone is our concern, we could make it configurable
14:56:16 if it is limited to the global admin, it is not a problem.
14:56:32 if it is allowed for domain admins, it might be a problem
14:56:38 as they are not cloud admins
14:56:54 we need this only when the resource project-id is not the same as the requester's, right? i wonder how often that's the case.
14:57:07 right
14:57:55 one weird situation is a "typo"
14:58:17 assume a user issues a CLI command like 'neutron auto-allocated-topology-show bbbbbbb'
14:58:31 if it causes a problem, we need to avoid it
14:58:52 there is something OSC/the CLI can do on the CLI side though
14:59:51 i would like to think more about the cases where real problems happen.
15:00:02 more input and clarification would be helpful
15:00:21 cool, would you post some comments there?
15:00:30 and we'll pick up from here next week
15:00:34 as for the reported case, I can guard against this on the OSC side
15:00:46 please comment that in the RFE
15:00:54 sure
15:01:08 yamamoto: you agree?
15:01:20 yes
15:01:30 ok, thanks for attending
15:01:38 thanks
15:01:42 will continue next Thursday
15:01:44 o/
15:01:48 #endmeeting
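The project-id verification discussed in the last RFE could be limited to the uncommon path, as noted at 14:56:54: consult Keystone only when the resource's project-id differs from the requester's own. A minimal sketch under that assumption; all names are illustrative and `exists` stands in for a real Keystone lookup (e.g. via keystoneauth), not actual Neutron code:

```python
# Sketch of the project-id check: skip the Keystone round trip in the
# common case (resource created in the requester's own project) and only
# verify existence for admin-style cross-project requests.

def validate_project_id(requester_project_id, resource_project_id, exists):
    """Return True if the target project-id should be accepted."""
    if resource_project_id == requester_project_id:
        return True  # common case: no extra load on Keystone
    return exists(resource_project_id)

# Stubbed "Keystone" with one known project, mimicking the typo scenario
# from the discussion ('neutron auto-allocated-topology-show bbbbbbb'):
known_projects = {"a1b2c3"}
lookup = known_projects.__contains__

print(validate_project_id("a1b2c3", "a1b2c3", lookup))   # True (same project)
print(validate_project_id("admin0", "a1b2c3", lookup))   # True (project exists)
print(validate_project_id("admin0", "bbbbbbb", lookup))  # False (typo caught)
```

Making the check configurable, as suggested at 14:55:46, would then just mean gating the `exists` call behind a config option, keeping the added Keystone load opt-in.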