21:00:47 <danwent> #startmeeting
21:00:48 <openstack> Meeting started Mon Jul 23 21:00:47 2012 UTC. The chair is danwent. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:05 <danwent> #link agenda: http://wiki.openstack.org/Network/Meetings
21:01:07 <garyk> hi
21:01:39 <salv-orlando> Hi all!
21:01:44 <danwent> ok, two weeks from when major branches should be proposed for F-3
21:01:50 <danwent> three weeks total
21:01:52 <rkukura> hi everyone
21:02:06 <danwent> starting this week, i'll bug people who haven't posted full specs (including myself!) for major features
21:02:28 <PotHix> o/
21:02:31 <amotoki> Hi!
21:02:36 <gongys> ok. be ready to be bugged.
21:02:53 <danwent> here are the working items: https://launchpad.net/quantum/+milestone/folsom-3
21:03:13 <danwent> the biggest roadblock to clear out is agreement on the rpc/notifications stuff for non-polling
21:03:23 <danwent> which we'll talk about later in the meeting.
21:03:33 <danwent> progress on v2 plugins seems to be moving forward well, which is great.
21:03:41 <danwent> and we have people working on quantum + horizon, which is awesome.
21:03:51 <edgarmagana> hi all
21:04:07 <danwent> in terms of key reviews....
21:04:10 <danwent> rkukura: https://review.openstack.org/#/c/9069/6
21:04:13 <danwent> provider networks
21:04:27 <danwent> looks like your new plugin stuff just got merged, congrats.
21:04:29 <rkukura> phase one should hopefully be ready for final review later today
21:04:34 <danwent> yup, looks close
21:04:49 <rkukura> I would like feedback on my attribute authorization proposal
21:05:06 <danwent> rkukura: yes, saw the email. haven't had a chance to read it yet though.
21:05:11 <rkukura> If it's acceptable, I'll include that in this patch set
21:05:15 <zhuadl> yes, CRD of network/subnet/ports in the sys panel works now.
21:05:17 <salv-orlando> I think the provider networks were already in good shape; rebasing after the changes in the ext framework should make its approval a piece of cake
21:05:39 <danwent> zhuadl: great. i'm super excited about the progress on horizon
21:05:55 <danwent> zhuadl: do you want to send a link out to the ML once you think it's ready for testing?
21:06:03 <zhuadl> sure
21:06:11 <danwent> zhuadl: by sys panel, do you mean for admin users but not regular users?
21:06:33 <zhuadl> yes, the user panel is in progress.
21:06:51 <danwent> zhuadl: ok, i will try to test the sys panel, i think i was testing the user panel before
21:06:58 <danwent> I also wanted to call out the reviews for gongys's notifications stuff and garyk's rpc plugin work.
21:07:27 <danwent> reviews are up for that, and I'd really like to wrap up any design discussion at the meeting today, as I don't want to slow you two down on making progress.
21:07:37 <garyk> danwent: notifications
21:08:01 <gongys> regarding notifications, we have to think about where to put our configuration items.
21:08:04 <garyk> danwent, gongys: 2 issues here. 1. configuration files and what to notify
21:08:22 <danwent> ok, going a bit out of agenda order, but let's roll with it.
21:08:23 <danwent> :)
21:08:36 <garyk> roll with the punches - dope on a rope :)
21:09:02 <danwent> Moving on to Discussion Topics, we'll talk about the notifications / rpc
21:09:04 <garyk> configuration files - do we want to merge and have 1 file, or stick with the current model?
21:09:42 <gongys> I have no clear idea yet.
21:09:43 <danwent> garyk: it seemed strange that someone would have to configure the same rpc/message-queue stuff in multiple files, but I don't really have much experience with it, so I'd defer to others.
21:10:14 <danwent> i'd like to identify all key blockers in this meeting though, as I feel like we're a bit stuck on a lot of items until we do.
21:10:22 <danwent> garyk: what is your opinion on this?
21:10:30 <garyk> danwent: not sure if this is in multiple files. it is either in quantum.conf or xxx-plugin.ini.
21:11:05 <garyk> danwent: i think that we need Sumit for this one. he wanted to keep these separate for the cisco plugins
21:11:16 <danwent> but if we have multiple agents (e.g., dhcp, l3, plugin), wouldn't they all need those values?
21:11:18 <SumitNaiksatam> i am here
21:11:47 <danwent> garyk: sorry, who is "he"?
21:11:51 <danwent> SumitNaiksatam: ?
21:12:09 <garyk> danwent: yes
21:12:20 <SumitNaiksatam> we prefer that the plugin-specific conf should be in a different conf file
21:12:35 <garyk> danwent: i guess you are right. with multiple plugins it does seem logical.
21:12:46 <danwent> garyk: are you talking generally about whether plugin config should be collapsed, or specifically about rpc/message-queue config? I was just talking about the latter.
21:13:16 <garyk> danwent: the latter
21:13:46 <nati_ueno> I'm implementing multiple plugins; some configuration looks conflicting because some plugins are using the same configuration key
21:13:46 <garyk> danwent: it means that agents will need to load the quantum.conf file. i am ok with this. it will make life a bit simpler
21:13:51 <PotHix> +1 for different configurations per plugin
21:14:49 <danwent> garyk: that is my feeling as well, but i'm happy to defer to you on this.
21:15:04 <danwent> the most important thing is that we make a decision and move forward
21:15:53 <garyk> danwent: i am good with the agent also reading quantum.conf. any objections?
21:15:55 <salv-orlando> I am probably naive, but I reckon that if plugins can be developed independently then they should have separate namespaces for configuration
21:16:19 <danwent> garyk: i'm happy with that.
21:16:22 <nati_ueno> +1 for quantum.conf
21:16:30 <markmcclain> +1 for different configs
21:16:33 <salv-orlando> but they should all be able to read quantum.conf as well
21:16:35 <gongys> But the notification and rpc will have many items, about 20.
21:16:49 <gongys> we would have to duplicate them in all of the plugin files.
21:17:10 <danwent> gongys: yeah, that was my concern as well.
21:17:11 <nati_ueno> salv-orlando: reading quantum.conf is simple
21:17:15 <garyk> gongys: if the agent reads quantum.conf + its own file then we are good. all common stuff in quantum.conf.
21:17:27 <danwent> I think assuming that plugins can all read quantum.conf for non-plugin-specific config is reasonable.
21:17:35 <zhuadl> we'd better figure out all the items and then see where to put them, I think.
21:17:37 <SumitNaiksatam> i agree
21:17:37 <garyk> in fact the common config enables us to read multiple files - this is done by the plugin on the service
21:18:10 <gongys> requiring the plugin agent to read both quantum.conf and the plugin.ini file will complicate the install
21:18:14 <gongys> of the agent.
21:18:16 <salv-orlando> I agree. But I would pose the problem in terms of namespaces rather than one or multiple files. There's a common namespace every plugin and agent can access, and then there are plugin-specific and agent-specific namespaces.
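To make the quantum.conf + plugin.ini split being debated here concrete, the following is a minimal sketch only (illustrative file, section, and option names; not the actual Quantum configuration code): an agent reads the shared quantum.conf for the common rpc/message-queue options and then its own plugin file, with each file or section acting as a separate namespace so plugin keys cannot collide.

# Sketch only -- hypothetical option names, not Quantum's real config handling.
# Common rpc/notification options live once in quantum.conf; plugin-specific
# options live in a per-plugin ini; later files override earlier ones.
import configparser

def load_agent_config(common_file="quantum.conf", plugin_file="plugin.ini"):
    """Read the shared file first, then overlay the plugin-specific file."""
    cfg = configparser.ConfigParser()
    cfg.read([common_file, plugin_file])
    return cfg

cfg = load_agent_config()
# Common namespace, readable by every agent and plugin (hypothetical key):
rabbit_host = cfg.get("DEFAULT", "rabbit_host", fallback="localhost")
# Plugin-specific namespace, private to one plugin (hypothetical section/key):
bridge = cfg.get("OVS", "integration_bridge", fallback="br-int")
print(rabbit_host, bridge)

This is roughly the namespaces-to-files mapping the discussion converges on: one common namespace plus per-plugin and per-agent namespaces, realized as separate files.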
21:18:35 <salv-orlando> If we then map namespaces to files, then that's a viable solution
21:18:45 <garyk> why should it complicate things?
21:18:46 <danwent> salv-orlando: ok, that makes sense.
21:18:52 <PotHix> Namespaces are a good idea IMHO
21:19:41 <gongys> garyk: We will have to say "copy quantum.conf and plugin.ini to the xx dir" instead of the current guide, which just requires copying the plugin.ini file.
21:20:03 <salv-orlando> PotHix: I have a suspicion our current configuration infrastructure does not handle namespaces, but that's a technical issue we can handle offline.
21:20:03 <danwent> packaging should take care of that, no?
21:20:21 <nati_ueno> IMO, table names should use namespaces also
21:20:25 <danwent> ok, we seem to have stumbled onto a minefield here :)
21:20:48 <garyk> can someone please elaborate on the namespacing?
21:20:53 <salv-orlando> that's the moment when we typically decide to go back to the ML, but at least let's agree on a direction
21:21:00 <danwent> garyk: I consider you master of all things config, due to the work here. can you sum this up and express your preference?
21:21:09 <PotHix> agree
21:21:15 <salv-orlando> garyk: I apologise, I should not have mentioned that word.
21:21:23 <rkukura> By "namespaces" do we mean the "[DEFAULT]" and "[QUOTAS]" section headers in the current config file?
21:21:29 <danwent> if we still have disagreement, let's agree about what we disagree on, and take it offline.
21:22:02 <zhuadl> agree :-)
21:22:12 <garyk> danwent: i can write a mail tomorrow and hopefully we can debate the issue
21:22:25 <danwent> garyk: ok, yeah, forgot how it is for you.
21:22:37 <salv-orlando> exactly. on the namespace thing, let's say that from my point of view each file is a distinct namespace and [default] or [quotas] are subsections of a given namespace. So multiple files will work in this model
21:22:39 <danwent> if you feel you haven't had a chance to make your comments on config files known, please ping garyk
21:22:59 <danwent> (via email, is probably best)
21:23:13 <danwent> Ok, garyk, so config files were one of the big issues. what was the other?
21:23:30 <garyk> salv-orlando: ok, that makes sense. one limitation is enforcing it
21:23:40 <garyk> danwent: item 2 is what exactly should be notified.
21:23:52 <salv-orlando> garyk: that's what I regard as the "technical" problem to tackle offline
21:23:57 <danwent> garyk: you mean whether the data can be passed in the notification?
21:24:07 <rkukura> Then config file names should be orthogonal to the section/variable naming, or else we are locking in a very specific implementation.
21:24:19 <garyk> danwent: i just need a few seconds to write...
21:24:30 <gongys> the current notification events are create/update/delete of net/subnet/port.
21:24:42 <danwent> garyk: k, go ahead
21:25:17 <garyk> danwent, gongys: these are notifications for billing etc. they should also include failures in the operations - for example a port not defined
21:25:52 <danwent> garyk: ah, ok, now i understand what you're getting at
21:25:55 <garyk> danwent: these are not notifications to notify an agent that it should do something. does this make sense?
21:26:19 <garyk> danwent: we are mixing different features.
21:26:39 <zhuadl> these are a kind of event?
21:26:45 <danwent> garyk: well, i'm still not convinced, but not enough that it's worth slowing things down for F-3, so i'm putting my concerns aside.
21:27:20 <gongys> garyk: we have start/end for each event.
21:27:25 <danwent> garyk: on notifications for failures, I think it depends on what your goals are for notifications.
21:27:40 <garyk> danwent: it should not slow things down at all. i am cool with gongys's changes. just the minor issue of the error flows that need to be treated.
21:27:54 <garyk> gongys: what if there is a start and no end? how do you explain this?
21:28:10 <gongys> so "port not defined" can be expressed by no end being emitted.
21:29:16 <gongys> Our log.debug is used to log errors or exceptions.
21:29:21 <garyk> gongys: i find it odd, if someone is writing a monitoring application, how it will treat cases when a create event has taken place and then no completion.
21:29:45 <danwent> garyk: i think the big question is whether this notification framework is intended for external monitoring/troubleshooting
21:30:00 <gongys> a monitor should care about end more than begin.
21:30:09 <danwent> if so, i agree that errors probably should be included. If the initial version is targeted more at things like billing, perhaps not.
21:30:11 <garyk> danwent: i would assume billing
21:30:12 <nati_uen_> Notification is for external monitoring/troubleshooting in Nova
21:30:16 <nati_uen_> So quantum should be the same
21:30:34 <danwent> nati_uen_: do we know how nova notifications handle API errors?
21:30:37 <garyk> if it is troubleshooting then it must include all paths
21:30:39 <nati_uen_> In nova errors are also included. It is configurable by a flag value
21:30:47 <nati_uen_> Yes I was working on that
21:31:14 <nati_uen_> Nova's exception class is doing that
21:31:18 <danwent> nati_uen_: good to know.
21:31:47 <shivh> Not sure what the start and end of an event are. Events are instantaneous, they don't have a start and end. Tasks have a start and end.
21:31:56 <nati_uen_> See https://github.com/openstack/nova/blob/master/nova/exception.py#L116
21:32:02 <danwent> gongys: are you opposed to adding the notification capability in line with what nova does?
21:32:14 <danwent> (i'll completely admit I have no familiarity with what nova does)
21:32:26 <danwent> or perhaps someone else could work on it as a follow-up commit?
21:32:58 <gongys> not very. I am just thinking we already have LOG.debug, so why deal with troubleshooting by notification?
21:33:35 <zhuadl> me too
21:33:39 <nati_uen_> troubleshooting could be done automatically
21:33:58 <danwent> gongys: i had the same opinion to start. I think the difference is whether it is admin monitoring (in which case log files are enough), vs. tenant application monitoring.
21:34:00 <nati_uen_> But the debug log has no JSON structure.
21:34:13 <danwent> nati_uen_: that's a good point as well.
21:34:53 <gongys> anyway, not all of openstack has agreed to use notifications for troubleshooting.
21:35:11 <gongys> to play the same role as LOG.debug()
21:35:14 <danwent> ok, so gongys, up to you as to whether you want to put it in the initial patch. sounds like others would be happy to extend your patch once it's merged.
21:35:35 <danwent> does that seem like a reasonable compromise? or are people fundamentally opposed to using notifications for failures a la nova?
21:35:40 <nati_uen_> LOG.debug is for development. We will disable it when we run real operations
21:35:56 <garyk> nati_uen_: i agree
21:35:57 <gongys> Yes. I agree. I will push my initial patch.
21:36:03 <danwent> ok, sounds good.
21:36:11 <danwent> garyk: any other items you feel are blockers in this area?
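For readers following along, here is a minimal sketch of the start/end/error pattern discussed above (hypothetical event names and helpers; not the actual Quantum or Nova notifier API): each API operation emits a structured .start event, then .end on success or .error on failure, so a billing or monitoring consumer never sees a start without a matching completion, and errors carry a JSON payload rather than only a LOG.debug line.

# Sketch only -- illustrative helper, not Quantum's or Nova's real notifier.
import json
import traceback

def notify(event_type, payload):
    # Stand-in for publishing the message onto the notification queue.
    print(json.dumps({"event_type": event_type, "payload": payload}))

def notified(operation):
    """Wrap an API operation with .start / .end / .error notifications."""
    def wrap(func):
        def inner(context, **kwargs):
            notify(operation + ".start", kwargs)
            try:
                result = func(context, **kwargs)
            except Exception as exc:
                notify(operation + ".error",
                       {"request": kwargs, "error": str(exc),
                        "trace": traceback.format_exc()})
                raise
            notify(operation + ".end", result)
            return result
        return inner
    return wrap

@notified("network.create")
def create_network(context, **network):
    # A real plugin would create the network here; this stub just echoes it.
    return dict(network, id="fake-uuid")

# Example: create_network(None, name="net1") emits network.create.start
# followed by network.create.end; a failing call would emit .error instead.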
21:36:12 <garyk> gongys: +1
21:36:22 <nati_uen_> gongys: +1
21:36:25 <danwent> we're running a bit low on time, but I want to make sure we clear things out.
21:36:37 <garyk> danwent: only the summer heat :).
21:36:59 <danwent> hehe… no kidding. I stay at the office b/c it has AC.
21:37:24 <danwent> ok, gongys, i believe you wanted to confirm that we are ok with names being non-unique and optional.
21:37:33 <gongys> yes.
21:37:48 <danwent> I think garyk has waived his concerns on this front
21:37:52 <danwent> but I wanted to confirm
21:38:13 <danwent> garyk: ?
21:38:45 <danwent> (maybe he fell asleep.. sounds like agreement to me! j/k)
21:38:55 <garyk> danwent: i tried to follow all of the mails on the list. i am ok with whatever is decided
21:38:57 <zhuadl> hehe:D
21:39:12 <danwent> ok, thanks. gongys, i think you're cleared for take-off on that.
21:39:32 <danwent> salv-orlando: did you want to comment on terminology around the "public networks" stuff?
21:39:38 <garyk> i think that the plane has already landed - he just needs to park
21:39:39 <danwent> that's the last roadblock i wanted to clear today
21:39:48 <gongys> and what about the 'quantum net-create' command if we make the network name optional too?
21:39:50 <garyk> danwent: rpc
21:39:50 <salv-orlando> yes
21:40:08 <garyk> gongys: it was a compliment - you have already done the work. :)
21:40:11 <danwent> sorry salv-orlando, let's pause and wrap up gongys's question.
21:40:29 <gongys> Do we remove the required argument too?
21:40:32 <danwent> gongys: yes, I think consistency across all entities makes sense.
21:40:55 <danwent> garyk: ok, let's circle back on rpc after salv-orlando talks about public networks.
21:40:56 <shivh> Is name optional when creating a VM?
21:41:09 <gongys> ok, I will allow 'quantum net-create' to create a network without any argument input.
21:41:26 <garyk> ok, sorry
21:41:28 <danwent> shivh: not sure if it's optional; it's definitely not unique.
21:41:51 <shivh> If we want consistency then it should match VM and Volume creation?
21:42:12 <danwent> shivh: are names mandatory there?
21:42:25 <danwent> I don't feel really strongly on mandatory vs. non-mandatory.
21:42:52 <danwent> i think the point was just that once a name is not unique, it's really just a label for display purposes, and thus if there's no need for a display name, it should be optional.
21:42:57 <salv-orlando> I have a corollary question on client-side behavior for the name parameter (bugu979527), but will be happy to discuss in open discussion time.
21:43:12 <danwent> salv-orlando: yes please, as we're already running late
21:43:21 <salv-orlando> I meant bug 979527 - meetbot do your job :)
21:43:22 <uvirtbot> Launchpad bug 979527 in python-quantumclient "quantum-client does not support network lookup by label" [Medium,Confirmed] https://launchpad.net/bugs/979527
21:43:36 <salv-orlando> ok, moving forward
21:43:46 <danwent> salv-orlando: want to comment on the public networks stuff?
21:43:51 <salv-orlando> the terminology is probably the last blocker before I take the WIP out of the branch, which I believe contains a solution for addressing the attribute-authorization issues raised by rkukura. So I invite you to look at that branch.
21:43:54 <shivh> I'll take a back seat on that unique/optional in network.
21:44:22 <danwent> shivh: ok :) please ping gongys, and if you can convince him, you win :)
21:44:23 <salv-orlando> The concern we received on the proposed name, public, is that it is unclear what "public" refers to
21:44:42 <salv-orlando> it might be confused with "the Internet" or "public access"
21:44:51 <danwent> yeah, i think that's valid
21:44:56 <rkukura> salv-orlando: I'll follow up on the list to clarify whether your mechanism can do what provider-network needs for authorization. If so, I'll use it, and maybe we can merge what I've got for now until yours is in.
21:45:13 <shivh> I want to clarify, when we talk about network type - what are we referring to?
21:45:27 <salv-orlando> similarly, I personally believe network-type can be confused as well with the particular technology that the network uses.
21:45:32 <shivh> public/pvt or stuff like L2/L3?
21:45:46 <danwent> shivh: https://blueprints.launchpad.net/quantum/+spec/quantum-v2-public-networks
21:45:52 <danwent> see the link to the full specification
21:46:08 <shivh> ok.
21:46:13 <salv-orlando> rkukura: sounds cool. Happy to discuss online. Surely, by merging we'll be able to address a larger set of use cases
21:46:18 <danwent> shivh: i agree though, a notion of "type" can be very ambiguous
21:46:27 <danwent> ok folks, 15 minutes left
21:46:58 <nati_uen_> I want to discuss the multi-host implementation also
21:47:08 <danwent> nati_uen_: please wait until open discussion for non-agenda items
21:47:17 <salv-orlando> So, I think we need an attribute whose name characterizes how it can be shared. And hence the name could really be "shared"
21:47:18 <nati_uen_> danwent: Sorry, got it
21:47:30 <salv-orlando> with "ALL" or "None" (default) as a starter
21:48:17 <salv-orlando> with a model that we can then extend during the grizzly release cycle to allow more fine-grained control. Think about different levels of authorization for owner, "group", and "world"
21:48:22 <danwent> salv-orlando: i think the approach of phrasing it as a type of sharing with limited options to start makes sense.
21:48:46 <danwent> "all" or "none" doesn't seem super intuitive to me, but we can bicker about exact terms later :)
21:49:23 <salv-orlando> In theory I would use a bit mask as you do for an inode!
21:49:53 <danwent> mmm… might be an interesting way to think about it.
21:50:38 <danwent> ok, so are people generally in line with the approach of not calling them public, but rather having a "sharing" attribute, with two values to start in Folsom: "all"/"global" and "none"/"private"?
21:50:48 <danwent> modulo the details of naming
21:50:58 <salv-orlando> the public network would be a 644 permission mask on the network object, as an example
21:51:14 <rkukura> danwent: +1
21:51:17 <nati_uen_> +1 for 644
21:51:18 <danwent> mmm… for unix folks, that is a fairly intuitive way of describing it.
21:51:34 <danwent> ok, it's official, i like it :)
21:51:42 <salv-orlando> 644?
21:51:45 <danwent> yeah
21:51:55 <nati_uen_> salv-orlando: +1 for the unix way :)
21:51:56 <gongys> My god, I have to count bits again.
21:51:56 <garyk> +1 - 777
21:52:02 <danwent> or at least, letting people think of it in a unix-like way
21:52:19 <danwent> probably not using actual numbers though :)
21:52:26 <salv-orlando> yeah, but the "group" bit mask does not yet make any sense, so let's leave it ignored at the moment
21:52:36 <danwent> ok, sounds good.
21:52:38 <nati_uen_> quantum net-chmod 0777 public-network
21:52:51 <danwent> ok, last topic, garyk, blockers on rpc?
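As a rough illustration of the inode-style idea floated above (a sketch only, with made-up helper names and bit values; Folsom itself would expose just the coarse shared-to-all vs. not-shared choice), an owner/group/world bitmask could later back the "shared" attribute along the lines of a unix mode:

# Sketch of the inode-style permission analogy -- hypothetical values only.
OWNER_RW, GROUP_R, WORLD_R = 0o600, 0o040, 0o004

def can_read(network_mode, is_owner, same_tenant_group=False):
    """Return True if a caller may see the network under the given mode."""
    if is_owner:
        return bool(network_mode & OWNER_RW)
    if same_tenant_group:
        return bool(network_mode & GROUP_R)
    return bool(network_mode & WORLD_R)

# A shared/"public" network would behave like mode 644, a private one like 600
# (the examples used in the discussion above).
print(can_read(0o644, is_owner=False))  # True: world-readable
print(can_read(0o600, is_owner=False))  # False: owner-only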
21:53:15 <garyk> danwent: nope - just syncing it with all of the agents.
21:53:44 <danwent> garyk: ok, reviewed it this morning. no big concerns though, other than the config stuff
21:54:04 <danwent> ok, 6 minutes left
21:54:06 <danwent> moving on to...
21:54:09 <garyk> danwent: cool. input from https://review.openstack.org/#/c/9591/ would be great
21:54:15 <danwent> #topic open-discussion
21:54:19 <garyk> that is 1km for us slow runners
21:54:44 <danwent> garyk: did you not get my comments this morning?
21:55:18 <danwent> ok, open discussion?
21:55:26 <shivh> Just a thought/idea: all this stuff looks like a file system. Type: directory/special file; Scope: permissions; name/inodes: name/UUID
21:55:37 <nati_uen_> May I discuss multi-host?
21:55:40 <garyk> danwent: i'll double check. had networking issues today
21:56:03 <gongys> let's talk about multi-host
21:56:04 <danwent> garyk: ok.
21:56:06 <shivh> sorry, lost the ability to transmit for some time.. (will take up my point on the ML)
21:56:08 <danwent> nati_uen_: go ahead
21:56:16 <nati_uen_> https://blueprints.launchpad.net/quantum/+spec/quantum-multihost-dhcp
21:56:19 <danwent> shivh: ok. logs of the meeting are posted as well.
21:56:24 <nati_uen_> I'm thinking about how to design it
21:56:47 <nati_uen_> It needs a scheduling function such as a quantum-scheduler
21:57:15 <danwent> nati_uen_: this sounds like a complex question. can you ask it on the ML?
21:57:15 <nati_uen_> But it looks big for F-3, so I wrote another design
21:57:22 <nati_uen_> I got it
21:57:38 <salv-orlando> shivh: I think you'll find in the logs that the consensus is pretty much along the lines of your idea
21:57:51 <danwent> nati_uen_: sorry, but we're running out of time.
21:58:09 <nati_uen_> danwent: NP :)
21:58:12 <danwent> one last comment I wanted to make was that, in case you haven't heard, vmware announced plans to acquire nicira today.
21:58:29 <danwent> vmware actually mentioned openstack as one of the reasons they were excited about working with nicira
21:58:38 <nati_uen_> Congrats! You can buy a Tesla also
21:58:48 <edgarmagana> dan: plans to acquire? I think that was a done deal! :-)
21:58:52 <danwent> so I think things with Nicira's contributions to Quantum (and OVS) should stay on track
21:58:54 <PotHix> Me too
21:59:09 <gongys> Good news.
21:59:12 <danwent> if any of you, or others within your company, have concerns, I'm happy to chat with them.
21:59:16 <garyk> danwent, salv-orlando: congratulations
21:59:22 <danwent> thanks guys.
21:59:23 <s0mik> Congratulations Team Quantum on being at the strategic angle for the future of networking!
21:59:33 <danwent> edgarmagana: it's like buying a house… it's never "done" :P
21:59:39 <edgarmagana> yeah.. congratulations folks!
21:59:39 <zhuadl> congrats!
21:59:45 <PotHix> Congrats! :)
21:59:47 <mestery> edgarmagana: :)
21:59:53 <danwent> anyway, wanted to just mention that as a few people have pinged me with concerns
22:00:03 <danwent> ok, anything else people wanted to bring up?
22:00:13 <mestery> danwent: We know where you're at if we have concerns. :)
22:00:22 <edgarmagana> drinks on Nicira next summit... oops sorry.. vmware!
22:00:25 <danwent> hehe… all too true.
22:00:32 <danwent> edgarmagana: yeah… that will take some time getting used to.
22:00:40 <danwent> ok, thanks folks. see you on gerrit!
22:00:40 <salv-orlando> mestery: sounds threatening, good thing I'm not in Palo Alto :)
22:00:44 <zhuadl> hehe
22:00:46 <danwent> #endmeeting