14:01:01 <johnthetubaguy> #startmeeting nova
14:01:02 <openstack> Meeting started Thu Oct 15 14:01:01 2015 UTC and is due to finish in 60 minutes.  The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:06 <openstack> The meeting name has been set to 'nova'
14:01:14 <edleafe> o/
14:01:18 <markus_z> o/
14:01:20 <mriedem> o/
14:01:20 <ctrath> o/
14:01:30 <smcginnis> o/
14:01:37 <andrearosa> hello
14:01:40 <scottda> hi
14:01:45 <johnthetubaguy> hello all
14:01:47 <jlvillal> o/
14:01:49 <dansmith> o/
14:01:51 <PaulMurray> o/
14:01:53 <johnthetubaguy> #topic Liberty Status
14:01:57 <jichen> o/
14:02:02 <johnthetubaguy> so liberty is out the door now
14:02:12 <jroll> \o
14:02:14 <johnthetubaguy> #link http://lists.openstack.org/pipermail/openstack-dev/2015-October/076012.html
14:02:21 <bauzas> \o
14:02:25 <rlrossit> o/
14:02:33 <johnthetubaguy> mriedem made a good call to action on the ML about the release notes
14:02:51 <gibi> o/
14:02:56 <johnthetubaguy> please double check those if you added a commit that needs something in the release notes (UpgradeImpact, some DocImpact ones, etc.)
14:03:26 <johnthetubaguy> thanks for all the hard work pushing through the last-minute issues, and for all the testing folks did
14:03:45 <johnthetubaguy> thats the end of that agenda item in this meeting
14:03:52 <johnthetubaguy> #topic Mitaka Status
14:04:04 <johnthetubaguy> well on to the next release...
14:04:13 <johnthetubaguy> #link http://mitakadesignsummit.sched.org/type/Nova
14:04:20 <johnthetubaguy> so I have uploaded our draft schedule
14:04:59 <johnthetubaguy> there are a few question marks around the scheduler session, from memory, and I am keeping my eye out for specs, or collections of specs that might need a session
14:05:26 <bauzas> we should somehow find some time to ack/nack that session
14:05:26 <johnthetubaguy> #help let johnthetubaguy know if you have a summit clash ASAP
14:06:12 <johnthetubaguy> bauzas: agreed, I am going to try to meet with all moderators before the summit, been focusing on specs today though
14:06:26 <mriedem> there is a thing i'd like discussed at the summit but it was mostly alaski_out and i hashing out the details, so would be kind of hard for people to discuss in the friday meetup unless they read up first
14:06:26 <johnthetubaguy> #link https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking
14:06:59 <bauzas> so there is one conflict AFAICT
14:07:20 <johnthetubaguy> mriedem: so alaski_out is not there either right, which makes it tricky in person
14:07:22 <bauzas> https://mitakadesignsummit.sched.org/event/b2812a85cbca739432fe0b3d26f5fe69#.Vh-zFbzDia4
14:07:24 <mriedem> well, meh, i'd need to flesh out the idea a bit more clearly first probably
14:07:32 <mriedem> yeah, ignore
14:07:52 * jaypipes here...
14:07:55 <johnthetubaguy> bauzas: yeah, I need to reach out to armex about that
14:08:04 <bauzas> ack
14:08:07 <jaypipes> or armax :)
14:08:16 <johnthetubaguy> lol, yeah, either of them
14:08:52 * dims here
14:08:55 <johnthetubaguy> jaypipes: I need to catch up with you about the content of the scheduler summit session, given we also have those two on resource modeling
14:09:29 <johnthetubaguy> three sessions on the scheduler feels like too much, but anyways, lets do that in the channel after we finish up here
14:09:36 <bauzas> +1
14:09:51 <johnthetubaguy> so grouping specs
14:10:19 <johnthetubaguy> the idea of the above link is to try and group the specs, so its easier to have a bit more focus around the reviews
14:10:26 <johnthetubaguy> in the hope we get more completed, more quickly
14:10:38 <johnthetubaguy> so help adding your spec in there, into a nice group, would be awesome
14:10:44 <jaypipes> johnthetubaguy: sounds good.
14:10:48 <johnthetubaguy> thanks to those who already did all that
14:10:50 <johnthetubaguy> jaypipes: cools
14:11:10 <johnthetubaguy> so specless blueprints
14:11:15 <johnthetubaguy> any ones folks want to discuss
14:11:25 <sdague> johnthetubaguy: how do you want to handle service catalog overall work that is going to need to end up in nova from the specs process?
14:11:49 <mriedem> is that cross project?
14:12:00 <sdague> mriedem: it is
14:12:05 <mriedem> so openstack-specs?
14:12:23 <johnthetubaguy> so we can maybe do a spec-less blueprint that points to the openstack-spec
14:12:23 <sdague> right, there is an openstack-spec for this, which should be finalized shortly after summit
14:12:36 <mriedem> ah, ok
14:12:44 <sdague> just wanted to make sure we kept people engaged, and did what ever artifact tracking made sense
14:12:47 <johnthetubaguy> is there much specific to nova we need to get agreement?
14:12:48 <mriedem> then yeah let's not re-review the nova parts
14:12:49 <markus_z> johnthetubaguy: wrt specless blueprint, just to double-check if I get it, I have to bring the blueprint up in one of our meetings, is that right?
14:13:14 <johnthetubaguy> markus_z: ideally just add it into here: https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking
14:13:21 <mriedem> there is a section in ^
14:13:38 <johnthetubaguy> #info for a specless blueprint, please add them into here: https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking and we will review objections in the next nova meeting
14:13:55 <markus_z> ah, there it is. Will the whiteboard then get updated, or how will I be notified?
14:13:59 <sdague> johnthetubaguy: I'll make sure to write up the nova impacts in the spec after the summit, I think they are straightforward, probably not controversial, but it will be a heads-up for what's coming
14:14:18 <sdague> mostly the optional project_id and how we reference other projects
14:14:24 <johnthetubaguy> markus_z: normally I add a comment in the whiteboard, and/or approve the spec once we have decided, that or just ping me
14:14:36 <markus_z> johnthetubaguy: ok, thanks, got it
14:15:16 <johnthetubaguy> sdague: yeah, that seems cool, a short spec might be handy just to set the context when folks look back at the details
14:15:40 <johnthetubaguy> cool, so moving on for now
14:15:47 <johnthetubaguy> #topic Regular Reminders
14:16:01 <johnthetubaguy> #link https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
14:16:12 <johnthetubaguy> I have re-added the subteam area in that code review list
14:16:25 <johnthetubaguy> for subteam specs, just create a list inside the spec etherpad:
14:16:32 <johnthetubaguy> #link https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking
14:16:54 <johnthetubaguy> #info subteams please add back your focus lists for code reviews and spec reviews
14:17:01 <johnthetubaguy> #topic Bugs
14:17:10 <markus_z> #help: Triaging: Top 3 subteams with most "New" bugs: volumes: 10, libvirt: 9, vmware: 5
14:17:13 <johnthetubaguy> markus_z: mriedem: what do you folks want to cover today?
14:17:27 <markus_z> #info: Trivial bug fix: 9 reviews at https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
14:17:35 <johnthetubaguy> markus_z thats a good call out, thank you
14:17:42 <mriedem> the libvirt firewall driver bugs are still probably the most annoying in the gate
14:17:47 <bauzas> +++
14:18:02 <johnthetubaguy> mriedem: are those nova-net specific?
14:18:18 <mriedem> yeah
14:18:35 <johnthetubaguy> wondering if we deprecate that, if no one fixes it...
14:18:45 <johnthetubaguy> thats probably a dumb idea
14:18:49 <mriedem> https://bugs.launchpad.net/nova/+bug/1501558 https://bugs.launchpad.net/nova/+bug/1501366
14:18:49 <openstack> Launchpad bug 1501558 in OpenStack Compute (nova) "nova-net: libvirtError: Error while building firewall: Some rules could not be created for interface: Unable to update the kernel" [High,Confirmed]
14:18:49 <mriedem> libvirt?
14:18:50 <openstack> Launchpad bug 1501366 in OpenStack Compute (nova) "libvirtError: Error while building firewall: Some rules could not be created for interface" [High,Confirmed]
14:18:51 <mriedem> :P
14:19:14 <bauzas> that's a kernel problem, right?
14:19:18 <mriedem> from what i can tell it's not a code regression in nova
14:19:23 <mriedem> bauzas: that's what i think
14:19:30 <bauzas> mriedem: and I agree with you
14:19:41 <johnthetubaguy> ah, interesting
14:20:17 <johnthetubaguy> #help need help with gate bugs 1501558 and 1501366
14:20:17 <openstack> bug 1501558 in OpenStack Compute (nova) "nova-net: libvirtError: Error while building firewall: Some rules could not be created for interface: Unable to update the kernel" [High,Confirmed] https://launchpad.net/bugs/1501558
14:20:18 <openstack> bug 1501366 in OpenStack Compute (nova) "libvirtError: Error while building firewall: Some rules could not be created for interface" [High,Confirmed] https://launchpad.net/bugs/1501366
14:20:29 <johnthetubaguy> so looking at third party CI tests
14:20:39 <johnthetubaguy> I see Hyper-V is unhappy right now
14:20:54 <sdague> mriedem: at one point I considered adding iptables logging into devstack, that might be appropriate here
14:21:01 <johnthetubaguy> claudiub: do we know any more about the Hyper-V CI?
14:21:28 <claudiub> johnthetubaguy: there aren't any problems that I'm aware of
14:21:40 <claudiub> johnthetubaguy: there used to be a problem until yesterday
14:21:43 <johnthetubaguy> claudiub: I am looking at here: http://ci-watch.tintri.com/project?project=nova&time=7+days
14:21:46 <mriedem> sdague: some retries in nova might just resolve the issue
14:22:16 <sdague> mriedem: sure, it might just be good to figure out what's going on before we just fix it the max powers way
14:22:27 <mriedem> yeah, i know
14:22:38 <mriedem> apparently there is a thing we can use in libvirt super future version,
14:22:44 <mriedem> but we're nowhere near that on ubuntu
14:22:49 <mriedem> 1.2.11 i think?
14:23:00 <jaypipes> sdague: ++ to iptables logging in devstack.
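For context on the retry idea mriedem raises above, a minimal sketch of what such a retry could look like, assuming the failure surfaces as a libvirtError from the firewall driver's setup call; the method name, attempt count, and delay are illustrative assumptions, not the actual Nova fix.

```python
# Sketch only: retry a transient "Unable to update the kernel" style failure
# from bugs 1501558/1501366. setup_basic_filtering() stands in for whichever
# firewall-driver call actually raises; the values are illustrative.
import time

import libvirt


def apply_basic_filtering_with_retry(firewall_driver, instance, network_info,
                                     attempts=3, delay=1.0):
    """Retry firewall setup when libvirt reports a transient error."""
    for attempt in range(1, attempts + 1):
        try:
            firewall_driver.setup_basic_filtering(instance, network_info)
            return
        except libvirt.libvirtError:
            if attempt == attempts:
                raise
            # The iptables/ebtables contention is usually short-lived, so a
            # brief back-off before retrying is often enough.
            time.sleep(delay)
```

As sdague notes, iptables logging in devstack would still be needed to understand the underlying contention rather than just papering over it.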
14:23:06 <claudiub> well, for the last 2 days there was a problem with nova.virt.images, as it was importing something that did not exist on Windows, but that has been fixed since then
14:24:09 <johnthetubaguy> anyways, I just wanted to make sure about third party CIs: should we agree to not +W any patch that needs a third party test when it hasn't yet voted positively on that patchset?
14:24:13 <ociuhandu> johnthetubaguy: until yesterday we had a blocking bug, now we have seen some failures in a few tempest tests, we are looking into the possible causes
14:24:27 <mriedem> btw http://ci-watch.tintri.com/project?project=nova
14:24:30 <johnthetubaguy> ociuhandu: OK, thanks
14:25:07 <mriedem> looks like hyper-v is the only bad one there and that's a known issue with the resource import
14:25:11 <johnthetubaguy> garyk1: do you have news on the VMware CI for Nova, I am seeing lots of non-votes at the moment in the graphs: http://ci-watch.tintri.com/project?project=nova&time=24+hours
14:25:11 <mriedem> and that was reverted as claudiub said
14:25:32 <johnthetubaguy> yeah, anyways, just wanted to make sure we are all on the same page
14:25:43 <johnthetubaguy> I am thinking no approve until the third party CI has voted on a patch
14:25:53 <johnthetubaguy> I know we kinda said that before, but not sure we are always enforcing that
14:25:54 <garyk1> johnthetubaguy: at the moment we have a few problems with the CI
14:26:02 <mriedem> i forget the name of the openvz guys, but we should get virtuozzo compute ci in there
14:26:09 <garyk1> i think it is related to what broke the gate a few days ago. we are investigating
14:26:25 <johnthetubaguy> garyk1: OK, thanks, good to know you are digging
14:26:28 <johnthetubaguy> mriedem: +1
14:26:40 <johnthetubaguy> mriedem: there is a patch up to start working on an LXC CI
14:26:55 <mriedem> johnthetubaguy: lxc ci?
14:27:01 <johnthetubaguy> https://review.openstack.org/#/c/226565
14:27:13 <mriedem> umm
14:27:23 <mriedem> https://review.openstack.org/#/c/219448/
14:27:26 <johnthetubaguy> there are some folks working out what tests need to pass, and with what config, or some such
14:27:28 <mriedem> unless that guy has some magic
14:27:35 <mriedem> lxc + tempest + dsvm is a turd in the toilet
14:27:55 <mriedem> i'll comment on the project-config change
14:28:02 * markus_z giggles
14:28:15 <johnthetubaguy> mriedem: yeah, they did mention that, not sure if they got around some of those issues
14:28:31 <mriedem> whoa they are running cells?
14:28:36 <mriedem> geez, good luck
14:28:53 <johnthetubaguy> mriedem: yeah, that's perhaps a bit optimistic...
14:29:02 <johnthetubaguy> anyways, lets move on
14:29:04 <sdague> johnthetubaguy: anyway, it should be in experimental queue first so it's not burning nodes
14:29:09 <johnthetubaguy> sdague: +1
14:29:12 <sdague> added that review comment
14:29:15 <johnthetubaguy> sdague: thanks
14:29:54 <johnthetubaguy> anyways, there were no objections to the third party CI rule
14:30:32 <johnthetubaguy> #info reminder, please do not approve changes in a virt driver until the appropriate third party CI has voted positively on that patchset
14:30:54 <johnthetubaguy> #help if anyone fancies automating the above, please make that happen
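One possible shape for that automation, sketched against Gerrit's standard REST API on review.openstack.org; the CI account names, the path mapping, and the assumption that the CI casts a Verified vote (rather than only leaving a comment) are all illustrative, not a statement of how the real CIs report results.

```python
# Rough sketch: check whether a third-party CI has voted +1 on a change
# before it gets +W. CI names and driver paths below are placeholders.
import json

import requests

GERRIT_URL = "https://review.openstack.org"

# Hypothetical mapping of driver paths to the CI expected to vote on them.
REQUIRED_CI = {
    "nova/virt/hyperv/": "Hyper-V CI",
    "nova/virt/vmwareapi/": "VMware CI",
    "nova/virt/xenapi/": "XenServer CI",
}


def ci_has_positive_vote(change_number, ci_name):
    """Return True if the named account has a positive Verified vote."""
    resp = requests.get("%s/changes/%s/detail" % (GERRIT_URL, change_number))
    resp.raise_for_status()
    # Gerrit prefixes JSON responses with ")]}'" to defeat XSSI; drop it.
    body = json.loads(resp.text.split("\n", 1)[1])
    votes = body.get("labels", {}).get("Verified", {}).get("all", [])
    return any(v.get("name") == ci_name and v.get("value", 0) > 0
               for v in votes)


if __name__ == "__main__":
    # Hypothetical change number, purely for illustration.
    if not ci_has_positive_vote("123456", "Hyper-V CI"):
        print("Third-party CI has not voted +1 yet; hold the +W.")
```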
14:30:59 <johnthetubaguy> cools
14:31:13 <johnthetubaguy> #topic Open discussion
14:31:19 <johnthetubaguy> so a quick thing from me...
14:31:30 <johnthetubaguy> the nova midcycle
14:31:35 <johnthetubaguy> I am looking for venues
14:31:42 <PaulMurray> we offered Bristol
14:31:47 <johnthetubaguy> the offers I have are in the UK and europe
14:31:51 <johnthetubaguy> yeah
14:32:00 <bauzas> is UK not in Europe ? ;)
14:32:07 <andrearosa> lol
14:32:11 <johnthetubaguy> bauzas: we are having a vote on that :)
14:32:12 <PaulMurray> no its mainland
14:32:19 <jlvillal> Depends who you ask in the UK ;)
14:32:22 <bauzas> lol
14:32:29 <markus_z> jlvillal: +1 :D
14:32:32 <johnthetubaguy> jlvillal: yup
14:32:33 <johnthetubaguy> anyways
14:33:15 <johnthetubaguy> so anyways, I wanted to open this up to US-based offers, if there are any
14:33:38 <dansmith> johnthetubaguy: RAX has previously offered san antonio, right?
14:33:43 <dansmith> is that off the table now?
14:33:49 <johnthetubaguy> as, honestly, I am worried we will not get enough folks to a Europe offer
14:33:56 <sdague> we should also probably straw poll the core team about what venues people could get to
14:34:06 <edleafe> sdague: +1
14:34:16 <dansmith> yeah, I'm worried about travel to europe for a midcycle
14:34:20 <johnthetubaguy> dansmith: it's not, just thinking San Antonio just before the Austin summit is a bit odd
14:34:24 <dansmith> and I know other cores even said the same
14:34:31 <PaulMurray> johnthetubaguy, may be able to do an HP site in the US if pushed - need to check
14:34:38 <dansmith> johnthetubaguy: I don't think it matters, personally
14:34:49 <mriedem> HP in colorado has hosted a few midcycles for liberty
14:34:51 <mriedem> qa and cinder at least
14:35:02 <dansmith> yeah, colorado would be good for a lot of people
14:35:04 <jlvillal> Ironic once tried two midcycles, one in the US and the other in Europe.
14:35:07 <raildo> can we do a hangouts on air (or something like that) for those that can't travel for the midcycle? :)
14:35:08 <mriedem> not that i'm volunteering HP
14:35:13 <jlvillal> About a week apart
14:35:15 <edleafe> colorado in january can be very snowy
14:35:21 <johnthetubaguy> #action johnthetubaguy to do poll about mid-cycle location and date
14:35:26 <smcginnis> I proposed an international midcycle for Cinder but doesn't look like key folks could pull it off.
14:35:27 <markus_z> johnthetubaguy: A hangout thing would be nice
14:35:42 <dansmith> can we make intel do it in portland again? that worked out real nice :)
14:35:46 <bauzas> markus_z: there was one last midcycle
14:35:58 <markus_z> bauzas: How did it work out from your side?
14:36:01 <PaulMurray> Portland was a pain for us
14:36:08 <dansmith> PaulMurray: you hush
14:36:22 <PaulMurray> I know, my wife prefers bristol too
14:36:30 <bauzas> markus_z: pretty doable
14:37:22 <johnthetubaguy> anyways, I will do my action, I mean I will probably just send that out on the ML, but I would ask folks to take budget into consideration, not just preference
14:37:28 <sdague> whatever the option, it definitely needs someone on the ground organizing it.
14:37:45 <johnthetubaguy> there is the OSIC thing in San Antonio, so it might be good for young folks
14:37:55 <mriedem> young folks?
14:37:59 <sdague> osic?
14:38:03 <edleafe> mriedem: like you
14:38:06 <bauzas> johnthetubaguy: any idea about when it would happen ?
14:38:13 <bauzas> end of Jan I guess ?
14:38:18 <johnthetubaguy> its like an intel + rackspace thing, I think they have some new folks there
14:38:26 <johnthetubaguy> so timing wise, I was thinking end of jan
14:38:31 <mriedem> oh the new hire collaboration station
14:38:35 <markus_z> OSIC = Office of the Scottish Information Commissioner
14:38:38 <markus_z> ...
14:38:45 <johnthetubaguy> #link https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
14:38:57 <jlvillal> OpenStack Innovation Center
14:38:57 <johnthetubaguy> So we could do it just after Mitaka-2
14:39:02 <johnthetubaguy> jlvillal: thats the one
14:39:04 <bauzas> +1
14:39:16 <sdague> R-10 week seems like about right
14:39:18 <bauzas> (for after M2)
14:39:20 <sdague> it aligns with the final push
14:39:21 <johnthetubaguy> #link http://blog.rackspace.com/rackspace-and-intel-form-the-openstack-innovation-center/
14:39:39 <johnthetubaguy> OK, so lets aim for week 10 and after
14:39:55 <johnthetubaguy> I have a band contest booked in just before that week, so that works well for me personally
14:40:12 <johnthetubaguy> anyways, I think we have flogged that one now
14:40:16 <johnthetubaguy> any more topics for today?
14:40:19 <dansmith> johnthetubaguy: what would be the actual date then?
14:40:19 <bauzas> yup
14:40:26 <bauzas> nova meeting next week ?
14:40:38 <sdague> dansmith: R-10 is Jan 25-29
14:40:48 <dansmith> oh, I missed that R-10 I see
14:40:52 <johnthetubaguy> dansmith: good point, aim for Tuesday 26th till Thurs 28th
14:40:55 <dansmith> that's almost definitely going to screw me
14:41:08 <johnthetubaguy> dansmith: week after better?
14:41:12 <sdague> dansmith: isn't that almost always the case?
14:41:27 <gsilvis> I've got a topic
14:41:28 <dansmith> actually, no, that's fine now that I look
14:41:36 <dansmith> dog race is a week earlier this year I think
14:41:39 <dansmith> so I'm good
14:41:43 <dansmith> I'll even be showered
14:41:49 <PaulMurray> dog race?
14:41:59 <johnthetubaguy> PaulMurray: are you good to check on your end for that date? I will do checks on my end
14:42:08 <PaulMurray> yep
14:42:14 <johnthetubaguy> PaulMurray: thanks
14:42:38 <markus_z> johnthetubaguy: I've seen the "host-maintenance-mode" more often recently. Do we have a direction already? Like, deprecating it?
14:42:50 <dansmith> +20
14:42:54 <markus_z> Right now it is a XEN only thing
14:42:56 <johnthetubaguy> markus_z: I think we should deprecate it to stop the confusion it's causing
14:43:15 <mriedem> was it hyperv that had a spec for that?
14:43:25 <dansmith> markus_z: it's actually not completely xen-only, only because some drivers have copied it without knowing it was not the right thing to do
14:43:30 <dansmith> mriedem: yeah, I think so
14:43:30 <johnthetubaguy> I think there was, and we have code up for a libvirt thingy
14:43:49 <johnthetubaguy> so, I don't think it even did anything, if I remember correctly
14:43:51 <markus_z> mriedem: yes, hyperv wants to implement it
14:43:59 <johnthetubaguy> it changed an icon in the XenServer dashboard, I remember that much
14:44:28 <edleafe> gsilvis: you had something to ask?
14:44:38 <markus_z> dansmith: Really? It's already a few weeks since I looked at it...
14:44:57 <gsilvis> edleafe: indeed... I was waiting for a pause in the previous topic, but I'll just ask
14:45:04 <johnthetubaguy> #help can someone please deprecate host maintenance mode to stop the confusion
14:45:07 <gsilvis> I'm from the Massachusetts Open Cloud. Last summit, we talked to some of you about an idea for inter-OpenStack-deployment resource federation, and some of you told us to come up with a proof-of-concept for it
14:45:20 <dansmith> markus_z: yeah, pretty sure, but we can take it offline.. +1 for deprecation for sure
14:45:33 <gsilvis> we have one now, and we're wondering how to get some time to show it to people at the Tokyo summit
14:45:36 <markus_z> dansmith: ack
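For reference on the deprecation asked for in the #help above, a minimal sketch of what the warning itself might look like, assuming it lands via oslo.log's versionutils helper; the wording and the exact layer (API versus driver) are assumptions.

```python
# Sketch only: emit a deprecation warning for host maintenance mode, per the
# #help above. Where this lives and the final message are assumptions.
from oslo_log import log as logging
from oslo_log import versionutils

LOG = logging.getLogger(__name__)


def host_maintenance_mode(host, mode):
    versionutils.report_deprecated_feature(
        LOG,
        "Host maintenance mode is deprecated; it is only meaningfully "
        "implemented by the XenAPI driver.")
    # ... existing driver dispatch would continue here ...
```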
14:46:12 <johnthetubaguy> gsilvis: so we have closed for new summit submissions, so its tricky at this point
14:46:17 <markus_z> gsilvis: Do you have a pointer? ML thread or something like that?
14:46:34 <johnthetubaguy> gsilvis: yeah, I think describing the idea on the ML is a good starting point
14:46:44 <raildo> gsilvis: Are you using Keystone Federation on this feature, in some way?
14:46:47 <sdague> johnthetubaguy: we could block a chunk of time on the friday
14:46:53 <gsilvis> raildo: yes, extensively
14:47:05 <johnthetubaguy> sdague: yeah, we have some unconference slots as well
14:47:07 <gsilvis> johnthetubaguy: We announced that we were going to do it about 5 months ago... let me pull up that email
14:47:31 <sdague> johnthetubaguy: it's probably worth some of the meetup time to see what they did, and be able to fire more questions back and forth
14:48:15 <johnthetubaguy> sdague: true, I was thinking an unconference slot would also work for that, though I admit it probably needs more time
14:48:27 <gsilvis> at that point we weren't sure which projects this would affect, so it was on openstack-dev:  http://lists.openstack.org/pipermail/openstack-dev/2015-June/066445.html
14:48:32 <johnthetubaguy> gsilvis: it would be good to reply to that original email on the ML, and say how it went
14:48:37 <sdague> yeah, this feels like it needs a little more unbounded time
14:48:46 <gsilvis> johnthetubaguy: okay, that makes sense
14:48:59 <markus_z> #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/066445.html
14:49:05 <raildo> gsilvis: if you want some help with the keystone federation side, me and other guys on my team can help you :)
14:49:35 <raildo> gsilvis: we have been working on this since Kilo
14:50:09 <johnthetubaguy> gsilvis: so I think if you can tag it with [nova] in the subject on the ML, and add other projects in there, that will help us jump on that thread
14:50:26 <gsilvis> raildo: we worked with several people on the Keystone team in designing and trouble shooting it, actually---so big thanks there :)
14:50:46 <raildo> gsilvis: great :)
14:50:55 <bauzas> so, asking again, do we have a meeting next week, given that we would be close to the summit time with some people already away?
14:50:58 <johnthetubaguy> gsilvis: I am a bit worried if we talk about it in the Nova room, we will not have all the correct folks in the room, so we might want to find a slot where we can get a good collection of people all together, ML is a good way to push on some of that
14:51:09 <johnthetubaguy> bauzas: oh good question
14:51:20 <johnthetubaguy> so the meeting is crazy late for me, so I am unlikely to be there
14:51:41 <johnthetubaguy> does anyone really want a meeting this time next week?
14:51:48 <mriedem> not me :)
14:51:50 <gsilvis> johnthetubaguy: yeah... at the very least we'll need a couple of people who understand cinder and keystone too
14:51:54 <mriedem> i'll be at epcot or something equally crappy
14:51:56 <bauzas> it's planned 2100UTC IIRC
14:52:01 <johnthetubaguy> gsilvis: yeah, +1
14:52:05 <gsilvis> johnthetubaguy: I was hoping to get a talk, but I got waitlisted
14:52:16 <scottda> gsilvis: Keep me in the loop and I'll connect some Cinder people
14:52:25 <johnthetubaguy> gsilvis: honestly, getting a spot in the design summit is probably better for this
14:52:28 <johnthetubaguy> scottda: thanks
14:52:45 <gsilvis> scottda: thanks, will do
14:52:51 <johnthetubaguy> gsilvis: I would add [keystone] [cinder] [nova] on your email
14:53:02 <raildo> ++
14:53:09 <johnthetubaguy> cools
14:53:10 <gsilvis> johnthetubaguy: okay
14:53:11 <johnthetubaguy> so lets cancel next weeks meeting
14:53:21 <johnthetubaguy> we used all the time this week, so give it back next week
14:53:40 <johnthetubaguy> #info no meeting for the next two weeks due to the summit and summit travel
14:53:55 <johnthetubaguy> OK, thanks all
14:54:03 <edleafe> thanks johnthetubaguy
14:54:07 <bauzas> ++
14:54:09 <gsilvis> thanks!
14:54:25 <johnthetubaguy> #endmeeting