21:01:04 <russellb> #startmeeting nova
21:01:05 <openstack> Meeting started Thu May 16 21:01:04 2013 UTC.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:08 <openstack> The meeting name has been set to 'nova'
21:01:12 <mikal> Heya!
21:01:13 <russellb> Well hello, everyone!
21:01:16 <russellb> who's around to chat?
21:01:17 <hartsocks> hey
21:01:19 <comstud> o/
21:01:20 <alaski> o/
21:01:20 <n0ano> o/
21:01:21 <beagles> hi
21:01:30 <jog0> o/
21:01:30 <devananda> \o
21:01:36 <dripton> hi
21:01:46 <russellb> throw your ascii hands in the air like you just don't care
21:01:47 <cyeoh> hi
21:01:58 <mikal> \o/
21:02:00 <russellb> #link https://wiki.openstack.org/wiki/Meetings/Nova
21:02:10 <russellb> #topic blueprints
21:02:23 <russellb> let's take a look at havana-1 status
21:02:32 <russellb> #link https://launchpad.net/nova/+milestone/havana-1
21:02:37 <russellb> so time is flying by on this
21:02:43 <russellb> merge deadline is only 1.5 weeks away
21:02:58 <russellb> which means that ideally anything targeted for havana-1 should be up for review in about ... 0.5 weeks
21:03:04 <mikal> Ugh
21:03:12 <cburgess> here
21:03:15 <comstud> Who stole our time?
21:03:18 <russellb> to give time for the review process
21:03:32 <russellb> comstud: ikr?
21:03:43 <russellb> so please take a look at this list and 1) ensure that the status is accurate
21:04:02 <russellb> 2) try to get this stuff up for review by early/mid next week
21:04:17 <russellb> 3) if not #2, let me know as soon as you think it's going to slip so we can update the blueprint to push it back for havana-2
21:04:35 <mikal> I will not be completing 6 bps in the next week. I shall move some to havana-2.
21:04:43 <russellb> mikal: great, thank you sir
21:04:56 <russellb> just need to keep this up to date so we have an accurate picture of the road to havana-1
21:05:19 <russellb> and for reference, the full havana list is here ...
21:05:21 <russellb> #link https://blueprints.launchpad.net/nova/havana
21:05:37 <comstud> is there an easy way to see which are targeted for -1?
21:05:43 <russellb> comstud: the first link
21:05:52 <russellb> https://launchpad.net/nova/+milestone/havana-1
21:06:05 <comstud> ty
21:06:26 <russellb> you can also see what needs code review there
21:06:31 <russellb> which is handy for prioritizing reviews
21:06:34 <senhuang> the link also has bug fixes that are targeted for H-1
21:06:51 <russellb> since ideally we can land the stuff targeted at havana-1 with some priority
21:07:01 <russellb> senhuang: good point, indeed it does
21:07:20 <russellb> any questions on the blueprints or havana-1 status stuff?
21:07:39 <comstud> mikal: I only see 1 assigned to you.. what are the other 5?
21:07:54 <comstud> (Looking at where I can help)
21:08:04 <mikal> comstud: Oh, I see. That number includes bugs apparently.
21:08:17 <mikal> 1 bp, 5 bugs.
21:08:25 <comstud> yeah it does
21:08:43 <comstud> cools :)
21:09:10 <russellb> k, also yell if you need help :)
21:09:12 <russellb> #topic bugs
21:09:29 <russellb> #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
21:09:47 <russellb> looks like our new bug count is rising quickly, so any triage is always helpful :-)
21:10:12 <russellb> speaking of which, a big opportunity for making a significant contribution to the nova project is someone to focus on bug triage to make sure we stay on top of it
21:10:27 <russellb> #help need bug triage, as usual
21:10:36 <cburgess> russellb: Do we have a guide for triage?
21:10:43 <russellb> cburgess: we surrrrre do!
21:10:54 <russellb> #link https://wiki.openstack.org/wiki/BugTriage
21:10:59 <cburgess> While I'm not promising anything, I'm happy to help with a guide
21:11:02 <cburgess> Cool.. bookmarked.
21:11:11 <russellb> yeah, it's something you can do with as little or as much time as you want
21:11:36 <devananda> fwiw, lifeless uncovered and filed a lot of bugs against the baremetal driver. they are, i believe, all triaged, but most are pretty serious and marked High priority.
21:11:57 <russellb> interesting
21:12:12 <devananda> I'm picking them off as solutions become apparent, but if anyone else doing a POC with baremetal wants to hack on them
21:12:22 <devananda> poke me or folks in #tripleo if you need tips
21:12:24 <mikal> Also, I think we're down for another bug day Wednesday next week?
21:12:26 <russellb> all tagged as baremetal presumably?
21:12:30 <russellb> mikal: sounds great to me
21:12:32 <devananda> russellb: yes, i believe so
21:12:37 <dripton> bug day!
21:12:37 <senhuang> yes
21:12:54 <russellb> #help could use help fixing bugs in baremetal: https://bugs.launchpad.net/nova/+bugs?field.tag=baremetal
21:13:35 <russellb> #note bug day next wednesday, May 22
21:13:40 <russellb> anything else on bugs?
21:14:08 <russellb> #topic sub-team reports
21:14:18 <russellb> ok so this time i'd first just like to know who'd like to give a report
21:14:21 <russellb> and then i'll call on each one
21:14:23 <russellb> so we don't step on each other
21:14:33 <russellb> devananda: I think an Ironic update would be good in this section each week
21:14:50 <devananda> russellb: ack
21:15:04 <hartsocks> I can report on VMwareAPI team having actually met. Next meeting coming up the 22nd.
21:15:20 <devananda> russellb: so then i'd like to give an update :)
21:15:22 <russellb> ok cool, anyone else who wants to report, just ping me and i'll put you in the queue
21:15:30 <russellb> devananda: you're up!
21:15:35 <devananda> ok!
21:15:36 <senhuang> i can report the scheduling subteam
21:15:42 <n0ano> scheduler - what if we give a meeting and nobody comes :-)  Due to various issues we kind of missed this week.
21:15:43 <devananda> baremetal -
21:16:01 <devananda> folks in #tripleo have been pounding on it. it mostly works
21:16:08 <devananda> on a whole rack
21:16:11 <senhuang> n0ano: sorry i missed the meeting due to webchat connection issue
21:16:13 <devananda> but doesn't recover from errors
21:16:21 <devananda> hence bugs ^ :)
21:16:24 <devananda> ironic -
21:16:36 <devananda> it's alive! there's code in gerrit and unit tests
21:16:49 <n0ano> senhuang, multiple people (including me) had connectivity issues, we hope to do better next week.
21:16:51 <devananda> and once I figure out how to tell jenkins, it'll have pep8 and stuff
21:17:12 <devananda> also, i'm not sure i can write it all by Havana,
21:17:22 <senhuang> n0ano: my issue was that i used the webchat client which was down on tuesday..
21:17:24 <devananda> so i'm hoping more people will start writing (and they're stepping forward this week)
21:17:33 <devananda> db - dripton asked for meetings to start again
21:17:37 <devananda> [eol]
21:17:40 <harlowja> as far as stuff that i'm doing, good progress :)
21:17:48 <harlowja> #link http://eavesdrop.openstack.org/meetings/state_management/2013/state_management.2013-05-16-20.00.html
21:18:05 <harlowja> hoping to target cinder for H2 with some cases, nova too if people can
21:18:10 <rmk> We need a libvirt subteam.  Don't know what the process is for forming one.
21:18:28 <mikal> rmk: I think you just start one!
21:18:31 <harlowja> the cinder people want to hopefully use the library for all workflows in cinder by end of H, we'll see :)
21:18:38 <mikal> rmk: I'll come to your meetings if they're at an ok time for me.
21:18:47 <rmk> Great.  Then I have a report.  Many brokens!  We must fix!
21:18:52 <cburgess> lol
21:19:01 <n0ano> rmk, all it takes is one committed person to find a spot and organize things on the mailing list.
21:19:57 <senhuang> For the instancegroup api extension blueprint, gary has submitted a set of patches for reviews.
21:20:33 <senhuang> https://review.openstack.org/#/c/28880/
21:20:43 <russellb> sorry, was gone for a few minutes, internet died and had to switch to tethering :(
21:20:45 <rmk> Joking aside, this is a major issue and we need to figure out what the appropriate direction is for a fix -- https://review.openstack.org/28503
21:21:26 <russellb> rmk: ok, can come back to it in open discussion in a bit
21:21:33 <rmk> OK, cool.
21:21:43 <russellb> senhuang: done with scheduling?
21:21:58 <senhuang> russellb: yep
21:22:01 <russellb> hartsocks: did you get to go (missed a few minutes)
21:22:36 <hartsocks> russellb: The VMwareAPI guys are just getting rolling. Not much to say yet.
21:22:47 <russellb> hartsocks: alright, thanks
21:22:52 <harlowja> russellb: crap, didn't see that you were gonna call in order, my fault
21:22:52 <russellb> but you have a weekly meeting going, that's good to see
21:22:58 <russellb> harlowja: did you already go?
21:23:40 <harlowja> ah, ya, i put some stuff on, my fault, didn't see the "we are ordering this" message
21:23:59 <russellb> it's ok
21:24:02 <hartsocks> I'll just leave this here if anybody is interested.
21:24:07 <hartsocks> #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI
21:24:11 <russellb> hartsocks: cool
21:24:15 <russellb> #topic open discussion
21:24:29 <russellb> mikal: any thoughts on the bug rmk posted a bit ago?
21:24:49 <comstud> should we discuss anything about these dynamic flavors?
21:24:51 <comstud> or just leave it on-list?
21:24:58 <mikal> russellb: /me looks
21:25:03 <russellb> comstud: we can, can be next topic
21:25:19 <comstud> #vote no
21:25:31 <russellb> no vote for you.
21:25:47 <rmk> So the problem here is that right now any pre-grizzly instances will fail to live block migrate on Grizzly.
21:25:50 <russellb> #link https://review.openstack.org/#/c/28503/
21:25:56 <mikal> rmk: I have a change out for this too
21:26:13 <rmk> mikal: I linked the change unless there's a new one?
21:26:19 <mikal> rmk: oh wait, that _is_ my change
21:26:25 <russellb> mikal: LaunchpadSync ate it
21:26:26 <mikal> rmk: LOL
21:26:37 <mikal> rmk: I was going to pick that one up again today
21:26:42 <russellb> mikal: awesome
21:26:44 <rmk> mikal: Yes.  It was abandoned, so I figured we should discuss what needs to be done to get it moving forward.
21:26:52 <rmk> OK works for me
21:27:01 <mikal> rmk: that was about me being super busy, not a lack of intent
21:27:01 <russellb> to be clear, my comment on there wasn't a -1 to the approach
21:27:11 <mikal> I am full of intent, just lacking in the action department
21:27:20 <russellb> just that you have to tweak the handling to not throw random exceptions if the data isn't there
21:27:21 <rmk> Just figured if there was an architectural concern about the approach it would be a good time to discuss
21:27:52 <russellb> mikal just needs minions
21:27:55 <mikal> I think its minor, I just need to get around to doing the tweak
21:28:08 <rmk> sounds good
21:28:22 <russellb> cool, and if for some reason you can't get to it, yell and i'll help find someone else to take it
21:28:28 <russellb> but sounds like it's not too much
21:28:46 <mikal> For sure
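For readers following along: below is a hypothetical sketch of the "don't throw random exceptions if the data isn't there" handling russellb describes above. The function and parameter names are illustrative only, not taken from the actual patch in review 28503; the underlying issue is that pre-Grizzly instances may lack the flavor data newer code expects to find embedded in system_metadata.

```python
def flavor_from_sys_meta(instance, db_lookup):
    """Read the embedded flavor, falling back for pre-Grizzly instances.

    `db_lookup` is a hypothetical callable that fetches the flavor by id
    from the database when the embedded copy is missing.
    """
    sys_meta = instance.get('system_metadata') or {}
    try:
        return dict(
            (key, int(sys_meta['instance_type_%s' % key]))
            for key in ('memory_mb', 'vcpus', 'root_gb'))
    except (KeyError, ValueError):
        # Older instances never had the flavor written into
        # system_metadata; look it up instead of letting a KeyError
        # escape and abort the live block migration.
        return db_lookup(instance['instance_type_id'])
```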
21:29:35 <russellb> k, so comstud suggested we talk dynamic flavors
21:29:38 <russellb> #link http://lists.openstack.org/pipermail/openstack-dev/2013-May/009055.html
21:29:42 <russellb> a thread is going on openstack-dev about this
21:29:47 <russellb> comstud: comments?
21:29:52 <harlowja> i can also suggest we talk about state-management stuff :)
21:30:12 <russellb> harlowja: that's what the state mgmt meeting is for :)
21:30:14 <comstud> well, i didn't really suggest it
21:30:19 <comstud> I asked if we *should* talk about it
21:30:19 <comstud> :)
21:30:28 <russellb> comstud: heh, ok..
21:30:37 <comstud> I dunno if there's enough people here to do it or not
21:30:41 <russellb> well.  take a look, review the proposal, and review dansmith's suggestion for a slightly different approach
21:30:45 <russellb> post comments to the thread if you have them
21:30:46 <comstud> but thought I'd bring it up if enough people had interest in discussing it
21:30:51 <comstud> nod
21:31:04 <comstud> Mainly I guess it's good to just bring attention to it here
21:31:06 <russellb> yeah, to go into much detail we need the most active people on the thread
21:31:06 <comstud> like you just did!
21:31:10 <russellb> yeah, that's good
21:31:19 <russellb> hard to catch every thread, so good to bring attention to important ones for nova
21:31:19 <comstud> [done]
21:31:21 <comstud> nod
21:31:54 <russellb> any other topics for open discussion?
21:32:23 <russellb> one other thread that's worth looking at is 'virt driver management'
21:32:45 <russellb> some ideas being thrown around on the future of how to support things that aren't really nova managing a hypervisor node directly
21:32:52 <russellb> more like connecting to existing virt management systems
21:33:26 <russellb> so, could impact vCenter, future hyper-v work, oVirt if that code gets submitted (was presented at summit), and others, i'm sure
21:33:38 <jog0> some food for thought: https://bugs.launchpad.net/nova/+bug/1178008 - comstud has a good comment on it.
21:33:40 <uvirtbot> Launchpad bug 1178008 in nova "publish_service_capabilities does a fanout to all nova-compute" [Undecided,Triaged]
21:33:53 <russellb> yes, i want that to die
21:34:04 <russellb> i pretty much want fanout to die
21:34:09 <jog0> I am hoping to attack this in the H2 timeframe
21:34:18 <jog0> but others are welcome to jump in
21:34:33 <russellb> that's great
21:34:52 <hartsocks> interesting problem
21:34:56 <russellb> we actually have fanout going both directions between compute <-> scheduler
21:35:13 <russellb> would be nice to remove both
21:35:30 <devananda> ++
21:35:47 <jog0> russellb: what fans out the other way?
21:35:55 <russellb> which way
21:36:10 <jog0> scheduler -> compute fanout
21:36:15 <russellb> scheduler does a fanout to say "HEY COMPUTES, GIMME YO STATE"
21:36:22 <russellb> when a scheduler node first starts up
21:36:22 <jog0> russellb: ahh
21:36:24 <comstud> on startup
21:36:24 <comstud> yeah
21:36:25 <jog0> oh right
21:36:31 <russellb> it's just once ... but still
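A rough sketch of the fanout pattern under discussion, written against the Grizzly/Havana-era rpc layer (nova.openstack.common.rpc); the method name in the message is illustrative, not the exact call in the tree.

```python
from nova.openstack.common import rpc


def request_capabilities(context):
    # A fanout cast is delivered to every consumer bound to the topic --
    # here, every nova-compute in the deployment -- which is why it
    # scales badly and why this data is moving to the database instead.
    rpc.fanout_cast(context, 'compute',
                    {'method': 'publish_service_capabilities',
                     'args': {}})
```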
21:36:44 <hartsocks> so… why not pub-sub?
21:37:06 <russellb> well, it is pub-sub, actually ...
21:37:20 <comstud> heh yeah, it is
21:37:25 <senhuang> it is hard-coded pub-sub
21:37:31 <russellb> pretty much :-)
21:37:50 <comstud> There's not much reason for these messages anymore..
21:37:58 <russellb> yeah that's good to know
21:37:58 <comstud> the scheduler gets most of it from the DB now
21:38:05 <comstud> and I think the remaining shit should move there too
21:38:09 <jog0> ++
21:38:10 <hartsocks> Okay. NOW I get it.
21:38:12 <senhuang> ++
21:38:24 <russellb> jog0: thanks for chasing this
21:38:31 <russellb> i think there's only one more use of fanout_cast in nova
21:38:35 <russellb> in network somewhere IIRC ...
21:38:42 <jog0> russellb: np
21:38:50 <russellb> update_dns()
21:39:06 <comstud> I do use fanout for cells broadcasts
21:39:20 <russellb> but i'm less concerned about fanout for nova-network, since it's being deprecated
21:39:36 <russellb> comstud: orly ... well in that case we expect a relatively small number of cells services
21:39:38 <russellb> so not a huge deal
21:39:46 <comstud> yes, it's not a problem
21:39:54 <comstud> or at least should not be
21:40:18 <comstud> i use it to broadcast capabilities and capacities..
21:40:27 <russellb> on a somewhat related note ... i think quantum agents (the things that run on compute nodes) consume notifications every time anything gets created/updated/deleted on the quantum server :-(
21:40:36 <russellb> which is basically fanout to every compute node ...
21:40:42 <comstud> but it's from very few nova-cells to very few nova-cells
21:40:48 * jog0 sighs
21:40:53 <russellb> comstud: yeah, so i think that's ok
21:40:58 <comstud> they come from the immediate child cells only.
21:40:59 <comstud> not every single cell.
21:41:03 <comstud> for instance.
21:41:09 <russellb> jog0: yeah ... at least for the open source plugins (linux bridge, openvswitch, AFAIK)
21:41:56 <jog0> russellb: I am not sure what bluehost did for quantum, but they said for nova they replaced the scheduler because fanouts from 16k compute nodes to single-threaded nova-schedulers broke them
21:42:11 <jog0> but I bet bluehost did something about the fanouts in quantum too
21:42:16 <russellb> jog0: they have large failure rates for the quantum messaging
21:42:25 <russellb> it's bad
21:42:29 <jog0> russellb: heh
21:42:53 <russellb> something to keep in mind for our nova-network -> quantum migration ... i think it's going to be a scale issue
21:43:01 <senhuang> jog0: they used mysql slaves for read requests
21:43:17 <russellb> comstud: so the only other problem with fanout is with the trusted-messaging work.
21:43:33 <russellb> comstud: when a message doesn't have a single recipient, you can't secure it quite the same
21:43:45 <jog0> senhuang: I think that was only part of the solution
21:43:56 <comstud> russellb: this is true
21:44:06 <russellb> comstud: but i'm sure we could update the cells stuff to message cells directly for that when we come to it
21:44:09 <comstud> but
21:44:14 <comstud> i think it's nearly the same problem..
21:44:20 <comstud> as a general topic consumer?
21:44:24 <senhuang> jog0: true. they also changed the queueing mechanism?
21:44:29 <comstud> or maybe it's not a problem?
21:44:49 <russellb> comstud: same problem
21:44:58 <russellb> "problem"
21:45:00 <russellb> limitation, really
21:45:03 <comstud> yeah
21:45:03 <comstud> ok
21:45:31 <russellb> what else can we rant about?  :-)
21:45:50 <cburgess> Thats a dangerous question.
21:46:03 <hartsocks> water is very wet...
21:46:14 <comstud> moist
21:46:20 <jog0> senhuang: here is their code https://github.com/JunPark/quantum/commits/bluehost/master
21:46:24 <russellb> gerrit was down for a few minutes today, i didn't know what to do with myself
21:46:43 <russellb> the mysql read slave thing is being worked on to go into havana-1
21:46:46 <russellb> yay for that.
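A minimal sketch of the read-slave idea mentioned above, assuming SQLAlchemy and hypothetical connection URLs; the actual nova work exposed this through configuration rather than application code like this.

```python
from sqlalchemy import create_engine

# Hypothetical connection URLs; real deployments configure these.
master = create_engine('mysql://nova@db-master/nova')
slave = create_engine('mysql://nova@db-slave/nova')


def get_engine(use_slave=False):
    # Route reads that can tolerate replication lag to the slave,
    # keeping writes and freshness-sensitive reads on the master.
    return slave if use_slave else master
```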
21:47:42 <senhuang> yay
21:47:59 <senhuang> jog0: nice. their work is indeed very interesting!
21:48:27 <russellb> geekinutah is from there, and has been hanging out in -nova, feel free to thank him for sharing :-)
21:49:09 <russellb> alright, thank you all for coming.  #openstack-nova is open all week for spontaneous discussion
21:49:15 <russellb> bye!
21:49:16 <russellb> #endmeeting