21:01:04 #startmeeting nova
21:01:05 Meeting started Thu May 16 21:01:04 2013 UTC. The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:08 The meeting name has been set to 'nova'
21:01:12 Heya!
21:01:13 Well hello, everyone!
21:01:16 who's around to chat?
21:01:17 hey
21:01:19 o/
21:01:20 o/
21:01:20 o/
21:01:21 hi
21:01:30 o/
21:01:30 \o
21:01:36 hi
21:01:46 throw your ascii hands in the air like you just don't care
21:01:47 hi
21:01:58 \o/
21:02:00 #link https://wiki.openstack.org/wiki/Meetings/Nova
21:02:10 #topic blueprints
21:02:23 let's take a look at havana-1 status
21:02:32 #link https://launchpad.net/nova/+milestone/havana-1
21:02:37 so time is flying by on this
21:02:43 merge deadline is only 1.5 weeks away
21:02:58 which means that ideally anything targeted for havana-1 should be up for review in about ... 0.5 weeks
21:03:04 Ugh
21:03:12 here
21:03:15 Whole stole our time?
21:03:17 who
21:03:18 to give time for the review process
21:03:32 comstud: ikr?
21:03:43 so please take a look at this list and 1) ensure that the status is accurate
21:04:02 2) try to get this stuff up for review by early/mid next week
21:04:17 3) if not #2, let me know as soon as you think it's going to slip so we can update the blueprint to push it back to havana-2
21:04:35 I will not be completing 6 bps in the next week. I shall move some to havana-2.
21:04:43 mikal: great, thank you sir
21:04:56 just need to keep this up to date so we have an accurate picture of the road to havana-1
21:05:19 and for reference, the full havana list is here ...
21:05:21 #link https://blueprints.launchpad.net/nova/havana
21:05:37 is there an easy way to see which are targeted for -1 ?
21:05:43 comstud: the first link
21:05:52 https://launchpad.net/nova/+milestone/havana-1
21:06:05 ty
21:06:26 you can also see what needs code review there
21:06:31 which is handy for prioritizing reviews
21:06:34 the link also has bug fixes which are targeted for H-1
21:06:51 since ideally we can land the stuff targeted at havana-1 with some priority
21:07:01 senhuang: good point, indeed it does
21:07:20 any questions on the blueprints or havana-1 status stuff?
21:07:39 mikal: I only see 1 assigned to you.. what are the other 5?
21:07:54 (Looking at where I can help)
21:08:04 comstud: Oh, I see. That number includes bugs apparently.
21:08:17 1 bp, 5 bugs.
21:08:25 yeah it does
21:08:43 cools :)
21:09:10 k, also yell if you need help :)
21:09:12 #topic bugs
21:09:29 #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
21:09:47 looks like our new bug count is rising quickly, so any triage is always helpful :-)
21:10:12 speaking of which, a big opportunity for making a significant contribution to the nova project is someone to focus on bug triage to make sure we stay on top of it
21:10:27 #help need bug triage, as usual
21:10:36 russellb: Do we have a guide for triage?
21:10:43 cburgess: we surrrrre do!
21:10:54 #link https://wiki.openstack.org/wiki/BugTriage
21:10:59 While I'm not promising anything, I'm happy to help with a guide
21:11:02 Cool.. bookmarked.
21:11:11 yeah, it's something you can do with as little or as much time as you want
21:11:36 fwiw, lifeless uncovered and filed a lot of bugs against the baremetal driver. they are, i believe, all triaged, but most are pretty serious and marked High priority.
21:11:57 interesting
21:12:12 I'm picking them off as solutions become apparent, but if anyone else doing a POC with baremetal wants to hack on them
21:12:22 poke me or folks in #tripleo if you need tips
21:12:24 Also, I think we're down for another bug day Wednesday next week?
21:12:26 all tagged as baremetal presumably?
21:12:30 mikal: sounds great to me
21:12:32 russellb: yes, i believe so
21:12:37 bug day!
21:12:37 yes
21:12:54 #help could use help fixing bugs in baremetal: https://bugs.launchpad.net/nova/+bugs?field.tag=baremetal
21:13:35 #note bug day next wednesday, May 22
21:13:40 anything else on bugs?
21:14:08 #topic sub-team reports
21:14:18 ok so this time i'd first just like to know who'd like to give a report
21:14:21 and then i'll call on each one
21:14:23 so we don't step on each other
21:14:33 devananda: I think an Ironic update would be good in this section each week
21:14:50 russellb: ack
21:15:04 I can report on the VMwareAPI team having actually met. Next meeting coming up the 22nd.
21:15:20 russellb: so then i'd like to give an update :)
21:15:22 ok cool, anyone else who wants to report, just ping me and i'll put you in the queue
21:15:30 devananda: you're up!
21:15:35 ok!
21:15:36 i can report on the scheduling subteam
21:15:42 scheduler - what if we give a meeting and nobody comes :-) Due to various issues we kind of missed this week.
21:15:43 baremetal -
21:16:01 folks in #tripleo have been pounding on it. it mostly works
21:16:08 on a whole rack
21:16:11 n0ano: sorry i missed the meeting due to a webchat connection issue
21:16:13 but doesn't recover from errors
21:16:21 hence bugs ^ :)
21:16:24 ironic -
21:16:36 it's alive! there's code in gerrit and unit tests
21:16:49 senhuang, multiple people (including me) had connectivity issues, we hope to do better next week.
21:16:51 and once I figure out how to tell jenkins, it'll have pep8 and stuff
21:17:12 also, i'm not sure i can write it all by Havana,
21:17:22 n0ano: my issue was that i used the webchat client which was down on tuesday..
21:17:24 so i'm hoping more people will start writing (and they're stepping forward this week)
21:17:33 db - dripton asked for meetings to start again
21:17:37 [eol]
21:17:40 as far as stuff that i'm doing, good progress :)
21:17:48 #link http://eavesdrop.openstack.org/meetings/state_management/2013/state_management.2013-05-16-20.00.html
21:18:05 hoping to target cinder for H2 with some cases, nova if people can too
21:18:10 We need a libvirt subteam. Don't know what the process is for forming one.
21:18:28 rmk: I think you just start one!
21:18:31 the cinder people want to hopefully use the library for all workflows in cinder by end of H, we'll see :)
21:18:38 rmk: I'll come to your meetings if they're at an ok time for me.
21:18:47 Great. Then I have a report. Many brokens! We must fix!
21:18:52 lol
21:19:01 rmk, all it takes is one committed person to find a spot and organize things on the mailing list.
21:19:57 For the instancegroup api extension blueprint, gary has submitted a set of patches for review.
21:20:33 https://review.openstack.org/#/c/28880/
21:20:43 sorry, was gone for a few minutes, internet died and had to switch to tethering :(
21:20:45 Joking aside, this is a major issue and we need to figure out what the appropriate direction is for a fix -- https://review.openstack.org/28503
21:21:26 rmk: ok, can come back to it in open discussion in a bit
21:21:33 OK, cool.
21:21:43 senhuang: done with scheduling?
21:21:58 russellb: yep
21:22:01 hartsocks: did you get to go (missed a few minutes)
21:22:36 russellb: The VMwareAPI guys are just getting rolling. Not much to say yet.
21:22:47 hartsocks: alright, thanks
21:22:52 russellb crap, didn't see that u were gonna call in order, my fault
21:22:52 but you have a weekly meeting going, that's good to see
21:22:58 harlowja: did you already go?
21:23:40 ah, ya, i put some stuff on, my fault, didn't see the "we are ordering this" message
21:23:59 it's ok
21:24:02 I'll just leave this here if anybody is interested.
21:24:07 #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI
21:24:11 hartsocks: cool
21:24:15 #topic open discussion
21:24:29 mikal: any thoughts on the bug rmk posted a bit ago?
21:24:49 should we discuss anything about these dynamic flavors?
21:24:51 or just leave it on-list?
21:24:58 russellb: /me looks
21:25:03 comstud: we can, can be next topic
21:25:19 #vote no
21:25:31 no vote for you.
21:25:47 So the problem here is that right now any pre-grizzly instances will fail to live block migrate on Grizzly.
21:25:50 #link https://review.openstack.org/#/c/28503/
21:25:56 rmk: I have a change out for this too
21:26:13 mikal: I linked the change unless there's a new one?
21:26:19 rmk: oh wait, that _is_ my change
21:26:25 mikal: LaunchpadSync ate it
21:26:26 rmk: LOL
21:26:37 rmk: I was going to pick that one up again today
21:26:42 mikal: awesome
21:26:44 mikal: Yes. It was abandoned, so I figured we should discuss what needs to be done to get it moving forward.
21:26:52 OK works for me
21:27:01 rmk: that was about me being super busy, not a lack of intent
21:27:01 to be clear, my comment on there wasn't a -1 to the approach
21:27:11 I am full of intent, just lacking in the action department
21:27:20 just that you have to tweak the handling to not throw random exceptions if the data isn't there
21:27:21 Just figured if there was an architectural concern about the approach it would be a good time to discuss
21:27:52 mikal just needs minions
21:27:55 I think it's minor, I just need to get around to doing the tweak
21:28:08 sounds good
21:28:22 cool, and if for some reason you can't get to it, yell and i'll help find someone else to take it
21:28:28 but sounds like it's not too much
21:28:46 For sure
21:29:35 k, so comstud suggested we talk dynamic flavors
21:29:38 #link http://lists.openstack.org/pipermail/openstack-dev/2013-May/009055.html
21:29:42 a thread is going on openstack-dev about this
21:29:47 comstud: comments?
21:29:52 i can also suggest we talk about state-management stuff :)
21:30:12 harlowja: that's what the state mgmt meeting is for :)
21:30:14 well, i didn't really suggest it
21:30:19 I asked if we *should* talk about it
21:30:19 :)
21:30:28 comstud: heh, ok..
21:30:37 I dunno if there's enough people here to do it or not
21:30:41 well, take a look, review the proposal, and review dansmith's suggestion for a slightly different approach
21:30:45 post comments to the thread if you have them
21:30:46 but thought I'd bring it up if enough people had interest in discussing it
21:30:51 nod
21:31:04 Mainly I guess it's good to just bring attention to it here
21:31:06 yeah, to go into much detail we need the most active people on the thread
21:31:06 like you just did!
21:31:10 yeah, that's good
21:31:19 hard to catch every thread, so good to bring attention to important ones for nova
21:31:19 [done]
21:31:21 nod
21:31:54 any other topics for open discussion?
21:32:23 one other thread that's worth looking at is 'virt driver management'
21:32:45 some ideas being thrown around on the future of how to support things that aren't really nova managing a hypervisor node directly
21:32:52 more like connecting to existing virt management systems
21:33:26 so, could impact vCenter, future hyper-v work, oVirt if that code gets submitted (was presented at summit), and others, i'm sure
21:33:38 some food for thought about: https://bugs.launchpad.net/nova/+bug/1178008 comstud has a good comment on it.
21:33:40 Launchpad bug 1178008 in nova "publish_service_capabilities does a fanout to all nova-compute" [Undecided,Triaged]
21:33:53 yes, i want that to die
21:34:04 i pretty much want fanout to die
21:34:09 I am hoping to attack this in the H2 timeframe
21:34:18 but welcome to others jumping in
21:34:33 that's great
21:34:52 interesting problem
21:34:56 we actually have fanout going both directions between compute <-> scheduler
21:35:13 would be nice to remove both
21:35:30 ++
21:35:47 russellb: what fans out the other way?
21:35:55 which way
21:36:10 scheduler -> compute fanout
21:36:15 scheduler does a fanout to say "HEY COMPUTES, GIMME YO STATE"
21:36:22 when a scheduler node first starts up
21:36:22 russellb: ahh
21:36:24 on startup
21:36:24 yeah
21:36:25 oh right
21:36:31 it's just once ... but still
21:36:44 so… why not pub-sub?
21:37:06 well, it is pub-sub, actually ...
21:37:20 heh yeah, it is
21:37:25 it is hard-coded pub-sub
21:37:31 pretty much :-)
21:37:50 There's not much reason for these messages anymore..
21:37:58 yeah that's good to know
21:37:58 the scheduler gets most of it from the DB now
21:38:05 and I think the remaining shit should move there too
21:38:09 ++
21:38:10 Okay. NOW I get it.
21:38:12 ++
21:38:24 jog0: thanks for chasing this
21:38:31 i think there's only one more use of fanout_cast in nova
21:38:35 in network somewhere IIRC ...
21:38:42 russellb: np
21:38:50 update_dns()
21:39:06 I do use fanout for cells broadcasts
21:39:20 but i'm less concerned about fanout for nova-network, since it's being deprecated
21:39:36 comstud: orly ... well in that case we expect a relatively small number of cells services
21:39:38 so not a huge deal
21:39:46 yes, it's not a problem
21:39:54 or at least should not be
21:40:18 i use it to broadcast capabilities and capacities..
21:40:27 on a somewhat related note ... i think quantum agents (the things that run on compute nodes) consume notifications every time anything gets created/updated/deleted on the quantum server :-(
21:40:36 which is basically fanout to every compute node ...
21:40:42 but it's from very few nova-cells to very few nova-cells
21:40:48 * jog0 sighs
21:40:53 comstud: yeah, so i think that's ok
21:40:58 they come from the immediate child cells only.
21:40:59 not every single cell.
21:41:03 for instance.
21:41:09 jog0: yeah ... at least for the open source plugins (linux bridge, openvswitch, AFAIK)
21:41:56 russellb: I am not sure what bluehost did for quantum, but they said for nova they replaced the scheduler because fanouts from 16k compute nodes to single-threaded nova-schedulers broke them
21:42:11 but I bet bluehost did something about the fanouts in quantum too
21:42:16 jog0: they have large failure rates for the quantum messaging
21:42:25 it's bad
21:42:29 russellb: heh
21:42:53 something to keep in mind for our nova-network -> quantum migration ... i think it's going to be a scale issue
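For context on the fanout pattern being discussed above (publish_service_capabilities broadcasting to every nova-compute, and the scheduler's start-up broadcast), here is a minimal, self-contained sketch that contrasts a fanout cast with a directed cast. This is not nova's actual RPC code; the broker class, method names, and message payloads are hypothetical and only illustrate why a fanout scales with the number of compute nodes while a directed message (or a database read) does not.

    # Hypothetical sketch of fanout vs. directed messaging; not nova's RPC API.
    from collections import defaultdict


    class FakeBroker:
        """Toy broker: topics map to lists of consumer callbacks."""

        def __init__(self):
            self._topics = defaultdict(list)

        def subscribe(self, topic, callback):
            self._topics[topic].append(callback)

        def fanout_cast(self, topic, msg):
            # Every consumer of the topic gets a copy: with thousands of
            # nova-compute services this is the O(N) broadcast at issue.
            for callback in self._topics[topic]:
                callback(msg)

        def cast(self, topic, host, msg):
            # Directed message: only the consumer for topic.host sees it.
            for callback in self._topics["%s.%s" % (topic, host)]:
                callback(msg)


    if __name__ == "__main__":
        broker = FakeBroker()
        for host in ("compute-1", "compute-2", "compute-3"):
            broker.subscribe("compute", lambda m, h=host: print(h, "got", m))
            broker.subscribe("compute.%s" % host,
                             lambda m, h=host: print(h, "got", m))

        # Scheduler start-up today: ask all computes to republish state.
        broker.fanout_cast("compute", {"method": "publish_service_capabilities"})

        # The alternative discussed above: read the same data from the DB,
        # or message one specific host only when needed.
        broker.cast("compute", "compute-2", {"method": "refresh_capabilities"})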
21:43:01 jog0: they used mysql slaves for read requests
21:43:17 comstud: so the only other problem with fanout is with the trusted-messaging work.
21:43:33 comstud: when a message doesn't have a single recipient, you can't secure it quite the same
21:43:45 senhuang: I think that was only part of the solution
21:43:56 russellb: this is true
21:44:06 comstud: but i'm sure we could update the cells stuff to message cells directly for that when we come to it
21:44:09 but
21:44:14 i think it's nearly the same problem..
21:44:20 as a general topic consumer ?
21:44:24 jog0: true. they also changed the queueing mechanism?
21:44:29 or maybe it's not a problem?
21:44:49 comstud: same problem
21:44:58 "problem"
21:45:00 limitation, really
21:45:03 yeah
21:45:03 ok
21:45:31 what else can we rant about? :-)
21:45:50 That's a dangerous question.
21:46:03 water is very wet...
21:46:14 moist
21:46:20 senhuang: here is their code https://github.com/JunPark/quantum/commits/bluehost/master
21:46:24 gerrit was down for a few minutes today, i didn't know what to do with myself
21:46:43 the mysql read slave thing is being worked on to go into havana-1
21:46:46 yay for that.
21:47:42 yay
21:47:59 jog0: nice. their work is indeed very interesting!
21:48:27 geekinutah is from there, and has been hanging out in -nova, feel free to thank him for sharing :-)
21:49:09 alright, thank you all for coming. #openstack-nova is open all week for spontaneous discussion
21:49:15 bye!
21:49:16 #endmeeting