14:02:27 #startmeeting nova
14:02:28 can we discuss on openstack-dev next steps for few mins?
14:02:29 Meeting started Thu Jan 9 14:02:27 2014 UTC and is due to finish in 60 minutes. The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:33 The meeting name has been set to 'nova'
14:02:37 Hello everyone!
14:02:39 whee!
14:02:44 and welcome to the first 1400 UTC nova meeting
14:02:45 ah, nova is here :)
14:02:45 hi
14:02:50 hi
14:02:59 hi
14:03:02 hi
14:03:04 #topic general
14:03:04 hi
14:03:11 hi
14:03:12 so, we're alternating meeting times for now
14:03:18 hi
14:03:23 we'll see how it goes. as long as this is attended well, we'll keep it
14:03:24 and meeting channel.
14:03:29 yeah and channel, oops
14:03:32 i'll update the wiki about that
14:03:57 some general things first ...
14:04:02 nova meetup next month
14:04:05 #link https://wiki.openstack.org/wiki/Nova/IcehouseCycleMeetup
14:04:11 be sure to sign up so we know you're coming
14:04:15 if you're able
14:04:22 * johnthetubaguy is looking forward to it
14:04:36 my thinking on the schedule for that was to ... not really have a strict one
14:04:46 perhaps having unconference in the mornings, and open afternoons to just hack on things together
14:04:58 that could be working through tough bugs, design, whatever
14:05:06 that sounds quite good
14:05:09 I like that.
14:05:16 ok great
14:05:19 so we'll plan on that for now then
14:05:24 which isn't much of a plan
14:05:27 more of a plan to not plan too much
14:05:28 in unconference u can gather friends for the afternoon I guess
14:05:31 sure
14:05:40 maybe an etherpad to collect ideas?
14:05:45 that'd be good
14:05:53 if anyone creates one before me, just link it from that wiki page
14:05:57 yeah, i could move my notes to etherpad
14:06:05 mriedem: perfect
14:06:11 how many people are expected?
14:06:25 i'll check
14:07:04 20 registered so far
14:07:13 +1 hopefully soon
14:07:15 we have space for a lot more than that :)
14:07:34 should be fun
14:07:37 ok, another thing ....
14:07:44 please put january 20th on your schedule
14:07:46 gate bug fix day
14:07:54 thanks, and one last question: will there be any chance to connect from remote?
14:07:55 we really, really, really need to step it up on that front
14:08:10 garyk: maybe ... hadn't thought about it. we could try hangouts or something
14:08:12 garyk: +1
14:08:27 a hangout is a great idea
14:08:31 we can do that at least for the unconference part
14:08:34 the rest may not work well
14:08:46 well, should be on IRC for the rest I guess
14:08:50 true
14:08:54 Is this Foo Camp (or Bar Camp) style unconference?
14:09:04 well ...
14:09:09 i don't know about doing pitches and voting
14:09:13 other than maybe on etherpad
14:09:25 okay.
14:09:33 i'm not sure we'll have *that* much demand for the time that we have to formally rank stuff
14:09:40 just a guess though
14:09:55 i may just be a dictator on the schedule based on proposals
14:09:57 :-p
14:10:17 +1 we give you a veto on a session
14:10:25 heh
14:10:29 ok onward
14:10:31 #topic sub-teams
14:10:35 is Jan 20th appropriate for gate bug squash? is 23rd the icehouse-2?
14:10:43 anyone want to give a report on what a sub-group has been working on?
14:10:46 llu-laptop: we'll come back to that in a bit
14:10:56 * johnthetubaguy raises hand a little
14:11:05 johnthetubaguy: go for it
14:11:23 will you bring denis rodman as a side kick?
14:11:24 So we are working on getting XenServer images into the node pool
14:11:36 then getting Zuul testing stuff on XenServer
14:11:47 johnthetubaguy: that's great to hear, good progress?
14:11:48 slow progress, and infra-review welcome
14:12:07 yeah, its really just not getting reviewed at the moment, but it was over christmas I guess
14:12:07 confident that it's still good icehouse timeframe material?
14:12:12 matel is working on it
14:12:21 ok, yeah, holidays put a big delay on everything
14:12:27 nova queue is still recovering from holidays, too
14:12:30 russellb: it should be / better be
14:12:44 cool
14:12:56 at least tempest is running well on XenServer
14:13:00 excellent
14:13:02 Citrix did some good work on that
14:13:17 so, its more a wire up into Zuul exercise
14:13:31 and get an image in rackspace that has XenServer + Devstack domu
14:13:38 which is done, just needs more testing, etc
14:13:59 i think that's a cool approach btw, hooking your nodes into infra
14:14:04 instead of running your own zuul/jenkins/etc
14:14:30 yeah, fingers crossed
14:14:54 so, i saw you in a PCI discussion before this meeting :)
14:15:01 yeah, thats true
14:15:07 what's going on with that
14:15:12 I think we agreed on what the user requests could be
14:15:23 waiting to hear from ijw on confirming that though
14:15:29 I wrote up some stuff here:
14:15:36 https://wiki.openstack.org/wiki/Meetings/Passthrough#New_Proposal_for_admin_view
14:15:55 The final interactions between Nova and Neutron are less clear to be honest
14:16:13 basically we keep this:
14:16:17 nova boot --image some_image --flavor flavor_that_has_big_GPU_attached some_name
14:16:20 we add this:
14:16:20 are there people from neutron involved here?
14:16:42 nova boot --flavor m1.large --image --nic net-id=,nic-type=
14:16:57 garyk: possible, but not enough I don't think
14:17:19 (where slow is a virtual connection, fast is a PCI passthrough, and foobar is some other type of PCI passthrough)
14:17:22 i know that there are a lot of discussions that mellanox, intel, cisco etc are having
14:17:25 I kinda see it like nova volumes
14:17:27 I believe cisco guys are from neutron?
14:17:41 garyk: those are the folks I was talking about
14:17:55 just didn't see any non-vendor types
14:17:55 johnthetubaguy: thanks.
14:18:13 will the nic_type have a 'type of service' or be a set of k,v pairs?
14:18:23 thats all TBC
14:18:34 ok, thanks
14:18:41 just wanted to make sure the user didn't need to know about macvtap, or whatever it is
14:18:56 I see it like the user requesting volume types
14:18:59 glad to see some stuff written down, and thanks a bunch for helping with this
14:19:00 maybe it is something worth looking into placing it in a flavor - for example someone gets gold service
14:19:09 no worries, seems to be getting somewhere
14:19:21 they could have better connectivity, storage etc.
14:19:23 we have some guys from bull.net working on adding PCI passthrough into XenAPI too
14:19:45 the reason I don't like flavor is due to this:
14:19:52 nova boot --flavor m1.large --image --nic net-id= --nic net-id=,nic-type=fast --nic net-id=,nic-type=faster
14:19:58 that is, we introduce a notion of service levels
14:20:12 yeah, I flip flop on this
14:20:27 I like flavor being the main thing that gives you what you charge
14:20:39 but you can dynamically pick how many nics you want anyways
14:20:51 lets see how it works out anyways
14:21:03 one question on all this...
14:21:03 ok, any other sub-team reports?
14:21:06 * russellb waits
14:21:14 I have a short one.
14:21:16 I wonder about moving PCI alias to host aggregates
14:21:34 so its more dynamic, rather than in nova.conf
14:21:46 yeah, definitely prefer things to be API driven than config driven where we can
14:22:06 OK, I was thinking the same, just wanted to check
14:22:20 sometimes config is a much easier first revision
14:22:22 and that's OK
14:22:42 it could be: nova boot --flavor m1.large ... --service_type gold
14:22:42 then the scheduler could take this into account and assign a host that can provide that service
14:22:42 yeah, i can give a scheduler update
14:22:42 scheduler update if n0nao is not around
14:22:42 johnthetubaguy: yeah, host aggregates could be a soln
14:22:44 but makes sense to move to an API later
14:22:54 yeah, they have some stuff on config already
14:23:07 i am in favor of the api.
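[editor's note] To make the proposed `--nic net-id=,nic-type=` interface discussed above a bit more concrete, here is a rough sketch of how such an option string might be parsed. This is purely illustrative: the real parsing would live in python-novaclient, and the function name, error handling, and the default of `slow` for an omitted nic-type are assumptions drawn from the discussion, not the actual implementation.

```python
def parse_nic_option(value):
    """Parse a proposed --nic option string such as
    'net-id=1234,nic-type=fast' into a dict.

    Illustrative sketch only; not the actual nova CLI code.
    """
    spec = {}
    for part in value.split(","):
        key, _, val = part.partition("=")
        if not val:
            raise ValueError("expected key=value, got %r" % part)
        spec[key] = val
    # per the discussion, nic-type could default to a plain virtual
    # connection ("slow") when the user does not specify one
    spec.setdefault("nic-type", "slow")
    return spec

print(parse_nic_option("net-id=1234,nic-type=fast"))
```

A user asking for one plain and one passthrough NIC would then pass `--nic net-id=NET1 --nic net-id=NET2,nic-type=fast`, each value going through a parser like the one above.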
14:23:13 I am kinda trying to just agree what the config should be yet, so thats not a big deal just yet :)
14:23:22 * russellb nods
14:23:27 ok, hartsocks go for it
14:23:28 cool, sorry
14:23:31 I am all done...
14:23:33 all good :)
14:23:37 good stuff
14:24:05 Just wanted to say we're looking to get a few bug fixes in for our driver.
14:24:16 Those affect the CI stability on Minesweeper.
14:24:29 I've spammed the ML about priority order on these.
14:24:32 Also
14:24:40 http://162.209.83.206/logs/58598/7/
14:25:07 logs!
14:25:12 tada
14:25:25 http://162.209.83.206/logs/
14:25:26 excellent
14:25:47 There is an etherpad that has all of the vmware I2 issues - https://etherpad.openstack.org/p/vmware-subteam-icehouse-2
14:25:49 We're not confident in the infra's stability yet to do the -1 votes.
14:26:07 garyk: thank you, I posted those in the ML post earlier too.
14:26:27 hartsocks: are you planning to move to testing all nova changes at some point?
14:26:40 When we can plug the session management issues...
14:26:50 ok, so eventually, that's fine
14:27:02 ... and an inventory issue we have when adding ESX hosts.
14:27:13 is it worth targeting those bugs at I-2, or did you do that already?
14:27:18 Eventually will be sooner if we can get more reviews?
14:27:22 heh
14:27:32 the donkey and the carrot
14:27:34 like i said earlier, nova queue seems to be still recovering from the holidays
14:27:35 I'll double check all the listed bugs today.
14:27:45 :-)
14:27:51 I've said my bit.
14:27:51 minesweeper is doing the following: nova and neutron
14:27:51 here is a list - https://review.openstack.org/#/dashboard/9008
14:28:06 you guys do seem to be high on these lists ... http://russellbryant.net/openstack-stats/nova-openreviews.html
14:28:46 ok, garyk did you have scheduler notes?
14:28:50 yeah, kind of aware of that.
14:29:00 russellb: yes
14:29:44 1. the gantt tree for the forklift is ready but we are still waiting to do development there
14:29:55 the goal will be to cherry pick changes
14:30:31 great
14:30:51 for all those that are not aware, gantt is the forklift of the scheduler nova code to a separate tree
14:30:51 2. we spoke yesterday about moving the existing scheduler code to support objects (i am posting patches on this) so that the transition to an external scheduler may be easier
14:30:54 so priorities: 1) keep in sync with nova, 2) get it running as a replacement for nova-scheduler, with no new features
14:31:18 yeah that'd be nice
14:32:02 thats about it at the moment.
14:32:02 don dugger did great work with gantt and all the others in infra. kudos to them
14:32:11 or use object support to test (1) keep in sync with nova?
14:32:34 yeah. hopefully. we still have instance groups in development - pending API support and a few extra filters in the works
14:32:55 the idea of the object support is to remove the db access from the scheduler. this can hopefully leverage the objects that can work with different versions
14:33:20 direct use of the db API at least?
14:33:25 at the moment the changes we are making are in nova. hopefully these few patches may be approved and then cherry picked
14:33:28 not using objects to talk to db through conductor right?
14:34:09 * russellb assumes so
14:34:13 yes, to talk to the db via conductor.
14:34:24 well, we don't want gantt calling back to nova
14:34:39 so that won't work ...
14:34:39 garyk: can you point me to the patchset?
14:34:52 besides, i thought we were going with the no-db-scheduler blueprint for that
14:35:00 basically not using the db at all anymore
14:35:17 and just giving it a cache and sending it data to update the cache over time
14:35:39 that is what boris-42 is doing.
14:35:43 right
14:36:20 ok, onward for now
14:36:21 #topic bugs
14:36:38 193 new bugs
14:36:51 been staying roughly level lately
14:37:11 do we have any cool stats of the bugs yet?
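[editor's note] As an aside on the no-db-scheduler idea raised in the scheduler discussion above (the scheduler keeping an in-memory cache that compute nodes push state into, rather than reading the database), a heavily simplified sketch of that shape might look like the following. All names and methods here are illustrative assumptions, not the blueprint's actual code.

```python
class HostStateCache:
    """In-memory host state for a scheduler that never touches the DB.

    Compute nodes would periodically push their state here; scheduling
    decisions then read only from this cache. Illustrative sketch of
    the no-db-scheduler idea, not real nova code.
    """

    def __init__(self):
        self._hosts = {}

    def update(self, host, **state):
        # Called on each periodic report from a compute node.
        self._hosts.setdefault(host, {}).update(state)

    def hosts_with_free_ram(self, ram_mb):
        # A trivial stand-in for the real filter/weigher pipeline.
        return sorted(h for h, s in self._hosts.items()
                      if s.get("free_ram_mb", 0) >= ram_mb)


cache = HostStateCache()
cache.update("node1", free_ram_mb=2048)
cache.update("node2", free_ram_mb=512)
print(cache.hosts_with_free_ram(1024))  # only node1 has enough RAM
```

This also shows why gantt calling back into nova's DB is undesirable: with a push-based cache, the external scheduler needs no nova-side data access at all.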
14:37:11 well, last month or so anyway
14:37:12 http://webnumbr.com/untouched-nova-bugs
14:37:12 why? can you please clarify
14:37:12 those patch sets for the 'no db' support have drivers where the one example is a sql alchemy one (unless i am misunderstanding)
14:37:12 that is for the host data.
14:37:12 there is the instance data that needs to be updated
14:37:22 russellb, so we are cool with those patches
14:37:22 russellb, oookay
14:37:55 not sure if my connection is messed up or what, i just caught a bunch of stuff from garyk and ndipanov ... sorry, wasn't trying to ignore you guys
14:38:06 +1
14:38:24 so, also on bugs, https://launchpad.net/nova/+milestone/icehouse-2
14:38:31 lots of stuff targeted to icehouse-2
14:38:38 the most concerning parts are all of the critical ones
14:38:45 we're the biggest gate failure offender right now
14:38:49 and pitchforks are coming out
14:39:07 so we really need to put time into this
14:39:23 january 20 was proposed as a gate bug fix day by sdague
14:39:28 which is great, but we shouldn't wait until then
14:39:53 i'm trying to clear off my plate so i can start focusing on these bugs
14:40:11 anyone interested in working with me and others on these?
14:40:18 I am traveling next week I am afraid :(
14:40:36 i'll allow it :)
14:40:46 well if anyone has some time available, please talk to me
14:40:57 i'm going to try to start organizing a team around these bugs
14:41:04 i am happy to work on bugs
14:41:23 these gate bugs are starting to *massively* impact the gate for everyone
14:41:49 gate queue got over 100 yesterday, approaching over 24 hours for patches to go through
14:41:51 because of so many resets
14:41:56 65 patches deep right now
14:42:29 #link http://lists.openstack.org/pipermail/openstack-dev/2014-January/023785.html
14:42:35 garyk: great, i'll be in touch
14:42:58 I'll look into some too next week I hope
14:43:00 regarding the bugs - i wanted us to try and formalize the VM diagnostics and then try and log this information when there is a gate failure - https://wiki.openstack.org/wiki/Nova_VM_Diagnostics
14:43:29 well, any tempest failure
14:43:30 that is, looking at VM diagnostics may help isolate the cause of issues - at least let us know if it was related to the VM, network or storage
14:43:37 yeah, tempest failures
14:43:46 garyk: that's a good idea
14:43:59 the more data we can collect on failures the better, really
14:44:50 ok, next topic
14:44:51 dansmith had some ideas on debugging the nova-network related large ops one yesterday
14:44:55 we should talk to him today about that
14:44:58 mriedem: sounds good!
14:45:01 #topic blueprints
14:45:08 #link https://launchpad.net/nova/+milestone/icehouse-2
14:45:17 if you have a blueprint on that list, please make sure the status is accurate
14:45:25 or rather, "Delivery"
14:45:33 instance type -> flavor will move to i3
14:45:38 we're going to start deferring "Not Started" blueprints soon to icehouse-3
14:45:43 mriedem: ok go ahead and bump it
14:46:07 done
14:46:14 so another blueprint issue ... i've greatly appreciated the team effort on blueprint reviews, that has worked well
14:46:23 however, our idea for doing nova-core sponsors for blueprints has been a flop
14:46:25 nobody is doing it
14:46:29 and so virtually everything is Low
14:46:38 and that's not really much better than before
14:46:44 russellb, what if the status is inaccurate?
14:46:52 ndipanov: change it :-)
14:47:05 if it's yours you should be able to change it, if not, ask me (or someone on nova-drivers)
14:47:18 it takes 2 cores to move from Low, i see some bps with 1 sponsor - maybe move to 1 sponsor?
14:47:21 russellb, done thanks
14:47:37 1 +2 doesn't get the patches merged though
14:47:42 right
14:47:46 that's why we were requiring 2
14:47:54 yeah, I think 2 is correct
14:48:04 I sponsor the odd patch in the hope someone else joins me
14:48:04 we can either stick with this plan, and try to promote it better
14:48:15 or just punt the whole thing and start trying to sort them based on opinion
14:48:29 johnthetubaguy: yeah, i think you and dansmith have done some of that, not many others
14:48:46 how can we get core people interested in blueprints? there are ~25 bp's that are waiting review and are all low.
14:49:03 well, I think it reflects the current reality of the review rate though
14:49:09 so that means none are sponsored... i just feel that a very small percentage of these may even get review cycles
14:49:10 johnthetubaguy: perhaps
14:49:31 if the vast amount of Low is actually a reflection of our review bandwidth, then we have a whole different problem
14:49:36 i guess the question is ... how many Low blueprints land
14:49:36 maybe worth a quick reminder email to nova-core?
14:49:51 here's the icehouse-1 list https://launchpad.net/nova/+milestone/icehouse-1
14:50:08 johnthetubaguy: yeah, guess we can try that and see
14:50:17 because i still really like the theory behind it :)
14:50:36 better tooling could help too
14:50:43 if looking over these was a more natural part of dev workflow
14:50:46 but that's not a short term fix
14:51:03 yeah, the tools could be a lot better, auto adding people to reviews, etc
14:51:03 i wish the gd whiteboard textarea had timestamps
14:51:13 and audited who left the comment
14:51:14 yes the whiteboard sucks
14:51:18 i must be honest it is concerning. ~10 landed and some were just minor issues like configuration options
14:51:20 +1
14:51:33 +1 to the whiteboard
14:51:40 icehouse-1 isn't the best example ... icehouse-1 snuck up really fast
14:51:45 so it just happened to be what could land very fast
14:51:49 true, it was short
14:51:52 i1 was also summit time
14:51:57 havana-2?
14:51:57 right
14:52:09 h2 didn't have the new model
14:52:13 well havana wasn't using this approach, yeah
14:52:15 and i-2 had xmas and new years. i guess that also has a part. but we are a couple of weeks away and there is a meetup in the middle
14:52:22 that was prioritized based on my opinion largely :-)
14:52:34 indeed, just curious on general throughput
14:52:40 personally, my excuse is we had a baby.
14:52:44 :-)
14:52:58 hartsocks: heh, that'll be me for Juno
14:53:00 hartsocks: plan 9 months ahead next time
14:53:02 life happens
14:53:10 mriedem: lol
14:53:14 sorry honey....
14:53:38 johnthetubaguy: unfortunately we can't see the h2 list now ...
14:53:50 johnthetubaguy: after the final release everything gets moved to the big havana list
14:53:55 yeah, didn't work for me either, old reassignment thing
14:53:59 i think to get eyes on blueprints, and sponsors, people need to show up to the meeting and bring them up
14:54:07 heh
14:54:11 well on that note ...
14:54:11 the new meeting time should help that
14:54:14 #topic open discussion
14:54:22 open discussion is a good time to ask for eyes on things
14:54:24 meetup etherpad: https://etherpad.openstack.org/p/nova-icehouse-mid-cycle-meetup-items
14:54:32 mriedem great, add it to the wiki?
14:54:38 sounds like hyper-v CI is coming along: http://eavesdrop.openstack.org/meetings/hyper_v/2014/hyper_v.2014-01-07-16.01.log.html
14:54:41 russellb: sure
14:54:43 If possible could people please comment on https://wiki.openstack.org/wiki/Nova_VM_Diagnostics
14:54:44 i also posted to the ML
14:54:44 thanks
14:54:55 cool
14:55:16 i'd like eyes on the 3 patches i have in this i2 bp that is moving to i3, these just need another +2: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/flavor-instance-type-dedup,n,z
14:55:22 well, 2 patches, one is approved
14:55:27 they are just refactors
14:55:41 wonder how many blueprints aren't approved yet ... *looks*
14:55:52 https://blueprints.launchpad.net/nova/icehouse
14:55:59 regarding the meetup - would it be possible that a day be devoted to reviews of BP's? that is, there will be a bunch of people together. Why not all get heads down and review BP's
14:56:13 not bad, 7 waiting on a blueprint reviewer
14:56:14 garyk: i have bp review on the etherpad
14:56:20 not the proposed BPs but the code
14:56:32 mriedem: thanks!
14:56:34 russellb: also looking for approval on this https://blueprints.launchpad.net/nova/+spec/aggregate-api-policy
14:56:36 code looks ready
14:56:37 ah, group code reviews, that could work
14:57:10 mriedem: seems fine, approved
14:57:16 thanks
14:57:27 anyone hear any status/progress on docker CI?
14:57:35 maybe in Utah we could get through the backlog a little?
14:57:48 since they are adding a sub-driver for LXC
14:58:00 well, docker folks are not adding that
14:58:04 that's zul
14:58:14 johnthetubaguy: yeah hope so, that'd be cool
14:58:23 re: docker CI, i've been in touch with them
14:58:27 they are fully aware of the requirement
14:58:33 and want to meet it, but haven't seen movement yet
14:58:46 sounds like eric w. is taking over maintaining the docker driver in nova
14:58:56 don't see him here
14:59:17 ok, i'm hoping to be pleasantly surprised with the hyper-v CI since it's been so quiet
14:59:30 yeah, but it was a nice surprise to see something get spun up
14:59:40 yup
14:59:42 a change in neutron broke their ci - they are out there :)
14:59:42 nice to see this all seem to come together
14:59:57 oh right, i saw that email
15:00:00 darn windows
15:00:05 garyk: yeah, i saw alex in -neutron the other day
15:00:10 talking about it
15:00:48 alright we're out of time
15:00:56 #openstack-nova is always open for nova chatter :)
15:00:58 thank you everyone!
15:01:04 next week we'll be back to 2100 UTC
15:01:08 thanks for time! have a good weekend
15:01:08 and alternating from there
15:01:13 #endmeeting