15:01:12 #startmeeting XenAPI
15:01:13 Meeting started Wed Aug 21 15:01:12 2013 UTC and is due to finish in 60 minutes. The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:16 The meeting name has been set to 'xenapi'
15:01:19 hi everyone
15:01:24 who is around for the meeting?
15:04:55 hmmm
15:04:56 weird
15:04:57 I am
15:05:02 I was expecting this to highlight
15:05:17 dunno why xenapi up there ^^ didn't cause me to sit up and take notice
15:05:37 johnthetubaguy:
15:05:49 Sorry for the delayed start
15:05:58 anyone else around?
15:06:02 I am here.
15:06:04 matel is ;)
15:06:12 And we can pull euanh in if needed
15:07:09 * BobBall waits :)
15:09:04 cool
15:09:12 so, people got stuff for the agenda?
15:09:27 Let's go through the normal agenda
15:09:30 we have stuff to raise
15:09:38 but only in the right place
15:09:47 OK
15:09:59 no actions from last meeting, I guess
15:10:10 #topic Blueprints
15:10:17 how are we doing on this front?
15:10:22 it's the merge freeze today
15:10:28 so all changes need to be up today
15:10:32 how are we looking?
15:11:13 they are all up I think
15:11:17 we've got a couple of BPs
15:11:24 OK, cool
15:11:25 including one late-bloomer (xenserver-core)
15:11:32 but all changes are up and ready for review
15:11:34 are they all approved now?
15:11:40 it's BP freeze now, right?
15:11:42 yes, all approved
15:11:46 feature freeze in a few days' time
15:12:03 erm, code needs to be uploaded today
15:12:08 merged by two weeks' time
15:12:11 or something like that
15:12:16 nice
15:12:23 are we disabling git review for new patches?
15:12:28 that'd be a good help to the reviewers ;)
15:12:33 nope, other bug patches are still welcome
15:12:40 but yes, code is all uploaded
15:12:52 we just reserve the right to −2 any new patches that are blueprints
15:13:12 nope, bug patches are welcome until we ship
15:13:18 I guess one can modify one's patch, right?
15:13:34 yup
15:13:41 good point
15:13:56 So I think we are good on this front. Bob did a great patch bombing.
15:14:07 all good
15:14:36 * BobBall does the patch-bomb dance
15:15:24 #topic Docs
15:15:29 so, any docs news?
15:15:42 johnthetubaguy: I was emailing on xs-devel regarding image import/export; I added you to the last email, your input is more than welcome.
15:15:46 yes, lots of fun docs
15:16:06 https://blueprints.launchpad.net/openstack-manuals/+spec/redocument-xen was drafted
15:16:21 although a noticeable percentage of those bugs are "we should document more configurations"
15:16:39 I am looking at the docs right now, reading through the existing documentation. Spoke with anne; it seems that docs are heading towards a consolidated config manual.
15:16:40 so I do not consider them as important as the base docs
15:18:47 OK
15:18:50 that seems fair
15:18:58 accuracy is more important, I think
15:19:04 so remove misleading stuff first
15:19:09 then add missing stuff
15:19:31 matel: sorry, I hope to wade in on that, just not had a chance
15:19:42 cool, what's the timeframe on the docs stuff?
15:19:49 patches coming soon?
15:20:05 feel free to ping me on IRC if I can help with "is that still true" questions
15:20:06 I am not really expecting patches this week.
15:20:17 no problem
15:20:26 I am reading through the docs.
15:20:28 Figuring out what to change this week, we hope
15:20:32 this post-freeze time is the perfect time to try and tidy up the docs
15:20:38 and updating the wiki pages too
15:20:52 do we plan to port more of the wiki pages into the docs?
15:21:05 obviously the devstack docs do not belong in the admin guide, but other than those
15:21:53 Quite possibly - there are things in the wiki that we know are missing from the admin guide
15:21:53 johnthetubaguy: patches are going in now to create the config guide
15:21:57 Of course, the big TODO on the docs is making stuff like that step-by-step guide using Ubuntu / Fedora packages, and making it work inside a DomU
15:22:15 annegentle: cool, we should take a look at those
15:22:16 johnthetubaguy: the Compute Admin Guide will not have much left in it
15:22:24 johnthetubaguy: really the install/config docs are the focus
15:22:38 johnthetubaguy: not much "admin" -- more how do you get Compute running with xen
15:23:00 annegentle: sounds good, that's what I was hoping
15:23:08 johnthetubaguy: shaunm is Shaun McCance and is a contractor with Cisco working solely on the Install guide
15:23:14 johnthetubaguy: you can definitely check with him
15:23:37 ah, cool, do you want to introduce me to him, in case I can help with quick questions?
15:23:56 johnthetubaguy: yep sure, come by #openstack-doc afterward
15:24:11 annegentle: ah, OK, will do
15:24:28 so, sounds like good progress is happening there
15:24:55 shall we move on to bugs...?
15:25:02 yup
15:25:07 I've got a nice one
15:25:08 or
15:25:11 more importantly, a nice fix
15:25:15 cool
15:25:22 #topic Bugs
15:25:25 I had an epiphany last night
15:25:46 http://paste.openstack.org/show/44771/ is all that is needed to get things working on LVM-based SRs!
15:25:55 Particularly with Mate's fix to allow upload/download of raw OVAs
15:26:51 OK, are we going to look at fixing up that safe copy?
15:26:56 Fixes things like https://bugs.launchpad.net/nova/+bug/1162382
15:26:57 Launchpad bug 1162382 in nova "LVM over ISCSI as default SR not working" [Low,Won't fix]
15:27:16 I need to really understand that issue
15:27:19 I think you said we don't need it, if we check for VBDs being removed from Dom0?
15:27:20 I haven't understood it fully yet
15:27:25 I think that safe copy isn't needed at all
15:27:38 I think it was probably added by someone looking in dom0 rather than the code
15:27:45 well, it was added because there were races when the copy had not completed when that code returned
15:27:48 the VDI.copy will block until it returns
15:27:58 unless you use async copy
15:28:17 and unless we've got parallel API calls - one to copy then the next to clone based on that copy or something - then it'll never be a problem
15:28:28 hmm, that's an interesting one; it may have originally been doing async when that was added, it's worth a dig through the history at some point
15:28:40 I looked a little at the history but didn't spot that
15:28:59 it would be around cactus I think
15:29:28 Anyway - this patch is an easy one to accept because it doesn't affect the existing use cases
15:29:38 of ext - that behaves identically
15:29:49 so current users wouldn't be affected at all. Guaranteed ;)
15:29:50 sort of, but we might introduce a subtle bug into the LVM support, which sounds bad
15:30:04 I would rather we looked into the root cause myself
15:30:18 We're confident that there isn't one there
15:31:32 OK
15:31:38 then we should look at the VHD case
15:31:55 what were the parallel calls you were talking about?
15:31:59 copy on the same VDI?
15:32:10 a sec, Bob has a guest
15:32:11 two copies on the same VDI, I mean?
15:32:19 lol, OK
15:32:45 Let's move on
15:32:48 I'll check the cause
15:32:58 gimme an action :)
15:33:02 to track down the original commit
15:33:26 #action look up why VDI.copy workaround was added
15:33:36 oops
15:33:50 #action BobBall investigate VDI.copy workaround
15:33:57 OK
15:34:00 any more for bugs?
15:34:30 just to say, once we pass feature freeze, it would be good to try and fix up some of the pending bugs we have in launchpad
15:34:46 I think I can get some time to do that, is anyone else likely to get some time on that?
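[Editor's note: the blocking-vs-async distinction discussed above is the crux of whether the "safe copy" workaround is needed. A minimal sketch of the two call styles, assuming the standard XenAPI Python bindings; the helper names and the ref arguments are illustrative, not taken from the patch under discussion.]

```python
import time


def copy_vdi_blocking(session, vdi_ref, sr_ref):
    # VDI.copy is synchronous: the call does not return until the copy
    # has completed, so no extra "safe copy" wrapper should be needed.
    return session.xenapi.VDI.copy(vdi_ref, sr_ref)


def copy_vdi_async(session, vdi_ref, sr_ref, poll_interval=0.5):
    # Async.VDI.copy returns a task ref immediately; the caller must
    # poll the task until it finishes. This is the style where a race
    # can appear if the result is used before the task completes.
    task = session.xenapi.Async.VDI.copy(vdi_ref, sr_ref)
    try:
        while session.xenapi.task.get_status(task) == "pending":
            time.sleep(poll_interval)
        if session.xenapi.task.get_status(task) != "success":
            raise RuntimeError(session.xenapi.task.get_error_info(task))
        # On a real host the task result is XML-wrapped (<value>...</value>)
        # and needs unwrapping before use.
        return session.xenapi.task.get_result(task)
    finally:
        session.xenapi.task.destroy(task)
```

Under this reading, the workaround only matters if the code ever used the async form, or if parallel API calls operate on the same VDI, which matches the history dig proposed in the #action below.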
15:34:54 It's planned, yeah
15:35:06 but we'll have to see how it goes
15:35:14 there are other non-Havana things that we need to do for OS
15:35:23 such as fixing xenserver-core to work better under the havana base
15:35:48 so while there is a code freeze for havana, there isn't necessarily a date for Citrix this time :D
15:35:55 And we want to do some prototyping of things
15:36:00 etc etc
15:36:37 hmm, so working on a private branch? might not be awesome if I am dropping a load of bug fixes
15:36:46 no
15:36:56 xenserver-core is not in openstack's github
15:37:07 oh, I see, working on xenserver-core
15:37:09 anything that is developed will be pushed to gerrit etc
15:37:22 even though it won't make H
15:37:46 i.e. if we do some prototype work that we might want to talk about at the I summit then we can do that between now and release
15:37:49 in fact we have to ;)
15:38:01 OK, the idea of the period post feature freeze is to stabilize for the H release, so the more help we can get the better, but clearly there are other priorities
15:38:29 I am certainly not about to do 100% on the bug fixing, doing similar prototyping here and there!
15:38:40 OK.. enough about bugs I guess
15:38:42 We will be fixing bugs
15:38:57 cool, that's all good
15:39:03 it all goes into the priority pot is all I'm saying
15:39:10 sure, understood
15:39:19 and some bugs are irrelevant :)
15:39:30 I'm glad to see we don't have any high bugs
15:39:34 if they are irrelevant, we can close them
15:39:40 so let me know
15:39:42 and the mediums could be lows
15:39:52 irrelevant is also possibly low
15:40:13 https://wiki.openstack.org/wiki/BugTriage#Task_2:_Prioritize_confirmed_bugs_.28bug_supervisors.29
15:40:18 https://bugs.launchpad.net/nova/+bug/1161471 for example could be low because we don't recommend 3-part images in production ;)
15:40:20 Launchpad bug 1161471 in nova "xenapi: guest kernel not cleaned up" [Medium,Triaged]
15:40:23 • Medium if the bug prevents a secondary feature from working properly
15:40:29 • Low if the bug is mostly cosmetic
15:40:29 ah
15:40:36 • Wishlist if the bug is not really a bug, but rather a welcome change in behavior
15:40:50 • High if the bug prevents a key feature from working properly for some users (or with a workaround)
15:41:21 I might start looking at targeting some bugs for Havana if they look bad
15:41:37 anyway, if people do spot issues with the bug priorities, do drop me a note
15:41:40 https://bugs.launchpad.net/nova/+bug/1207253 should be "low" by that
15:41:42 Launchpad bug 1207253 in nova "xenapi config settings should be in their own nova config section" [Medium,Triaged]
15:42:03 BobBall: you mean XenServer does not recommend three-part images?
15:42:28 we should look at documenting what we know works well, and is tested well, and what is in an "unknown" state
15:42:36 we should look at pulling some of the features people don't use
15:42:47 we could deprecate them in the havana timeframe
15:42:55 three-part images are one
15:42:56 indeed
15:43:01 pools is another
15:43:02 I don't want to deprecate it
15:43:05 either of them
15:43:10 why's that?
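[Editor's note: the triage criteria quoted above read as a strict ordering, most severe rule first. A sketch only; the function and flag names are invented for illustration and are not part of any OpenStack tooling.]

```python
def suggest_priority(prevents_key_feature=False,
                     prevents_secondary_feature=False,
                     cosmetic=False,
                     wishlist=False):
    """Map the wiki triage criteria quoted in the log to a priority.

    The order of checks matters: the most severe matching rule wins.
    """
    if prevents_key_feature:
        return "High"       # key feature broken for some users
    if prevents_secondary_feature:
        return "Medium"     # secondary feature broken
    if cosmetic:
        return "Low"        # mostly cosmetic
    if wishlist:
        return "Wishlist"   # not really a bug, a welcome change
    return "Undecided"      # needs more triage information
```

By this reading, a bug against a feature not recommended for production (like 3-part images above) is triaged against a secondary feature at most, hence the argument for Medium or Low.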
15:43:26 3-part images are useful for testing and some rare use cases
15:43:41 pools are used by a small group of users so are a secondary feature
15:43:59 OK, so we need to get some effort to make those features work, and get them properly tested
15:44:10 yes
15:44:29 otherwise, they need to die; well, they might do that by themselves, as the pool feature has shown
15:44:31 That's another post-freeze effort - better testing
15:44:45 in our internal CI and smokestack, even if we can't commit unit tests
15:44:58 indeed, how is the gating trunk going?
15:45:29 Expecting SS to be always-voting at the end of this week (possibly set up over the weekend)
15:45:30 it's the multi-machine tests that we need to get working against trunk really soon
15:45:40 cool
15:45:46 how about running tempest tests?
15:45:47 -2 voting will be a few weeks after that once stability has been proven
15:46:17 I guess that's unlikely to make Havana unless the infra freeze is much later ;)
15:46:50 or if they don't have one (they aren't listed in the release schedule)
15:46:50 not sure there is such a thing, just they try to keep stuff stable when we hit milestones, I thought
15:46:53 :D
15:46:53 indeed
15:46:58 they don't release
15:47:07 XS is running in the RS cloud
15:47:10 which is fun
15:47:17 but hit some obvious IP issues
15:47:23 cool
15:47:30 i.e. can't install a devstack domU without DHCP
15:47:47 but we now have an XVA/RPM combo being auto-generated
15:47:55 so perhaps we can use that rather than the devstack install scripts
15:48:47 hmm, shame XS has no built-in NAT
15:48:58 maybe iptables rules could do that?
15:49:16 give it a static address, on a private network, and NAT traffic out xenbr0
15:49:28 it's the DHCP server that we'd need that's more fun
15:49:33 we have a workaround but it makes things ugly
15:49:38 given with DHCP on the RAX cloud, you would only get one IP per expected MAC address
15:49:45 indeed
15:49:53 well, you have isolated networks in the RAX cloud
15:49:56 just use those?
15:50:09 BobBall - I thought adding a DHCP server to a private net solved the issue.
15:50:17 indeed
15:50:20 it does
15:50:27 but then you have to use that server as the launchpoint
15:50:27 yeah, then RAX private networks will give you multi-host
15:50:32 I haven't done any more work on it
15:50:42 it's a potential way forward but it's still icky :)
15:50:53 hmm, you want a single clean thing every time
15:51:01 so you will always be launching XenServers, right?
15:51:05 I would prefer not to hack iptables stuff.
15:51:08 me too
15:51:39 My tentative plan is to have the XS with the IP (no DHCP server) with an XVA already built in it and ready to do the devstack thang
15:51:45 sure, but the rest will be waiting for us to support hypervisors in the RAX cloud; it's not high priority
15:51:57 "support"? :) It works :D
15:52:07 isn't that the definition of "support"? ;)
15:52:11 well, you only get one IP per machine
15:52:16 that's what I mean
15:52:17 That's all we need
15:52:21 OK...
15:52:29 I guess I don't see your issue
15:52:35 gate is fully isolated - don't forget that running devstack in dom0 for libvirt works on the RS cloud
15:52:46 The issue I have ATM is purely a setup one, I think
15:52:53 OK
15:53:01 but setup for XS doesn't work in that environment
15:53:06 If devstack changes, and your XVA is out of date, you might have issues running the tests.
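[Editor's note: the NAT idea sketched above ("give it a static address, on a private network, and NAT traffic out xenbr0") would look roughly like the following in dom0. This is a hedged config sketch: the private bridge name (xenbr1) and the 192.168.100.0/24 subnet are assumptions, not from the discussion, and as the log notes the missing DHCP server remains the harder part.]

```shell
# Enable IP forwarding in dom0 (assumes running as root)
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade traffic from the private subnet out of the public bridge
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o xenbr0 -j MASQUERADE

# Allow forwarding between the private and public bridges
iptables -A FORWARD -i xenbr1 -o xenbr0 -j ACCEPT
iptables -A FORWARD -i xenbr0 -o xenbr1 -m state --state RELATED,ESTABLISHED -j ACCEPT
```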
15:53:23 yeah, needs fresh devstack code on each run
15:53:38 Of course
15:53:47 but anyway, sounds like it just needs some TLC, and it should be up and running
15:53:47 And the domU needs to get the nova etc code somehow too
15:53:51 Our internal CI is using a JEOS as a starting point.
15:53:52 but that's all manageable
15:53:56 yeah, I'm hoping we can make a quick script that will work with the configdrive to auto-configure the instance, then I can make an actual xenserver image
15:54:12 would be a little quicker to stand up
15:54:27 antonym: yeah, that sounds good, build up from base centos 6.4?
15:54:33 It would save my hair from being pulled out too
15:54:41 Not xenserver-core - we need XS 6.2
15:54:41 Devstack install time is around 700 secs (653)
15:54:49 something that would run the xe pif-reconfigure on boot and use the configdrive info
15:55:03 ah, gotcha
15:55:16 In a virtual machine it is 900 sec
15:55:18 our agent would probably not work so much since it relies on xenstore... which is already being used :P
15:55:24 I guess HVM XenServer doesn't get xenstore info, lol
15:55:35 antonym: Other option would be to use an auto-generated answerfile
15:55:43 yeah, configdrive should have all the info already
15:55:44 even if it just uses the static IP passed into the kernel
15:55:57 a script can easily read that and put it as the IP for static config
15:56:01 BobBall: yeah, if you want to run through the entire install process... I'm thinking we capture the image before firstboot
15:56:22 that would help when we build up pools
15:56:30 so they get unique uuids, etc
15:56:35 uh oh - going into uncharted territory there ;) Golden images in pools are a big no-no ATM
15:56:53 ah, all the uniqueness is generated in the install?
15:56:59 indeed
15:57:02 I am remastering an ISO for virtual xenserver installs.
15:57:13 gotcha... I almost have my build automated anyway, so we could just use that then
15:57:20 and we haven't done enough analysis to understand if we've got all the uniqueness bits in one place to configure for the golden image
15:57:33 isolated is much easier to argue
15:57:38 sure
15:57:44 baby steps first
15:57:47 OK...
15:57:58 60 seconds!
15:58:01 #topic Open Discussion
15:58:05 any last remarks?
15:58:25 Yes. It's sunny, so I'm going to have a barbecue tonight.
15:58:32 sounds like we are moving forward
15:58:40 give a review to our changes, if you have time.
15:58:41 hmm, that's a good idea, I might do that too
15:58:50 oh, good point
15:58:55 johnthetubaguy! Please review the XenAPI changes!
15:58:55 :)
15:58:56 Oh, I inserted my message at the right point!
15:58:57 they're on my list, fingers crossed that happens today!
15:58:57 *grin*
15:59:10 And also rope someone else in so there is another core working on it :D
15:59:10 Just -1, -2 them
15:59:19 obviously :)
15:59:24 #endmeeting