17:00:22 #startmeeting ironic
17:00:23 Meeting started Mon Jan 11 17:00:22 2016 UTC and is due to finish in 60 minutes. The chair is jroll. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:24 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:26 The meeting name has been set to 'ironic'
17:00:30 o/
17:00:30 hi everyone
17:00:30 o/
17:00:32 \o/
17:00:34 hi there
17:00:35 o/
17:00:35 hi
17:00:37 o/
17:00:38 o/
17:00:44 as always, agenda is here:
17:00:45 #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
17:00:48 o/
17:00:52 nice and light today :)
17:01:05 o/
17:01:13 #topic announcements and reminders
17:01:26 o/
17:01:30 #info IPA releases this week, 1.1.0 from master and 1.0.1 from stable/liberty
17:01:34 <[1]cdearborn> o/
17:01:41 o/
17:01:44 o/
17:02:09 also, the network patches are very close, I'd love to get eyes on those
17:02:16 anyone else have announcements?
17:02:24 Gate?
17:02:34 jroll: midcycle virtual whatever?
17:02:43 ditto for manual cleaning, those have 2x +2
17:02:50 rloo: let's talk in open discussion
17:02:58 jroll: ok
17:02:59 jlvillal: gate should be clear now afaik
17:03:03 jroll: I've got one from the cinder side but I could hold it until open discussion
17:03:11 Great!
17:03:18 jroll: wasn't there a gate issue in stable/liberty or kilo?
17:03:38 e0ne: yes pls
17:03:40 gate is NOT clear yet
17:03:41 we still need to get the patches to allow dib to be tested from source in
17:03:45 oh
17:03:51 dtantsur: can you give more info please?
17:03:54 but will be as soon as DIB pinning finally lands
17:03:57 rloo: timeoutssssss
17:04:05 the patch to g-r is in flight: https://review.openstack.org/#/c/265775/
17:04:08 dtantsur: oh, got it, thanks
17:04:28 o/
17:04:36 after that please make sure we do land https://review.openstack.org/#/c/264736/ or we're broken again in the next release
17:04:40 o/
17:04:57 Kilo is essentially dead due to constant timeouts
17:04:58 jroll: tinyipa has tested successfully in the gate, https://review.openstack.org/#/c/259089/ and https://review.openstack.org/#/c/234902/, the devstack lib patch needs work, but it seems quite fast now
17:05:10 Liberty is more or less fine, at least we managed to land all the pending patches (iirc)
17:05:13 cool, thanks sambetts
17:05:24 sambetts, w00t!
17:05:54 dtantsur: so what does that mean wrt kilo -- nothing we can do, so no future kilo releases?
17:06:04 morning folks
17:06:19 rloo, someone has to invest a lot of time in figuring it out. nobody has so far :)
17:06:38 dtantsur: gotcha. is there a bug against it?
17:06:47 yep, lemme find
17:07:12 rloo, https://bugs.launchpad.net/ironic/+bug/1531477
17:07:14 Launchpad bug 1531477 in Ironic "Kilo dsvm gate is mostly broken (timeout when waiting for active state)" [High,Confirmed]
17:07:40 thx dtantsur, i'll update the etherpad if it isn't there
17:08:44 anything else?
17:09:09 #topic subteam status reports
17:09:19 as always, these are here:
17:09:21 #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:09:41 any updates on "Node filter API and claims endpoint"? nothing is written
17:10:52 dtantsur: I've kind of put that on hold until the neutron stuff lands
17:10:59 got it
17:11:16 it won't land in nova this cycle, hoping to come back to it in a few weeks
17:11:55 jroll, should we hold the indexable json fields too?
17:12:00 * lucasagomes still needs to update the spec
17:12:23 I would try to not hold everything for the next cycle :)
17:12:33 lucasagomes: keep working on it
17:12:37 we need to get the indexable json fields because that is holding up a lot of stuff like cleaning up how we deal with capabilities
17:12:43 I just personally would like to help get the network stuff done faster
17:12:43 ++
17:12:58 fair enuff
17:13:00 and I think doing so is more valuable than a filter api without indexable json
17:13:04 I was dealing with capabilities for tripleo work: it's terrible :(
17:13:32 the neutron stuff is top priority for me right now, fwiw
17:13:52 ya, if we let that slip again we've failed hard
17:13:55 a lot of people have been putting a lot of work into it for 6 months, and it's very close to done
17:14:35 the patch chain works in my local OVS/devstack testing, and so far, the code I've reviewed is (aside from nits) very good -- though I haven't reviewed it all yet
17:14:59 sounds promising \o/
17:15:15 there's a bit more to do on the CI side - we need to finish the switch to tempest-lib ASAP and rebase a few of the neutron integration patches
17:16:34 yeah.. we also need the tempest plugin to *finally* finish versioning testing >_<
17:17:36 So far I didn't manage to run the tempest plugin (
17:17:53 I think we could land all of the neutron work this week
17:18:05 ++
17:18:20 the CI is the main open question for me -- we don't have an experimental job in there yt
17:18:22 yet
17:19:11 devananda: let's both review the variables in the devstack patches and if they're all good I can push on the project-config patch
17:19:14 review them today*
17:19:18 getting the tempest lib'ification complete will make testing the neutron integration significantly easier
17:19:20 jroll: agreed
17:19:42 wrt tempest lib'ification, what's the delay there? Do we have patches up?
17:20:19 vsaienko: do you have time to do the patch rebasing today?
17:21:08 rloo, AFAIUI yes, we've got the patches creating the base structure and copying the tempest code into ironic
17:21:12 and the project config one
17:21:17 jroll, anything else missing?
17:21:52 lucasagomes: I have a patch removing our tests from the tempest tree
17:21:57 and then it's done afaik
17:22:15 devananda, it's probably pretty late for him..
17:22:16 good stuff
17:22:31 * jroll needs to update that one again for pep8
17:25:26 shall we move on?
17:27:32 sounds good to me
17:27:56 ++
17:27:58 ++
17:28:00 :)
17:28:05 ya
17:28:09 #topic open discussion
17:28:21 e0ne: you had something here?
17:28:43 small announcement from cinder
17:29:10 we've finally got the "Attach/detach volumes without Nova" spec merged
17:29:46 and we (cinder team) introduced a new cinderclient extension named "python-brick-cinderclient-ext" for local attach/detach features
17:29:53 we discussed it at the Summit
17:30:12 it could be used inside Ironic instances to attach cinder volumes
17:30:27 here is our short-term roadmap: http://lists.openstack.org/pipermail/openstack-dev/2016-January/083728.html
17:30:47 we need more feedback and feature requests from you
17:30:55 e0ne: woot!
17:31:04 \o/
17:31:19 I hope we'll merge attach/detach stuff in Mitaka
17:31:33 Awesome news
17:31:41 Ironic team's input is very valuable for us
17:31:41 nice
17:31:59 e0ne: is it possible to test this within devstack today?
17:32:14 thanks e0ne!
17:32:22 devananda: yes, I'll share a detailed manual with video later this week
17:32:42 devananda: for now, you just need to install this extension from source
17:33:00 is it's needed, I can update you with our progress on a weekly basis
17:33:36 s/is it's needed/if it's needed
17:33:52 e0ne: that's fantastic. once there are some docs on how to test / use it, things should accelerate on our end
17:34:15 devananda: I understand it. It's my top priority for now
17:34:53 also, we're thinking about functional tests for it, including attach/detach inside an ironic instance
17:34:57 e0ne: right now, I'm focused on landing the neutron integration, so I won't have time to personally dig into the cinder integration right now
17:35:35 devananda: ok, we've got some time before the M-3 milestone
17:35:40 anyone else want to take the lead on the cinder work right now? if not, I'll dive into it once the neutron code is landed
17:36:08 well
17:36:14 let's be clear about cinder; there's two pieces
17:36:26 1) e0ne's work, which shouldn't require ironic changes
17:36:30 2) boot from volume
17:36:33 I think there's some people looking at boot from volume but I don't know if there's anyone working on attach/detach volumes
17:36:42 jroll: you're absolutely right
17:36:47 right
17:37:26 jroll, +1... 1) will have changes in Ironic when we start attaching/detaching volumes via the BMC
17:37:39 but the brick agent is hardware agnostic
17:37:39 jroll: for #2 on your list, I need to take a closer look at https://blueprints.launchpad.net/ironic/+spec/cinder-integration
17:37:50 e0ne: ditto
17:38:30 lucasagomes: with the brick agent, we could start building images that automount volumes on boot
17:38:45 lucasagomes: it's our first step in integration between the projects
17:38:50 and passing in attachment data via cloudinit
17:38:59 devananda: +1
17:39:06 devananda, e0ne sounds good
17:39:23 devananda, question there, is the volume information passed via configdrive?
17:39:23 devananda: to be clear, it's not a 'brick agent', it's a cinderclient CLI and Python API extension
17:39:25 this isn't BMC dependent, and with [i]PXE booting, could also do diskless nodes
17:39:44 or is it after the instance is deployed? (wondering if the provision vs tenant network would be a hurdle here)
17:39:54 e0ne: thanks for the clarity
17:40:23 lucasagomes: yes, network will be an issue. the cinder vol will need to be visible on the tenant network for any of this to work
17:40:28 lucasagomes: yeah, the instance will need a route to the cinder-volume service
17:40:29 lucasagomes: you can use it after provisioning if you have SSH access
17:40:40 gotcha
17:40:47 jroll: actually, to the cinder-api and storage networks
17:41:04 I know that it's not secure enough for multi-tenant envs
17:41:09 e0ne: yeah, I usually assume the user can access cinder-api :)
17:41:19 jroll: :)
17:41:37 lucasagomes: for instance we have "servicenet" at rackspace - a 10.x provider network that has an ACL for onmetal -> cinder storage
17:42:15 and our cinder backend does magic acl things per volume
17:42:29 jroll: what backends do you use?
17:42:50 e0ne: dunno, it's something around LVM
17:42:53 jroll, I see, yeah... When the time comes we probably should document a reference architecture for those things
17:43:05 forget what it's called
17:43:07 yeah
17:43:13 not to be a spoilsport, but we've spent > 10 min on this... can the rest be taken offline?
17:43:28 sure?
17:43:34 rloo, yes
17:43:34 do we have anything else to talk about?
17:43:37 do we have more open topics?
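For context on the attach-without-Nova flow e0ne describes above, here is a minimal sketch of what an Ironic instance with a route to cinder-api could do once python-brick-cinderclient-ext's local attach/detach support lands (targeted for Mitaka, per e0ne). The endpoint, credentials, and the `local-attach` command name are assumptions for illustration only; the extension's own documentation is authoritative.

```python
# Minimal sketch, not the extension's real API: an Ironic instance that can
# reach cinder-api creates a volume with python-cinderclient, then hands the
# local attach to the (assumed) brick cinderclient extension via its CLI.
# The endpoint, credentials, and "local-attach" command name are hypothetical.
import subprocess

from cinderclient import client as cinder_client

cinder = cinder_client.Client('2', 'demo', 'secret', 'demo',
                              auth_url='http://keystone.example.com:5000/v2.0')

# Create a volume from inside the tenant instance -- no Nova involved.
volume = cinder.volumes.create(size=10, name='ironic-data')

# Once the local attach/detach feature is available, something along these
# lines would discover the target and attach it on this node (hypothetical
# command name; check python-brick-cinderclient-ext's documentation).
subprocess.check_call(['cinder', 'local-attach', volume.id])
```

As the discussion below notes, this only works if the instance can reach both cinder-api and the storage network, which is why the provision vs tenant network question comes up.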
17:43:37 #link http://lists.openstack.org/pipermail/openstack-dev/2016-January/083728.html
17:43:37 oh, midcycle
17:44:09 so I'd like to do the midcycle in february sometime - does anyone have dates that work great or don't work at all for them?
17:44:18 sorry for the link duplication - now it will be easier to find it
17:44:19 or should I just pick one
17:44:42 jroll, after fosdem/devconf would be ideal
17:44:44 jroll, the whole week 1-5th + 15th do not work for me
17:45:09 jroll: we've got the cinder midcycle on January 26-29. ironic integration questions are on our desk
17:45:23 dtantsur: the 15th or the week of the 15th?
17:45:30 dtantsur, you are going to devconf, right?
17:45:31 jroll, no, only the 15th itself
17:45:40 cause it's 5-7th of feb
17:45:43 lucasagomes, yes + there are some meetings right before
17:45:44 e0ne: I won't be able to make it with that late of notice, I think. link to more details?
17:46:11 jroll: this is a virtual midcycle this go round?
17:46:19 Good chance I will be on vacation for two weeks starting 19-Feb-2016.
17:46:21 jroll: https://etherpad.openstack.org/p/mitaka-cinder-midcycle - we will have online streaming and hangouts calls
17:46:33 e0ne: cool
17:46:35 NobodyCam: yes
17:46:38 :)
17:46:59 so it will be great to have someone from ironic there :)
17:47:01 e0ne: I'm local to the cinder midcycle location and can attend if you guys want a crazy ironic person to attend
17:47:05 ok so feb 16-19 looks pretty ideal heh, or 8-12
17:47:24 TheJulia: cool
17:47:40 jroll, wfm, maybe propose both dates to the ML
17:47:48 to see who would be able to join
17:47:59 lucasagomes: yeah, my first priority is cores
17:48:02 TheJulia: it will be helpful for sure
17:48:31 e0ne: putting it on my calendar then
17:48:38 either should work for /me
17:48:44 TheJulia: thanks!
17:48:53 e0ne, You would be lucky to get TheJulia to attend! :)
17:50:02 oh, jroll, maybe mention the new rfc process, or send email about it?
17:50:13 did we not send enough emails about it already?
17:50:14 err, rfe?
17:50:19 all I did was write it down
17:50:24 didn't we just update the documentation about it?
17:50:41 yeah
17:50:43 ok I'll email
17:50:54 thx jroll
17:50:58 :)
17:51:40 * jroll top posts like a boss
17:51:46 anything else? 8 minutes left
17:52:18 Regarding functional testing and how it gets run.
17:52:24 There was an email thread started.
17:52:31 I voted for being able to do it with tox.
17:52:38 So it is easy to do for developers.
17:52:39 yeah. It was me
17:52:46 There were also options of devstack and tempest.
17:52:50 Any thoughts?
17:53:02 I was going to bring it up in the Testing/QA meeting on Wednesday.
17:53:06 so
17:53:11 those are two different things
17:53:13 right?
17:53:22 inspector does it with tox for developers' ease
17:53:23 I haven't seen this thread afaik
17:53:27 but like, do it with tox
17:53:40 and there's a dependency of having ironic available
17:53:44 I would say tox
17:53:46 too
17:53:46 so if you're in devstack, you have it
17:53:49 and can still run tox
17:53:50 right?
17:54:00 (assuming this is client testing)
17:54:05 I think we can run tox with DevStack as well, like Heat does
17:54:09 jroll, what inspector does (and I think what jlvillal means) is that tox actually starts a limited inspector instance
17:54:13 Yes. I am thinking for openstack/ironic
17:54:21 * lucasagomes gotta read the ML thread
17:54:29 jroll, so it's fully standalone, does not need devstack/any other ironic installation
17:54:29 oh, ironic, not ironicclient?
17:54:33 jroll et al: this thread: http://lists.openstack.org/pipermail/openstack-dev/2016-January/083744.html
17:54:36 dtantsur, +1 Run ironic-api, ironic-conductor, and rabbitmq. To start
17:54:48 sergek, so set up devstack and then call a tox -e that will run the tests?
17:54:48 but I'm just telling what inspector does, not sure I'm fully aware of the context :)
17:55:11 inspector-client does the same, it uses shared code in ironic_inspector.test.functional
17:55:11 lucasagomes: yes
17:55:23 jlvillal: that doesn't make it easy for developers, though, it depends on lots of things outside of pip
17:55:25 I think if developers have to run devstack and/or tempest to do functional testing, then they likely won't
17:55:34 the question was how to run ironic services - manually or with devstack
17:55:39 https://github.com/openstack/python-ironic-inspector-client/blob/master/ironic_inspector_client/test/functional.py#L172-L174
17:55:40 it sounds like a good approach
17:55:42 I have to install half of devstack just to start ironic anyway
17:55:57 sergek, when we test in the gate, we will already have devstack, right?
17:56:11 this approach sounds good IMO, and it's slim for ironic
17:56:30 I would love to be able to run 'tox -efunc' on my laptop, if it's even possible..
17:56:33 * lucasagomes doesn't want to maintain yet another big script to set up everything
17:56:36 I think we can have all options altogether
17:56:37 I totally agree
17:56:43 but I also don't want rabbit on my laptop
17:56:46 is my point here
17:56:57 jroll, oslo.msg has an in-memory implementation
17:57:00 iirc
17:57:01 *have
17:57:03 oh true
17:57:14 idk, need to investigate more I guess
17:57:15 3 minutes
17:57:17 s
17:57:33 I will discuss it more on Wednesday. And feel free to comment in the email thread!
17:57:44 zeromq doesn't need a broker, might be an option jroll
17:58:09 afaik
17:58:11 sure
17:58:14 rabbit was one example
17:58:17 I'd love to have all three options - manual service, devstack and tempest plugin
17:58:31 alas I didn't manage to start the Tempest plugin yet
17:58:37 sergek, I don't see value in the manual service option tbh..
17:58:44 vagrant ?
17:58:48 I'd like to call tox -efunc and just have it finish to the end..
17:59:07 * dtantsur sets a reminder to respond to the thread tomorrow morning
17:59:09 +1 on tox -efunc
17:59:17 dtantsur: agree. and the question is what happens under the hood
17:59:17 ok we're out of time
17:59:23 https://www.vagrantup.com
17:59:24 #endmeeting
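On dtantsur's point that oslo.messaging has an in-memory implementation: a minimal sketch of how a `tox -efunc` target could stand up an RPC server and client on the fake:// driver, so no RabbitMQ broker is needed on a developer's laptop. The endpoint class, topic name, and the choice of the threading executor are illustrative assumptions, not Ironic's actual test harness.

```python
# Minimal sketch of RPC over oslo.messaging's in-memory "fake" driver, the
# kind of thing a `tox -efunc` target could use instead of a real RabbitMQ.
# The endpoint, topic, and method names here are made up for illustration.
from oslo_config import cfg
import oslo_messaging


class EchoEndpoint(object):
    """Stand-in for a real conductor RPC endpoint."""

    def echo(self, ctxt, message):
        return message


conf = cfg.ConfigOpts()
# fake:// keeps all messages in process memory -- no broker required.
transport = oslo_messaging.get_transport(conf, url='fake://')
target = oslo_messaging.Target(topic='func-test', server='localhost')

server = oslo_messaging.get_rpc_server(transport, target, [EchoEndpoint()],
                                       executor='threading')
server.start()

client = oslo_messaging.RPCClient(transport,
                                  oslo_messaging.Target(topic='func-test'))
print(client.call({}, 'echo', message='ping'))  # -> 'ping'

server.stop()
server.wait()
```

This is roughly the pattern inspector's functional tests follow (start a limited service in-process, then exercise it), which is what makes a plain `tox -efunc` run possible without devstack.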