*** baoli has quit IRC | 00:02 | |
*** baoli has joined #openstack-ironic | 00:07 | |
*** baoli has quit IRC | 00:16 | |
*** baoli has joined #openstack-ironic | 00:17 | |
*** mrkelley has quit IRC | 00:19 | |
*** keedya_ has quit IRC | 00:21 | |
*** kuthurium has joined #openstack-ironic | 00:27 | |
*** baoli has quit IRC | 01:02 | |
*** ekarlso has quit IRC | 01:33 | |
*** causten_ has joined #openstack-ironic | 01:48 | |
*** ChrisAusten has quit IRC | 01:51 | |
*** ekarlso has joined #openstack-ironic | 01:53 | |
*** linuxgeek has joined #openstack-ironic | 02:38 | |
*** baoli has joined #openstack-ironic | 03:14 | |
*** Nisha has joined #openstack-ironic | 03:18 | |
*** baoli has quit IRC | 03:19 | |
*** raginbaji is now known as raginbajin | 03:28 | |
*** linuxgeek has quit IRC | 03:43 | |
*** links has joined #openstack-ironic | 04:07 | |
*** Nisha has quit IRC | 04:07 | |
*** Nisha has joined #openstack-ironic | 04:21 | |
*** bnemec has joined #openstack-ironic | 04:21 | |
*** linuxgeek has joined #openstack-ironic | 04:36 | |
*** vmud213 has joined #openstack-ironic | 04:37 | |
openstackgerrit | Yushiro FURUKAWA proposed openstack/ironic: Align cleanning behaivor in maintaenace b/w auto and manual https://review.openstack.org/299097 | 04:39 |
*** rcernin has joined #openstack-ironic | 05:05 | |
*** Sukhdev has joined #openstack-ironic | 05:14 | |
*** yolanda has joined #openstack-ironic | 05:32 | |
*** yolanda has quit IRC | 05:34 | |
*** yolanda has joined #openstack-ironic | 05:34 | |
*** ChubYann has quit IRC | 05:35 | |
*** Nisha_away has joined #openstack-ironic | 05:37 | |
*** Nisha has quit IRC | 05:40 | |
*** Sukhdev has quit IRC | 05:55 | |
*** mkovacik has quit IRC | 05:56 | |
*** causten_ has quit IRC | 06:01 | |
*** jtomasek has joined #openstack-ironic | 06:12 | |
*** Nisha_brb has joined #openstack-ironic | 06:16 | |
*** Nisha_away has quit IRC | 06:16 | |
*** moshele has joined #openstack-ironic | 06:16 | |
*** deray has joined #openstack-ironic | 06:17 | |
*** Nisha_away has joined #openstack-ironic | 06:23 | |
*** Nisha_brb has quit IRC | 06:27 | |
*** itamarl has joined #openstack-ironic | 06:56 | |
*** fragatina has joined #openstack-ironic | 07:08 | |
*** Nisha_away has quit IRC | 07:11 | |
*** daemontool has joined #openstack-ironic | 07:11 | |
*** yolanda has quit IRC | 07:12 | |
*** Nisha has joined #openstack-ironic | 07:12 | |
*** ifarkas has joined #openstack-ironic | 07:12 | |
*** vmud213 is now known as vmud213_brb | 07:15 | |
*** vmud213_brb is now known as vmud213 | 07:16 | |
*** daemontool has quit IRC | 07:21 | |
*** daemontool has joined #openstack-ironic | 07:21 | |
openstackgerrit | vinay kumar muddu proposed openstack/ironic-python-agent: Fix local boot issue with fedora in uefi mode https://review.openstack.org/302143 | 07:32 |
*** daemontool has quit IRC | 07:33 | |
*** tesseract has joined #openstack-ironic | 07:33 | |
*** tesseract is now known as Guest37590 | 07:34 | |
*** ohamada has joined #openstack-ironic | 07:35 | |
*** yolanda has joined #openstack-ironic | 07:36 | |
*** daemontool has joined #openstack-ironic | 07:37 | |
*** ohamada has quit IRC | 07:38 | |
*** ohamada has joined #openstack-ironic | 07:39 | |
*** fragatina has quit IRC | 07:46 | |
*** derekh has joined #openstack-ironic | 07:57 | |
*** zzzeek has quit IRC | 08:00 | |
*** zzzeek has joined #openstack-ironic | 08:00 | |
*** praneshp has quit IRC | 08:03 | |
*** dmk0202 has joined #openstack-ironic | 08:06 | |
*** deray has quit IRC | 08:07 | |
*** derekh has quit IRC | 08:19 | |
*** deray has joined #openstack-ironic | 08:23 | |
*** mkovacik has joined #openstack-ironic | 08:24 | |
*** jistr has joined #openstack-ironic | 08:26 | |
*** vmud213 has quit IRC | 08:39 | |
*** daemontool has quit IRC | 08:54 | |
*** divya has joined #openstack-ironic | 09:25 | |
*** ohamada_ has joined #openstack-ironic | 09:30 | |
*** ohamada has quit IRC | 09:30 | |
*** deray has quit IRC | 09:35 | |
*** deray has joined #openstack-ironic | 09:38 | |
*** fragatina has joined #openstack-ironic | 09:46 | |
*** fragatina has quit IRC | 09:52 | |
openstackgerrit | Aparna proposed openstack/proliantutils: Adds support in hpssa for SDD interface 'Solid State SAS' https://review.openstack.org/311713 | 10:16 |
*** irf has joined #openstack-ironic | 10:17 | |
irf | morning Ironic and all | 10:17 |
irf | good news from me .... | 10:17 |
irf | I am able to boot the machine from WoL as well as from Ironic | 10:17 |
openstackgerrit | Aparna proposed openstack/proliantutils: Modify minimum disk for RAID 0 in hpssa https://review.openstack.org/311714 | 10:19 |
*** vmud213 has joined #openstack-ironic | 10:36 | |
*** chlong has joined #openstack-ironic | 10:38 | |
*** Nisha has quit IRC | 10:40 | |
*** yolanda has quit IRC | 10:47 | |
*** fragatina has joined #openstack-ironic | 10:49 | |
*** yolanda has joined #openstack-ironic | 10:51 | |
*** fragatina has quit IRC | 10:53 | |
TheJulia | Good morning everyone | 10:58 |
*** itamarl has quit IRC | 11:01 | |
*** irf has quit IRC | 11:02 | |
*** irf has joined #openstack-ironic | 11:08 | |
jroll | ohai TheJulia | 11:09 |
TheJulia | Good morning jroll | 11:09 |
jroll | IRONIC_VM_COUNT=64 | 11:10 |
jroll | this should be fun :D | 11:10 |
TheJulia | How much ram do you have? :) | 11:10 |
jroll | 128gb, I think? | 11:11 |
jroll | yep | 11:11 |
TheJulia | I would suggest popcorn if I could eat it | 11:14 |
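For context, a minimal devstack local.conf sketch of the kind of setup jroll is describing; only IRONIC_VM_COUNT=64 comes from the log, the other values are illustrative assumptions to be tuned for a 128 GB host:

    [[local|localrc]]
    enable_plugin ironic https://git.openstack.org/openstack/ironic
    IRONIC_VM_COUNT=64        # from the discussion above
    IRONIC_VM_SPECS_RAM=1024  # assumption: per-VM RAM in MB, sized to fit the host
    IRONIC_VM_SPECS_CPU=1     # assumption: per-VM vCPU count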
*** yolanda has quit IRC | 11:16 | |
*** yolanda has joined #openstack-ironic | 11:16 | |
*** yolanda has quit IRC | 11:29 | |
*** yolanda has joined #openstack-ironic | 11:30 | |
*** trown|outtypewww is now known as trown | 11:43 | |
*** e0ne has joined #openstack-ironic | 11:44 | |
*** raginbajin has quit IRC | 11:50 | |
divya | hi ironicers... | 11:53 |
TheJulia | good morning | 11:55 |
divya | nova boot is failing for physical bare metal ironic node. | 12:03 |
divya | so i need to debug this issue, so is there any troubleshooting doc? | 12:03 |
TheJulia | divya: Generally the best place to start is to look at ironic node-show output for the node nova attempted to schedule on to | 12:07 |
divya | ironic node is created successfully | 12:08 |
jroll | divya: hi, we have a start on a troubleshooting doc here: http://docs.openstack.org/developer/ironic/deploy/troubleshooting.html | 12:08 |
divya | http://paste.openstack.org/show/495518/ | 12:08 |
*** wajdi has joined #openstack-ironic | 12:08 | |
divya | these are the steps I followed to create the node | 12:09 |
jroll | divya: your paste looks like the pxe boot to the deploy ramdisk is failing, I'd suggest looking at the node's console output to start | 12:09 |
jroll | likely a network issue | 12:09 |
*** baoli has joined #openstack-ironic | 12:11 | |
*** athomas_ has joined #openstack-ironic | 12:12 | |
*** baoli_ has joined #openstack-ironic | 12:12 | |
divya | jroll: you mean ironic node-get-console? | 12:12 |
TheJulia | divya: actually attaching a screen and retrying | 12:13 |
TheJulia | to the physical chassis | 12:13 |
divya | TheJulia : I am getting a PXE failure, that's it. Then nova boot shows a "no valid host" error | 12:14 |
*** xavierr has joined #openstack-ironic | 12:14 | |
*** e0ne has quit IRC | 12:14 | |
TheJulia | upon failure, the ironic nova virt driver will tell nova that it failed and will attempt to reschedule | 12:15 |
TheJulia | hence "no valid host" | 12:15 |
xavierr | cd /brazil | 12:16 |
*** baoli has quit IRC | 12:16 | |
divya | TheJulia : nova hypervisor-list shows ironic node. | 12:16 |
xavierr | after 24 hours... back home! (: | 12:17 |
jroll | divya: no, like TheJulia said, attach a physical screen to the machine (or something like serial-over-lan with ipmi) | 12:17 |
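A minimal sketch of the serial-over-LAN option jroll mentions, assuming ipmitool is installed and the node's BMC is reachable; address and credentials are placeholders:

    ipmitool -I lanplus -H <bmc-address> -U <username> -P <password> sol activate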
xavierr | morning all | 12:17 |
jroll | xavierr: \o/ | 12:17 |
TheJulia | xavierr: congrats! :) | 12:17 |
TheJulia | divya: Like jroll said as well, you likely have a networking issue somewhere between the physical machine and neutron, so the most logical step is to evaluate what the physical node is seeing, and work backwards from there. | 12:19 |
*** vmud213 has quit IRC | 12:19 | |
TheJulia | As such, verify MAC addresses, if you have multiple nics, verify the intended port is connected and booting first | 12:20 |
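One way to do the MAC verification TheJulia suggests, assuming the python-ironicclient CLI of that era; the node identifier is a placeholder:

    ironic node-port-list <node-uuid-or-name>   # MAC addresses ironic has registered
    ironic node-show <node-uuid-or-name>        # driver, power and provision details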
divya | Jroll, TheJulia : is it neutron/ironic config issue or issue reaching pxe server? | 12:22 |
divya | TheJulia : i have only one NIC connected, removed other nodes. when i execute nova boot, bare metal node is powered on and pxe error is shown | 12:23 |
TheJulia | divya: I don't think we have enough information to know for sure. Based on your paste output, it looks like it is an issue reaching the pxe server, your tcpdump makes me wonder if it is a networking config issue. | 12:23 |
TheJulia | divya: Does the pxe loader load at all, or does it just time out trying to get an address? | 12:24 |
TheJulia | That would be a hint as to possible root causes | 12:25 |
*** dprince has joined #openstack-ironic | 12:25 | |
divya | i don't find any clue in n-cpu.log, is there a way to check? | 12:25 |
xavierr | jroll, \o/ | 12:26 |
xavierr | TheJulia, tks : ) | 12:26 |
jroll | divya: you need to look at the actual boot process on the hardware | 12:26 |
*** irf has quit IRC | 12:27 | |
divya | Jroll : pxe loader not loaded. | 12:29 |
*** mbound has joined #openstack-ironic | 12:30 | |
*** wajdi has quit IRC | 12:33 | |
*** jjohnson2_ has joined #openstack-ironic | 12:33 | |
*** vmud213 has joined #openstack-ironic | 12:35 | |
jroll | divya: that's the message on the console? | 12:35 |
TheJulia | jlvillal: By chance is there an etherpad for grenade notes? | 12:42 |
*** mjturek1 has joined #openstack-ironic | 12:44 | |
*** athomas_ has quit IRC | 12:49 | |
*** jaypipes has joined #openstack-ironic | 12:51 | |
*** krtaylor has joined #openstack-ironic | 12:51 | |
divya | NO. Direct console shows "Boot failed. PXE network" message 2 times, that's it. | 12:58 |
divya | Jroll, TheJulia : Direct console shows "Boot failed. PXE network" message 2 times, that's it. | 13:00 |
jroll | hrm, sounds like a network issue | 13:00 |
jroll | but doesn't give much info | 13:01 |
*** nicodemos has joined #openstack-ironic | 13:01 | |
jroll | I guess I'd tcpdump for dhcp requests where neutron-dhcp-agent is running | 13:01 |
divya | $ sudo tcpdump -vv port 67 or port 68 | 13:08 |
divya | didn't show anything | 13:08 |
jroll | divya: right, so the dhcp requests aren't getting to your control plane, likely something in your network architecture | 13:10 |
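A slightly more targeted variant of the capture divya ran, assuming it is executed on the host running neutron-dhcp-agent; the interface name is a placeholder:

    sudo tcpdump -eni <dhcp-agent-interface> port 67 or port 68   # -e shows source MACs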
*** rbradfor_home is now known as rbradfor | 13:11 | |
*** Goneri has joined #openstack-ironic | 13:12 | |
divya | jroll : neutron-dhcp-agent is running and vi /opt/stack/data/neutron/dhcp/c35cbfde-db1a-4224-8098-a0e67b772a20/host file shows the same ip as shown in nova list | 13:12 |
*** vmud213 has quit IRC | 13:14 | |
jroll | divya: sure, if the dhcp request doesn't get there, dhcp and therefore pxe booting won't work | 13:18 |
divya | Jroll : could u take a look and give some clue to fix it? http://paste.openstack.org/show/495869/ | 13:20 |
jroll | divya: not sure, but looks like it's something in the network infrastructure, not the openstack configuration | 13:22 |
deray | krotscheck, hi.. when I do npm install @ironic-webclient dir.. I face an issue | 13:24 |
krotscheck | deray: What's up? | 13:25 |
deray | krotscheck, hi .. g'morning :) everything fine here .. barring this: http://paste.openstack.org/show/495870/ :) | 13:27 |
divya | Jroll : Thanks, so it's not an OpenStack config issue; I am a little bit happy now. | 13:28 |
krotscheck | deray: Well, first of all, you don't need to run sudo. | 13:28 |
divya | Jroll: issue could be connectivity between controller and physical bare metal server? | 13:28 |
krotscheck | deray: Secondly, if you do run sudo, it's likely mucked up the permissions in your npm cache directory. | 13:29 |
jroll | divya: yes | 13:29 |
*** baoli_ has quit IRC | 13:30 | |
jroll | devananda: news on the project install guide stuff: https://review.openstack.org/#/c/301284/16/specs/newton/project-specific-installguides.rst | 13:30 |
divya | Jroll : assigned controller em3' | 13:31 |
krotscheck | deray: So first of all, I'd say- 'sudo npm cache clear; npm cache clear; sudo rm -rf ./node_modules' | 13:31 |
deray | krotscheck, but I believe I need to use "sudo -E" .. as I have a proxy set up as part of my env on my dev machine to access the outside world. And npm will try installing all the devDependencies from the internet, right? | 13:31 |
*** yolanda has quit IRC | 13:32 | |
krotscheck | deray: Would this work? https://jjasonclark.com/how-to-setup-node-behind-web-proxy | 13:32 |
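The gist of the linked approach, as a sketch; the proxy URL is a placeholder for the actual corporate proxy:

    npm config set proxy http://proxy.example.com:8080
    npm config set https-proxy http://proxy.example.com:8080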
deray | krotscheck, there's no "./node_modules" in my current dir: /opt/stack/ironic-webclient | 13:32 |
divya | Jroll : assigned controller em3 ip as 100.100.100.10, bm node's em3 as 100.100.100.10 and tried ping. it works fine. | 13:32 |
deray | krotscheck, aha .. seems so .. looking | 13:33 |
divya | Jroll : not sure what I am doing wrong | 13:33 |
krotscheck | deray: If there's no node_modules directory, then that's good - means that you never got to the point where it installed the downloaded packages in your local project. | 13:33 |
*** [1]cdearborn has joined #openstack-ironic | 13:34 | |
xavierr | do you guys have that etherpad link with all links for etherpad for each design sessions decisions? | 13:34 |
krotscheck | It basically means the problem can be isolated to the cache. | 13:35 |
TheJulia | xavierr: https://etherpad.openstack.org/p/summit-mitaka-ironic | 13:35 |
*** links has quit IRC | 13:35 | |
TheJulia | err | 13:35 |
TheJulia | wrong one | 13:35 |
TheJulia | xavierr: https://etherpad.openstack.org/p/ironic-newton-summit | 13:36 |
* TheJulia goes and makes more coffee | 13:36 | |
* xavierr One link to rule them all | 13:36 | |
jroll | divya: not sure how much more I can troubleshoot this, sorry :/ | 13:36 |
xavierr | TheJulia, thanks ;) | 13:36 |
*** yolanda has joined #openstack-ironic | 13:36 | |
divya | Jroll, TheJulia : Thanks much for the support. | 13:37 |
jroll | you're welcome | 13:37 |
jroll | JayF: that onmetal + devstack + coreos kernel panic thing, I'm even seeing it on latest alpha coreos (4.5.2 kernel) :/ can't remember if you had thoughts on why that would happen on onmetal but not a VM | 13:53 |
jroll | (also happens on the super old coreos ramdisk we publish, fwiw) | 13:53 |
jroll | JayF: https://gist.github.com/jimrollenhagen/87ca2baa012d651e032bed3a2db47433 if you feel like taking a look when you have some time :) | 13:54 |
TheJulia | jroll: by chance noapic on the kernel command line arguments? | 13:55 |
*** e0ne has joined #openstack-ironic | 13:56 | |
jroll | TheJulia: [ 0.000000] Command line: initrd=/opt/stack/data/ironic/tftpboot/c7467a06-47e1-44d3-af0e-d3d6f3e5e97d/deploy_ramdisk selinux=0 disk= iscsi_target_iqn= deployment_id= deployment_key= ironic_api_url= troubleshoot=0 text nofb nomodeset vga=normal console=ttyS0 systemd.journald.forward_to_console=yes ipa-debug=1 boot_option= ipa-api-url=http://65.61.151.138:6385 ipa-driver-name=agent_ssh | 13:56 |
jroll | boot_mode= coreos.configdrive=0 BOOT_IMAGE=/opt/stack/data/ironic/tftpboot/c7467a06-47e1-44d3-af0e-d3d6f3e5e97d/deploy_kernel ip=10.1.0.4:65.61.151.138:10.1.0.1:255.255.255.0 BOOTIF=01-52-54-00-3d-21-8a | 13:56 |
jlvillal | Good morning Ironic. | 13:56 |
jroll | whoa, the was long | 13:56 |
jroll | that* | 13:56 |
jroll | anyway, nope :/ | 13:56 |
TheJulia | jroll: I meant, try adding it :) | 13:56 |
TheJulia | null pointer dereference.... wow | 13:56 |
jroll | TheJulia: oh, it is on the host if that's what you meant | 13:56 |
jroll | but I can try that | 13:57 |
TheJulia | jroll: so this log is from a VM booting right? | 13:57 |
TheJulia | onmetal? | 13:57 |
*** xhku has joined #openstack-ironic | 13:57 | |
TheJulia | or is it native on onmetal? | 13:57 |
jroll | TheJulia: that log is from a devstack VM booting on an onmetal host | 13:57 |
jroll | onmetal is straight up bare metal | 13:57 |
jlvillal | TheJulia, Yes there is. You can follow the chain starting here: https://wiki.openstack.org/wiki/Meetings/Ironic-QA | 13:58 |
jlvillal | TheJulia, Which should get you here: https://etherpad.openstack.org/p/ironic-newton-summit-grenade-worksession | 13:58 |
*** fragatina has joined #openstack-ironic | 13:59 | |
TheJulia | jroll: okay, that's what I thought but I wanted to make sure... I would add noapic to the kernel command line for the devstack vm booting and see if that changes things... alternatively something is up with virtualization on that hardware I suspect, especially since it is booting in paravirtual mode | 13:59 |
TheJulia | jroll: i bet if it was in full emulation mode, it wouldn't blink. | 14:00 |
TheJulia | jlvillal: thanks, found the etherpad and the github repo, need to find that window again and clone :) | 14:00 |
*** mtanino has joined #openstack-ironic | 14:00 | |
jroll | TheJulia: yeah, I'll try noapic and then dig in further | 14:01 |
cinerama | jlvillal: thanks for posting the link | 14:01 |
jroll | we certainly intend for virt to work on these :P | 14:01 |
jroll | TheJulia: muahaha, you rock, progress is had | 14:01 |
cinerama | also hi everyone! | 14:02 |
TheJulia | \o/ | 14:02 |
*** ametts has joined #openstack-ironic | 14:02 | |
jroll | \o cinerama | 14:02 |
deray | krotscheck, silly me .. there was already a tmp (file) in my home dir (/opt/stack). so, npm was not able to create a folder with the same name. Btw, setting up Npm behind a corporate web proxy also helped, I feel. Otherwise, would have hit that issue after resolving the earlier one. | 14:02 |
krotscheck | deray: Cool, so you're all good? | 14:03 |
deray | krotscheck, YES.. | 14:03 |
* TheJulia looks at her lab router's routes, raises an eyebrow, and feels like a coffee IV is needed | 14:03 | |
*** fragatina has quit IRC | 14:03 | |
krotscheck | Sweeeet | 14:04 |
deray | krotscheck, now going to start ``npm start``. But what's the port gulp uses? I already have some ports like 8081/2 used up | 14:04 |
openstackgerrit | Xavier proposed openstack/ironic: Updating links and removing unnecessary dollar symbol from doc page https://review.openstack.org/308821 | 14:04 |
krotscheck | deray: 8000. It should automatically open a browser. | 14:05 |
krotscheck | deray: Though, that's only for dev purposes. For production, run `npm pack` and unzip the tarball in a webroot of your own choice. | 14:06 |
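A sketch of the production path krotscheck describes, assuming the npm package name matches the repository and using a placeholder webroot:

    npm pack
    tar -xzf ironic-webclient-*.tgz -C /var/www/html/ironic-webclient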
deray | krotscheck, superrrr! browser opens with link: http://localhost:8000/#/config | 14:06 |
krotscheck | deray: Yep. Now tell it where your ironic api is, and it _should_ autodetect it (as long as you have CORS configured correctly) | 14:07 |
deray | krotscheck, sure | 14:07 |
*** baoli has joined #openstack-ironic | 14:07 | |
TheJulia | jroll: or it could be the host OS on your onmetal machine, I just don't know the apic/paravirt interaction so not sure | 14:08 |
deray | krotscheck, gimme some time.. | 14:08 |
deray | krotscheck, lemme enable cors | 14:08 |
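A minimal sketch of what enabling CORS on the ironic API side might look like, assuming ironic's oslo.middleware-based [cors] options; the origin matches the webclient URL above and the header list is an assumption:

    [cors]
    allowed_origin = http://localhost:8000
    allow_methods = GET,PUT,POST,DELETE,PATCH,OPTIONS
    allow_headers = Content-Type,X-Auth-Token,X-OpenStack-Ironic-API-Version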
jroll | TheJulia: no, it's chugging along now, thanks :) | 14:12 |
*** mtanino has quit IRC | 14:12 | |
*** e0ne has quit IRC | 14:13 | |
*** e0ne has joined #openstack-ironic | 14:14 | |
openstackgerrit | Jarrod Johnson proposed openstack/pyghmi: Cope with empty agentless fields https://review.openstack.org/311751 | 14:17 |
*** ChrisAusten has joined #openstack-ironic | 14:23 | |
xavierr | guys, do we have spec for openstackclient support? | 14:26 |
*** itamarl has joined #openstack-ironic | 14:26 | |
TheJulia | https://github.com/openstack/ironic-specs/blob/master/specs/approved/ironicclient-osc-plugin.rst | 14:27 |
*** dprince has quit IRC | 14:29 | |
openstackgerrit | Jim Rollenhagen proposed openstack/ironic-specs: Add newton priorities doc https://review.openstack.org/311530 | 14:29 |
jroll | would love eyes/volunteers on that ^ | 14:30 |
xavierr | TheJulia, thanks | 14:33 |
*** mag009_ has joined #openstack-ironic | 14:40 | |
*** ppiela has joined #openstack-ironic | 14:43 | |
openstackgerrit | Jim Rollenhagen proposed openstack/ironic: Devstack: allow extra PXE params https://review.openstack.org/311757 | 14:44 |
jroll | ^ should be an easy one :P | 14:44 |
deray | krotscheck, everything goes fine till now.. have been able to add 2 ironic clouds. But when i try to see the node with one it remains on the ``loading nodes`` prompt and seems hung | 14:44 |
deray | krotscheck, ./config.json missing I feel .. need to provide the tenant/user name there? | 14:48 |
*** ayoung has joined #openstack-ironic | 14:50 | |
*** divya has quit IRC | 14:53 | |
deray | krotscheck, can u mail me a sample config.json to try out? .. and where to place that.. my email: debayan.ray@gmail.com | 14:57 |
openstackgerrit | Merged openstack/pyghmi: Cope with empty agentless fields https://review.openstack.org/311751 | 14:58 |
*** mtanino has joined #openstack-ironic | 14:59 | |
*** deray has quit IRC | 14:59 | |
jroll | AssertionError: 0 == 0 : | 15:00 |
jroll | gg | 15:00 |
jaypipes | jroll, devananda, jlvillal, JayF: mind checking my logic around this use case based on discussions around how the generic-resource-pools functionality can fulfill the multi-compute-host issues? http://paste.openstack.org/show/495878/ | 15:00 |
* jroll reads | 15:05 | |
*** [1]cdearborn has quit IRC | 15:06 | |
*** [1]cdearborn has joined #openstack-ironic | 15:06 | |
jroll | jaypipes: yeah, that matches my thoughts around it, I think it's accurate | 15:07 |
jroll | jaypipes: so the two issues I see right now are: | 15:08 |
jroll | 1) that's an explosion of effort if an operator has many hardware configurations | 15:08 |
*** itamarl has quit IRC | 15:09 | |
jroll | 2) when hardware is added/removed/taken offline for management, the resource pool would need to be updated every time | 15:09 |
jroll | maybe we can make ironic do that automagically, though? | 15:09 |
jroll | IOW, the operator is now responsible for ensuring that the resource pool has an accurate count of resources that can be scheduled to, rather than the ironic driver | 15:09 |
jroll | make sense? | 15:10 |
*** nicodemos has quit IRC | 15:13 | |
*** links has joined #openstack-ironic | 15:14 | |
*** links has quit IRC | 15:14 | |
*** sinh_ has joined #openstack-ironic | 15:15 | |
*** phschwartz_ has joined #openstack-ironic | 15:15 | |
*** mtreinish_ has joined #openstack-ironic | 15:16 | |
*** lekha_ has joined #openstack-ironic | 15:17 | |
*** cfarquhar has joined #openstack-ironic | 15:18 | |
*** cfarquhar has quit IRC | 15:18 | |
*** cfarquhar has joined #openstack-ironic | 15:18 | |
*** BadCub_ has joined #openstack-ironic | 15:18 | |
*** igordcar1 has joined #openstack-ironic | 15:19 | |
*** ijw has joined #openstack-ironic | 15:20 | |
*** davidlenwell has quit IRC | 15:20 | |
*** marlinc_ has joined #openstack-ironic | 15:21 | |
*** ijw_ has joined #openstack-ironic | 15:21 | |
*** dtantsur|afk has quit IRC | 15:21 | |
*** kuthurium has quit IRC | 15:21 | |
*** ppiela has quit IRC | 15:21 | |
*** mtreinish has quit IRC | 15:21 | |
*** lekha has quit IRC | 15:21 | |
*** rcernin has quit IRC | 15:21 | |
*** cfarquhar_ has quit IRC | 15:21 | |
*** phschwartz has quit IRC | 15:21 | |
*** marlinc has quit IRC | 15:21 | |
*** sinh has quit IRC | 15:21 | |
*** alineb has quit IRC | 15:21 | |
*** jrist has quit IRC | 15:21 | |
*** BadCub has quit IRC | 15:21 | |
*** igordcard has quit IRC | 15:21 | |
*** jlvillal has quit IRC | 15:21 | |
*** mtreinish_ is now known as mtreinish | 15:21 | |
*** dtantsur has joined #openstack-ironic | 15:21 | |
*** jlvillal has joined #openstack-ironic | 15:21 | |
*** dtantsur has quit IRC | 15:22 | |
*** dtantsur has joined #openstack-ironic | 15:22 | |
*** BadCub_ is now known as BadCub | 15:22 | |
*** rcernin has joined #openstack-ironic | 15:22 | |
*** kuthurium_ has joined #openstack-ironic | 15:23 | |
*** marlinc_ is now known as marlinc | 15:23 | |
krotscheck | deray: config.json is just one of the configuration options, you shouldn't need it. | 15:23 |
*** lekha_ is now known as lekha | 15:24 | |
*** ijw has quit IRC | 15:25 | |
*** garthb has joined #openstack-ironic | 15:27 | |
*** davidlenwell has joined #openstack-ironic | 15:28 | |
*** rcernin has quit IRC | 15:30 | |
*** moshele has quit IRC | 15:30 | |
*** fragatina has joined #openstack-ironic | 15:34 | |
*** ppiela has joined #openstack-ironic | 15:35 | |
*** yolanda has quit IRC | 15:35 | |
JayF | No meeting today, correct? | 15:36 |
jroll | right | 15:36 |
*** jrist has joined #openstack-ironic | 15:37 | |
*** chlong has quit IRC | 15:38 | |
*** fragatina has quit IRC | 15:41 | |
*** fragatina has joined #openstack-ironic | 15:42 | |
devananda | morning, all | 15:43 |
*** alineb has joined #openstack-ironic | 15:45 | |
jroll | hey devananda | 15:46 |
*** e0ne has quit IRC | 15:47 | |
jaypipes | jroll: sorry, went to grab coffee :) on 1) there is no way around that I can see. on 2) if hardware is taken offline for maintenance and it shouldn't be scheduled to, yes, the external script would set the --reserved value in resource-pool-inventory-update for the pool in question. | 15:49 |
jaypipes | jroll: and yes, we can use the ironic virt driver I believe. | 15:50 |
jroll | jaypipes: so you're okay with the ironic driver handling resource pool management? | 15:50 |
jaypipes | jroll: right now, it's abstract for me. "something" updates the inventory and reserved amounts :) | 15:50 |
jroll | heh. yeah. | 15:50 |
jaypipes | jroll: we can make the nova-compute daemon update it, sure, via ironic virt driver. | 15:51 |
jroll | jaypipes: okay, now you have my ears :) | 15:51 |
*** mbound has quit IRC | 15:52 | |
* jroll tries to reason about, if the compute daemon managed everything, if it would be possible to have that manage the host aggregates as well | 15:52 | |
* jroll hrms really hard | 15:52 | |
jaypipes | jroll: keep in mind that the general direction is that these APIs are to be the broken-out scheduler REST APIs. In the same way that the compute manager calls to Neutron's REST API when plugging VIFs for a VM, we can have the compute manager (i.e. not the ironic virt driver itself but rather the container that houses the ironic virt driver) do the calls to the new scheduler REST API. | 15:52 |
jroll | jaypipes: sure, and so could ironic, so we have a couple pathways to attack this | 15:53 |
jaypipes | right | 15:53 |
jaypipes | jroll: I'll leave that discussion for th eimplementation section of the spec versus the use case section ;) | 15:54 |
jroll | jaypipes: totes | 15:54 |
jroll | I'll mull this over | 15:54 |
jaypipes | jroll: mull away. | 15:54 |
jaypipes | jroll: I suppose that's better than mulla way. | 15:54 |
jroll | the host aggregate thing (e.g. when a new hardware config is enrolled in ironic) is the hard part | 15:54 |
* devananda catches up on scrollback | 15:54 | |
jaypipes | jroll: you would only need to add a new host aggregate for hardware that doesn't "fit" an existing resource class, not every time you added new hardware. | 15:55 |
jroll | jaypipes: could the same set of compute hosts handle all resource pools? | 15:55 |
jroll | yep, that's my thought | 15:55 |
devananda | jaypipes: L11-19 is not an accurate description of the current behavior | 15:55 |
devananda | if it is supposed to be, I will reply with some clarification | 15:56 |
* devananda continues reading | 15:56 | |
jroll | devananda: good catch, I glazed over that | 15:56 |
jaypipes | jroll: yes, theoretically all compute nodes could handle all resource classes. | 15:56 |
jaypipes | jroll: I just thought it was cleaner to have one set of compute hosts per agg and one resource class handled per agg. | 15:57 |
jroll | jaypipes: cool, so in theory if something knew about all compute hosts (and all compute hosts handled all resource classes), this could be totally done in code | 15:57 |
jaypipes | jroll: but I can add some commentary around putting them all in a single agg and having the resource pool have multiple inventory records, one for each resource class. | 15:57 |
jroll | jaypipes: agree, if the resource class count is low | 15:57 |
jroll | jaypipes: well, I was thinking aggr per resource class, but all aggrs on all hosts | 15:58 |
jaypipes | jroll: in the back of my mind is also the end goal of having the scheduler be partitioned, so only requests for certain things go to a subset of scheduler daemons. The most convenient partitioning scheme in Nova is the aggregate right now :) | 15:58 |
*** ChrisAusten has quit IRC | 15:59 | |
*** dmk0202 has quit IRC | 15:59 | |
*** yolanda has joined #openstack-ironic | 15:59 | |
jroll | jaypipes: e.g. just this change in the seq part: http://paste.openstack.org/show/495884/ | 15:59 |
*** Guest37590 has quit IRC | 15:59 | |
jaypipes | jroll: agg per resource class and all aggs on all hosts would provide no benefit to the scheduler, though... | 15:59 |
jaypipes | jroll: it would still have to consider all compute hosts for every request to launch an instance. | 15:59 |
jroll | jaypipes: sure, just thinking things through | 15:59 |
jaypipes | jroll: yup, totes. | 15:59 |
jroll | sure | 15:59 |
jaypipes | devananda: can you assist me there? :) how can I better describe the existing situation? | 16:00 |
jaypipes | devananda: is it just lines 17-19 that are wrong? | 16:01 |
*** blakec has joined #openstack-ironic | 16:01 | |
jroll | jaypipes: so, I think there's less of a scale problem there for the ironic case, e.g. a compute host can handle 1k ironic nodes today. so a 100k server deployment would only be 100 compute hosts. scheduler shouldn't mind that too much :) | 16:01 |
devananda | jaypipes: happy to :) I'll edit and post another pastebin | 16:01 |
jaypipes | jroll: when you start talking about NFV use cases, though, where there are lots of policies around the capabilities of various hosts, it does start to weigh down the scheduler. | 16:02 |
jaypipes | devananda: rock on, thank you sir :) | 16:02 |
jroll | jaypipes: I'm not sure if that applies here | 16:02 |
jroll | anyway | 16:02 |
jroll | need to think on this more | 16:03 |
jroll | my goal is for operators to not need to do all of this work | 16:03 |
jroll | or at least when an ironic node becomes un-schedulable, that's handled automagically | 16:03 |
*** ChubYann has joined #openstack-ironic | 16:04 | |
jaypipes | jroll: k | 16:05 |
* JayF thinks he should read up on nova host aggreggates and resource pools | 16:06 | |
jroll | JayF: you should | 16:06 |
* jroll finds the novel | 16:06 | |
krotscheck | jroll: It was better on the big screen. | 16:07 |
jroll | JayF: http://lists.openstack.org/pipermail/openstack-dev/2016-February/086371.html | 16:07 |
JayF | danke | 16:08 |
jroll | JayF: probably some small changes since, but that should get the idea in your head | 16:08 |
JayF | tyvvm | 16:09 |
jaypipes | krotscheck: lol :) | 16:10 |
*** links has joined #openstack-ironic | 16:12 | |
devananda | jaypipes: jroll: http://paste.openstack.org/show/495889/ | 16:15 |
*** blakec has quit IRC | 16:17 | |
jaypipes | devananda: rock on, thank you sir :) | 16:17 |
jroll | devananda: ++ | 16:17 |
jaypipes | devananda: this proposal I am working on will allow us to get rid of the (host, node) tuple entirely. :) | 16:18 |
devananda | jaypipes: thats fantastic! | 16:18 |
jaypipes | indeed. | 16:18 |
jaypipes | progress. we march forward! | 16:19 |
devananda | \o/ | 16:19 |
JayF | devananda: jaypipes: Is it OK to not reflect the CCM use case in that doc at all? I think it is; just making sure it wasn't forgotten by accident | 16:19 |
jaypipes | JayF: CCM? | 16:20 |
devananda | CCM? | 16:20 |
JayF | ClusteredComputeManager | 16:20 |
jaypipes | JayF: you mean like vCenter? | 16:20 |
JayF | https://github.com/openstack/ironic/blob/master/ironic/nova/compute/manager.py | 16:20 |
jroll | so, that isn't really supported at all >.> | 16:20 |
devananda | i think its covered, but if not, please explain | 16:20 |
JayF | It just wasn't called out explicitly in the "how it works today" use case; I don't think it has to be | 16:21 |
jroll | devananda: well, CCM makes it possible to run more than one | 16:21 |
jroll | also makes it terrible | 16:21 |
devananda | it = the desired result | 16:21 |
*** sacharya has joined #openstack-ironic | 16:21 | |
devananda | sorry, typing with one hand while eating ... | 16:21 |
devananda | CCM allows one to run >1 n-cpu today, though it's really cludgy | 16:21 |
JayF | "really cludgy" is putting it nicely, lol | 16:22 |
jroll | I feel like we should pretend CCM doesn't exist for the sake of this document and future work | 16:22 |
devananda | with this work, we won't need CCM any more, and we'll be able to run >1 | 16:22 |
JayF | +1 | 16:22 |
jroll | devananda: I think JayF is just asking if you want to note it in your edited section | 16:23 |
*** jistr has quit IRC | 16:23 | |
devananda | jaypipes: how do you see nova handling the baremetal case where an n-cpu process is terminated? | 16:23 |
devananda | jaypipes: with libvirt, clearly those instances become unmanageable and may be rescheduled -- with ironic, that's not the case, but we could easily support re*associating* them with another nova-compute process | 16:24 |
jroll | nova-manage move-ironic-hosts --from downed-compute | 16:24 |
jroll | er, s/hosts/instances | 16:24 |
devananda | yea, something like that | 16:24 |
jaypipes | devananda: same as a normal n-cpu process today. on restart nothing is lost (simply reads its instances from the database and carries on as normal). | 16:25 |
jaypipes | devananda: with libvirt, nothing happens to the instances at all. | 16:25 |
jroll | right, so I imagine the first pass is ^ which is the status quo | 16:25 |
jaypipes | devananda: the n-cpu daemon doesn't house the libvirt daemons. the host does. | 16:25 |
*** jaybeale has joined #openstack-ironic | 16:25 | |
devananda | jaypipes: i meant the libvirt driver | 16:26 |
jroll | but eventually we can optimize, given you could just update the db to point to a different compute daemon to make the instances manageable again | 16:26 |
jaypipes | devananda: that has no state itself. | 16:26 |
*** chihhsin has quit IRC | 16:26 | |
jaypipes | jroll: you wouldn't "move" instances from one compute node to another. there's no point in doing that. you would just restart the n-cpu service. | 16:26 |
*** chihhsin has joined #openstack-ironic | 16:26 | |
jroll | jaypipes: well, consider the case where that service is hosted on a vm that disappears/dies/etc | 16:27 |
jaypipes | jroll: this isn't like the neutron router agent case which has state in the agent about the routers it manages. | 16:27 |
jroll | or rather, the process must be down for a long time | 16:27 |
jroll | does that make more sense? | 16:27 |
jaypipes | jroll: it won't affect anything other than control plane communication to the instances that are associated with that n-cpu. | 16:27 |
jroll | right | 16:28 |
jroll | so since ironic instances don't live there | 16:28 |
jaypipes | jroll: n-cpu is not HA. period :) | 16:28 |
jaypipes | never has been, probably never will be... | 16:28 |
jroll | we can reduce downtime for control plane actions by just changing instance.compute_host or whatever in the db | 16:28 |
jroll | it's an optimization/hack | 16:28 |
jroll | but would be a nice thing to do | 16:28 |
devananda | cool. no local state on the n-cpu host makes it easy to work with | 16:29 |
*** ifarkas has quit IRC | 16:29 | |
jaypipes | jroll: would be easier to just set up a VIP to a passive n-cpu daemon that has the same compute node data and just use haproxy to fail over to the passive n-cpu. | 16:29 |
jaypipes | devananda: right. | 16:29 |
jaypipes | by "has the same compute node data" I mean in the DB, not in the n-cpu process itself, of course. | 16:30 |
jroll | jaypipes: sure, that's fine too | 16:30 |
jroll | my experience with active/passive on a vip tells me a mysql update is easier :P | 16:30 |
jaypipes | heh, fair enough. | 16:30 |
jaypipes | let's tackle that issue when we get to it. | 16:30 |
devananda | this is making more and more sense to me - and I like what I'm understanding :) | 16:31 |
jroll | yeah, totally | 16:31 |
jroll | like I said, first pass is what you described, later we optimize :) | 16:31 |
jaypipes | ++ | 16:31 |
devananda | jaypipes: one more question, then I probably need to run: we will need $something to correlate the nova flavors to baremetal node "types" | 16:32 |
jroll | devananda: as long as we can make most of the resource pool management automated (which we can), we can do it | 16:32 |
jroll | devananda: a flavor can be linked to a resource pool, e.g. the flavor says "requires 1 baremetal-max-cpu resource" | 16:32 |
devananda | jroll: today, the nova scheduler is matching flavor to the actual hardware. with this proposal, it won't be doing that, it sounds like | 16:33 |
devananda | but rather matching flavor to resource pool | 16:33 |
jroll | devananda: right, so you have a resource pool for each distinct hardware config | 16:33 |
devananda | and then it's up to the operator to make sure those are all created correctly | 16:33 |
jroll | (which today would look like a flavor) | 16:33 |
jroll | right | 16:33 |
jaypipes | devananda: it will be a flavor that has a single resource class in it... the one matching the baremetal configuration type. | 16:33 |
jaypipes | devananda: with a requested_amount of 1 always for that resource class. | 16:34 |
jroll | and that was what you and penick didn't like about it - because all of that work | 16:34 |
devananda | but also - once that request is routed to an n-cpu host, that host (or really, the nova.virt.ironic driver) needs to select the appropriate ironic node -- and ironic does not (today) have a "resource pool" or "flavor" property to match against | 16:34 |
jaypipes | jroll: no, not a resource class for each distinct hardware configuration... | 16:34 |
jroll | O_o | 16:35 |
jaypipes | jroll: it would be a resource class for each distinct server type. there can be many different distinct configurations of hardware or manufacturer that provide a single "server type". | 16:35 |
devananda | we will want to codify a way to do that matching in our REST API. that's not exactly what our current tags / claims proposal does | 16:35 |
jaypipes | jroll: sorry, we need better terminology here :) | 16:35 |
jroll | jaypipes: I feel like I need an example here | 16:36 |
JayF | How does that work with the potential for dynamic types then? | 16:36 |
jroll | oh lawd | 16:36 |
jaypipes | jroll, JayF: sure, lemme paste an example. | 16:36 |
JayF | http://specs.openstack.org/openstack/ironic-specs/specs/backlog/exposing-hardware-capabilities.html | 16:36 |
devananda | JayF: yeeeeaaah..... | 16:37 |
JayF | obviously that's not hashed out yet, but I know dynamic hardware config is something Ironic has wanted to do for a while, anything from reconfiguring a setting to "I have a bunch of hardware that $fancy_chassis assembles into a server on demand" | 16:37 |
jaypipes | JayF: yeah, the RSA use case... but that's not exactly what I'm referring to here. | 16:38 |
JayF | "RSA use case" ? | 16:38 |
devananda | JayF: if each configuration of $fancy_hardware is expressed as a separate flavor, we would need one Instance to claim from multiple resource pools at once. That's ... ugly. But, so is a flavor that has non-defined CPU count. | 16:38 |
jaypipes | JayF, jroll: I'm referring to the complaint that penick had about having to create a flavor (or resource class) every time Yahoo! registered a few new racks with hardware that was just slightly different from the last generation of hardware they racked. | 16:39 |
JayF | devananda: so if a single node X could provide flavors A, B, C, it'd be in 3 different resource pools (one for each config), and we'd have to mark it as used in all places on scheduling? | 16:40 |
jroll | jaypipes: yeah, sorry, I guess I think of config as a combination of "cpu/ram/disk/extra_specs_or_capabilities_or_whatever_loaded_term" | 16:40 |
devananda | jaypipes: his complaint wasn't specifically about creating a new flavor, but about starting a new n-cpu process | 16:40 |
devananda | jaypipes: new flavor is just fine | 16:40 |
jaypipes | JayF, jroll: if the same "server type" (say, 800G HDD, 128G RAM, 2-socket Xeon, 4 10G NIC) could be serviced by multiple models and generations of HP, Dell, and Supermicro hardware, you would only need a single "server type", not many of them. | 16:40 |
jroll | jaypipes: essentially resource pool per flavor, ya? | 16:40 |
jroll | at a very basic level of flavor | 16:41 |
*** [1]cdearborn has quit IRC | 16:41 | |
JayF | jaypipes: I'm talking about the reverse case; I have a single server type which can provide multiple flavors | 16:41 |
devananda | guys, we now have 3 conversations going at once | 16:41 |
JayF | jaypipes: it's a many-to-many mapping in some use cases | 16:41 |
jaypipes | jroll: well, a resource pool can provide multiple resource classes :) so you could technically have a single resource pool that provided IRON_BASE and IRON_CRAZY_PERF resources. | 16:41 |
devananda | this has just gotten too hard to track | 16:41 |
devananda | *for me to track | 16:41 |
jroll | jaypipes: gdi. resource class per flavor. | 16:41 |
*** dprince has joined #openstack-ironic | 16:42 | |
*** blakec has joined #openstack-ironic | 16:42 | |
devananda | jaypipes: do you have an ERD of these concepts? the mapping of resource pool - resource class - flavor - nova compute - host agg ... | 16:42 |
jaypipes | jroll: heh, yes. a flavor represents both the quantity of requested resources (amounts of some set of resource classes) as well as the qualitative side of the equation (the capabilities) | 16:43 |
devananda | it's enough new terminology that I sort of want to fall back to looking at a data model :) | 16:43 |
jaypipes | jroll: so you could have multiple flavors each with the same ironic resource class, but exposing different capabilities (for instance SSD machines vs. HDD machines) | 16:43 |
jroll | jaypipes: right, yep, that matches what I thought/expected | 16:44 |
jaypipes | coolio. have I confused everyone adequately enough today? :) | 16:44 |
jroll | and I think that matches at least the "config capabilities on the fly" use case | 16:44 |
jroll | never! | 16:44 |
*** rloo has joined #openstack-ironic | 16:44 | |
*** ohamada_ has quit IRC | 16:45 | |
jaypipes | hehe | 16:45 |
devananda | hmm. jaypipes, in your pastebin example, is IRON_BASE a flavor or a resource class? | 16:45 |
jaypipes | devananda: resource class. | 16:45 |
devananda | aaaah | 16:45 |
*** harlowja has joined #openstack-ironic | 16:45 | |
jaypipes | devananda: the flavor would request 1 of IRON_BASE and be called something like "Standard bare metal" or whatever | 16:45 |
devananda | ok. so then that example doesn't discuss flavors at all? | 16:45 |
jaypipes | correct. | 16:45 |
devananda | could you add that? I misunderstood and thought IRON_BASE was a flavor | 16:46 |
jaypipes | devananda: the flavor is the combination of a set of requested resource classes along with a set of capabilities. | 16:46 |
devananda | gotcha | 16:46 |
devananda | so in a virt cloud, a resource class might be "cpu" and a flavor might be "8 cpu" | 16:46 |
jroll | yep | 16:46 |
devananda | whereas in a baremetal cloud, a resource class might be "8 cpu machine" and a flavor would be "1 of those machine" | 16:47 |
jaypipes | devananda: those capabilities are currently called "extra_specs" and we are actively trying to standardize that side of the flavor. the resource providers stuff is all about standardizing the quantitative side of the flavor (the requested amounts of stuff) | 16:47 |
jaypipes | devananda: precisely. | 16:47 |
devananda | jaypipes: that's REALLY cool. and also really different from today. | 16:47 |
jaypipes | indeed. | 16:47 |
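A purely illustrative sketch of the split jaypipes describes (quantitative resource classes vs. qualitative capabilities); the property syntax is hypothetical, since this API did not exist at the time of the discussion, and IRON_BASE is the resource class name used earlier in the conversation:

    # values loosely mirror the 800G HDD / 128G RAM example above; syntax is hypothetical
    openstack flavor create baremetal-standard --ram 131072 --disk 800 --vcpus 24
    # quantitative side: request exactly 1 unit of the IRON_BASE resource class
    openstack flavor set baremetal-standard --property resources:IRON_BASE=1
    # qualitative side: capabilities (extra_specs), e.g. SSD vs HDD machines
    openstack flavor set baremetal-standard --property capabilities:disk_type=ssd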
devananda | and the missing piece for me was that this doc didn't assert anything about flavor, and so I imputed it incorrectly | 16:47 |
jaypipes | devananda: check out the dependency diagram at the bottom of https://blueprints.launchpad.net/nova/+spec/compute-node-inventory-newton | 16:47 |
jaypipes | devananda: it's a LOT of work to try to get done for all this remodeling going on. | 16:48 |
devananda | *blink blink* | 16:48 |
jaypipes | devananda: with the end goal being the broken-out scheduler with a full REST API. | 16:48 |
devananda | right | 16:48 |
jaypipes | devananda: what we've been discussing today is *only* the generic-resource-pools and resource-providers-dynamic-resource-classes ones. | 16:49 |
jroll | jaypipes: I assume you've taken this into account, but we'll need to make sure a flavor and a compute host that doesn't require/expose cpus/ram doesn't explode things | 16:50 |
devananda | jaypipes: good stuff, thanks for the brainshare! | 16:51 |
jroll | +1 | 16:51 |
*** daemontool has joined #openstack-ironic | 16:52 | |
jaypipes | jroll: sorry, not quite following you on that last one... could you elaborate? | 16:52 |
jroll | devananda: so the next step for us to decide if this is feasible is to play around with doing the pool management in code (e.g. the ironic virt driver or ironic-conductor talks to the scheduler REST API when a node becomes (un)schedulable) | 16:52 |
devananda | jroll: yep | 16:52 |
jroll | jaypipes: so cpu/ram is special, right, in that it's an implicit resource pool | 16:52 |
jroll | and the resource tracker(?) populates that | 16:53 |
devananda | jroll: and playing with the claims API stuff, since our virt driver will now need to "select" a node to place the instance on, rather than being handed that info | 16:53 |
jroll | (iirc, this is what the inventory thing is about) | 16:53 |
jaypipes | jroll: no, CPU and RAM are not in a resource pool. A compute node is a resource provider that contains fixed amounts of RAM and CPU. A resource pool provides resources that are *shared* among multiple compute nodes (via the aggregate association). | 16:54 |
jroll | jaypipes: so what happens if a compute daemon doesn't populate the cpu/ram pools, or a flavor doesn't require any cpu/ram resources? | 16:54 |
jroll | ok, right, I'm way off | 16:54 |
jaypipes | jroll: no worries, after a certain point all this stuff makes one's head bend. | 16:54 |
jroll | so what does an ironic compute node report for ram/cpu? is a flavor that needs 0 ram and 0 cpus okay? :) | 16:55 |
* TheJulia suspects thats true with most complex features | 16:55 | |
devananda | jroll: I *think* the ironic virt driver doesn't need to report up anything about the cpu/ram/disk on an ironic Node | 16:55 |
devananda | but there is a missing piece right now | 16:56 |
devananda | *someone* must know that this Ironic cluster can provide X of this resource type and Y of that resource type | 16:56 |
jroll | devananda: yeah, it's just an edge case I want to be sure is included | 16:56 |
jaypipes | jroll: An ironic compute node exposes no resources itself. An ironic compute node only exposes resources of the pools that the aggregates expose that the compute node is associated with. | 16:56 |
jlvillal | I'm going to send this email out about the Grenade stuff: http://paste.openstack.org/show/495891/ | 16:56 |
*** wajdi has joined #openstack-ironic | 16:56 | |
jlvillal | In about five minutes. Unless there are objections/comments. | 16:56 |
devananda | jroll: not an edge case -- I actually think this is the crux of things. ironic virt driver will stop reporting cpu/ram/disk to the scheduler | 16:57 |
jroll | jaypipes: okay, just making sure unit tests that have zeroes for these resources is in the plans :P | 16:57 |
devananda | it won't report anytng at all | 16:57 |
jaypipes | jroll: an ironic compute node's get_available_resources() call will return only the resources from the pools it is associated with, nothing more. whereas a "normal" compute node would do that, PLUS return the resources that the compute host itself (via the libvirt driver) has access to. | 16:57 |
jroll | jaypipes: so that's all in the driver. cool. | 16:57 |
devananda | it will increment/decrement the resource pool counters for the resource types that are associated with that compute host, when they are consumed/released | 16:57 |
jaypipes | right. | 16:57 |
TheJulia | jlvillal: lgtm | 16:58 |
devananda | so, we need a way for that driver to determine how many units of that resource type are in its pool | 16:58 |
jroll | devananda: right, making sure there isn't a count_cpus_and_report() in compute/manager.py or something | 16:58 |
jroll | devananda: yep. that's a rest api call | 16:58 |
jlvillal | TheJulia: thanks :) | 16:58 |
devananda | jroll: ahh. right. things outside the driver would be a problem here | 16:58 |
jaypipes | devananda: actually, no, the ironic virt driver will just return nothing for get_available_resources()... let me explain... | 16:58 |
*** blakec has quit IRC | 16:59 | |
jroll | devananda: this is the "nova resource-pool-inventory-update" calls in http://paste.openstack.org/show/495878/ | 16:59 |
*** rloo_ has joined #openstack-ironic | 17:00 | |
*** irf has joined #openstack-ironic | 17:00 | |
krtaylor | jlvillal, I guess that means we *are* having a meeting this week :) jk, looks good | 17:00 |
jaypipes | devananda: the compute node's resource tracker, upon starting up, queries the DB for the aggregates that it is associated to. These aggregates have a set of resource pool objects associated to them that themselves return an inventory of the resources the pool exposes. This information is known to the compute node resource tracker BEFORE ever calling to the virt driver's get_available_resources() API call. For the ironic virt driver, since there | 17:00 |
jaypipes | are no resources that the virt driver itself provides on the compute node, it would return nothing. For hypervisor drivers, they return the resources that the compute host exposes for guests. | 17:00 |
*** blakec has joined #openstack-ironic | 17:00 | |
jlvillal | krtaylor: I only canceled the summit one :P | 17:01 |
*** jaybeale has quit IRC | 17:02 | |
devananda | jaypipes: sure. and for hypervisor drivers, when they create an instance, they decrement the resources in that pool | 17:02 |
*** jaybeale has joined #openstack-ironic | 17:02 | |
mat128 | @here meeting time? | 17:02 |
jroll | mat128: no meeting today | 17:02 |
JayF | there's no meeting today | 17:02 |
devananda | jaypipes: clearly, ironic driver should do the same thing (decrement resources on allocation, increment resources on instance deletion) | 17:02 |
*** gugl has quit IRC | 17:02 | |
jaypipes | devananda: that is how things currently work, yes. however, I am working towards having the scheduler do the claim process and the scheduler would do that decrementing... | 17:02 |
jroll | devananda: the scheduler handles the pools | 17:03 |
mat128 | oh, ok fine :) thanks | 17:03 |
jroll | right that | 17:03 |
devananda | jaypipes: oh! gotcha | 17:03 |
jroll | hence the rest apis :) | 17:03 |
devananda | makes sense :) | 17:04 |
*** [1]cdearborn has joined #openstack-ironic | 17:04 | |
devananda | jroll: it sounds like we will, eventually, want to provide a means to keep the nova resource pools in sync with ironic's available hardware types and counts in near-real-time, eg. if nodes are taken offline for maintenance or such | 17:05 |
jroll | devananda: right, that's what I keep saying | 17:06 |
devananda | but that is out of band from the nova-compute <-> ironic-api interactions | 17:06 |
jroll | I think that is critical to doing this work this way | 17:06 |
jroll | (instead of the currently proposed thing) | 17:06 |
devananda | it could be a separate ironic-nova-sync service or something | 17:06 |
jroll | well, the virt driver could do that | 17:06 |
jaypipes | devananda: or it could be done in the ironic virt driver itself. | 17:06 |
devananda | it could. but doesn't have to. | 17:06 |
*** rloo has quit IRC | 17:06 | |
*** rloo_ has quit IRC | 17:07 | |
jroll | I feel like I'd probably prefer it to | 17:07 |
*** rloo has joined #openstack-ironic | 17:07 | |
devananda | jroll: how will the nova-compute process know when an ironic node is added/removed from the available pool? | 17:07 |
jroll | devananda: it would just count available things periodically | 17:08 |
jroll | it doesn't need to know anything about the nodes other than schedulable and not schedulable | 17:08 |
devananda | jroll: we know how well polling and counting works ... | 17:08 |
*** fragatina has quit IRC | 17:08 | |
devananda | jroll: yes, it needs to know how to map each node into resource pools | 17:09 |
jroll | sure, that too | 17:09 |
devananda | that could be a tag like "nova_resource_class_name" | 17:09 |
jroll | either way it will need that, and it will poll and count | 17:09 |
jroll | the only difference is that poll is via rest api for the driver, or db for the separate thing | 17:09 |
devananda | jroll: there's been a bug open for a while now about the race that having it poll causes | 17:09 |
jroll | well, this is a bit different | 17:10 |
jroll | here, it's simply counting | 17:10 |
devananda | otoh, if ironic-conductor were calling to the scheduler API, it could be done in response to state changes | 17:10 |
jroll | today, it is tracking the state of those nodes | 17:10 |
devananda | no lag for the poll loop | 17:10 |
jroll | the resource pool here only cares about the count, fwiw | 17:10 |
devananda | jroll: right, got that it only cares about the count, but something needs to correlate each Node to a resource pool | 17:11 |
JayF | nova is keeping a counter, but still when it wants a node, it asks Ironic for one | 17:11 |
JayF | so worst case with dated info is a build fails later due to thinking we had more capacity than we did, or vice versa (a build failing despite there being $small_number of usable nodes) | 17:11 |
jroll | devananda: I understand that | 17:11 |
jroll | I'm not concerned about the metadata, that's easy | 17:12 |
devananda | JayF: exactly. that race condition is an open bug against our nova driver today | 17:12 |
JayF | that's what I thought. Just making sure I'm following along properly | 17:12 |
devananda | folks have proposed using notifications to inform nova, so that it has a more up to date view of available resources | 17:12 |
jroll | I tend to think that's a somewhat separate thing | 17:12 |
devananda | i'm pointing out that, if we put this logic in the nova.virt.ironic driver, and it does a poll/count loop, we're in the same situation | 17:12 |
JayF | the big difference between then and now | 17:13 |
jroll | the bug today is caused by nova not being up to date on the state and picking a *specific* node that is wrong | 17:13 |
* JayF == jroll | 17:13 | |
devananda | how we update the MAX(resource_count) in a pool is orthogonal to how we consume resources from the pool | 17:13 |
JayF | he took the words from me | 17:13 |
*** trown is now known as trown|lunch | 17:13 | |
jroll | in this future, nova just asks ironic to "give me a node matching this resource pool" | 17:13 |
jroll | ironic handles all the state things | 17:13 |
devananda | jroll: sure. so it's less of a race, but still a race | 17:13 |
*** gugl has joined #openstack-ironic | 17:13 | |
jroll | so we reduce that bug to just late fails (in the virt layer) when capacity is depleted | 17:14 |
JayF | jroll: or false fails in the other case | 17:14 |
jroll | which is... /shrug | 17:14 |
jroll | ah, yes | 17:14 |
devananda | if the last node of class_X just went offline, and someone requests it, there's a window where nova will allow the request, even though ironic knows it can't succeed | 17:14 |
JayF | the other case is the one that sucks more, bceause it's a fail for no good reason | 17:14 |
devananda | yea | 17:14 |
devananda | I just fixed this thing in Ironic, now I need to wait N minutes for Nova to catch up | 17:14 |
jroll | sure, it will just fail in the compute instead of the scheduler, which I don't think is a big deal | 17:14 |
*** ijw_ has quit IRC | 17:14 | |
jroll | but yeah the reverse is nastier | 17:15 |
jroll | at any rate, if we have a fast API to get a count of things that match a resource class, we can just sync every 5s :) | 17:15 |
devananda | heh | 17:15 |
JayF | I mean, could we do something clever like before failing out a request for being out of capacity, you poll Ironic right then to get fresh data? I guess not, given that happens in the scheduler far away from Ironic specific code... | 17:15 |
jroll | (I'm not opposed to the conductor handling this, fwiw, but it needs to do a full count, not simply "this node was updated, let me tell nova") | 17:15 |
devananda | JayF: that requires n-sched polling ironic directly | 17:16 |
jroll | yep | 17:16 |
JayF | devananda: yep, that's what I was thinking, which is a nogo | 17:16 |
*** rbudden has joined #openstack-ironic | 17:17 | |
devananda | thanks, guys. this has been really helpful | 17:17 |
jroll | totes, thanks indeed :D | 17:18 |
* jroll will think on this between tempest runs and summit writeups | 17:18 | |
devananda | I need to step away for a bit, gotta catch up on bills and stuff here for a while today | 17:18 |
* jroll plays first of tha month | 17:19 | |
*** piet has joined #openstack-ironic | 17:20 | |
*** [1]cdearborn has quit IRC | 17:23 | |
gugl | Hi folks, just wondering if this doc is update-to-date? http://docs.openstack.org/developer/ironic/deploy/install-guide.html | 17:23 |
JayF | We try to keep the documentation up to date; if it's not accurate let us know. | 17:24 |
*** daemontool_ has joined #openstack-ironic | 17:24 | |
cinerama | gugl, are you having a specific issue with the process? we can talk you through a fix (and fix the docs!) | 17:24 |
TheJulia | ++ | 17:24 |
*** blakec1 has joined #openstack-ironic | 17:25 | |
gugl | I am very new to ironic... so not sure if it is accurate... I want to double-check before I start... | 17:25 |
jroll | it should be up to date | 17:26 |
gugl | cool | 17:26 |
cinerama | gugl, are you planning on installing from source or OS packages? | 17:27 |
*** daemontool has quit IRC | 17:27 | |
*** blakec has quit IRC | 17:28 | |
*** david-lyle has joined #openstack-ironic | 17:28 | |
*** david-lyle has quit IRC | 17:29 | |
*** baoli has quit IRC | 17:29 | |
xavierr | guys, could you take a look at this patch: https://review.openstack.org/#/c/308821/ | 17:30 |
*** david-lyle has joined #openstack-ironic | 17:30 | |
xavierr | is just some updates on dev-quickstart.rst | 17:30 |
JayF | I think that $ is intended | 17:31 |
JayF | to indicate you're at a bash prompt after ssh'ing | 17:31 |
*** fragatina has joined #openstack-ironic | 17:32 | |
*** fragatina has quit IRC | 17:33 | |
*** fragatina has joined #openstack-ironic | 17:33 | |
*** links has quit IRC | 17:33 | |
xavierr | JayF, should I add it at the beginning of that command? | 17:34 |
JayF | xavierr: I honestly don't think it matters much either way | 17:35 |
JayF | just saying it wasn't some random $ :) | 17:35 |
xavierr | JayF, understood :) | 17:37 |
gugl | devstack | 17:37 |
gugl | cinerama: devstack | 17:38 |
*** piet has quit IRC | 17:39 | |
*** piet has joined #openstack-ironic | 17:39 | |
gugl | does Horizon have an ironic panel? | 17:40 |
gugl | or is it shown as part of compute in Horizon? | 17:40 |
TheJulia | gugl: ironic-ui, under early development | 17:40 |
gugl | TheJulia: ic | 17:40 |
TheJulia | gugl: You can use Ironic via nova or directly, depending on what you want to achieve in the end. | 17:42 |
gugl | TheJulia: would like to set a list of all baremetal instances, deactivate and delete baremetal instances, and view baremetal instance details | 17:44 |
*** blakec1 has quit IRC | 17:44 | |
gugl | *get | 17:44 |
*** blakec1 has joined #openstack-ironic | 17:44 | |
TheJulia | gugl: So in that case it seems like you would want nova, although there is a distinct difference between an unused node and a deployed instance. You'll need to use the ironic command line to add your baremetal nodes, and likely do whatever your environment requires configuration-wise so they can be used for instance deployments by nova | 17:48 |
gugl | TheJulia: so if it is set up to be used by nova, the instances will show up under compute in horizon? otherwise the instances won't show up there, right? | 17:51 |
TheJulia | deployed instances should be visible, if visible to the user in the tenant | 17:52 |
*** piet has quit IRC | 17:55 | |
gugl | TheJulia: let me recap... if it is not set up to be used by nova, then both deployed and undeployed instances will show up under compute (nova); if it is not set up to be used by nova, then the undeployed instances will not show up in compute, only deployed instances. right? | 17:58 |
*** blakec1 has quit IRC | 17:58 | |
gugl | the first one should be *if it is set up to be used by nova | 17:58 |
*** piet has joined #openstack-ironic | 17:59 | |
TheJulia | gugl: please define what you mean by undeployed | 17:59 |
gugl | not provisioned | 17:59 |
*** praneshp has joined #openstack-ironic | 17:59 | |
TheJulia | nodes not provisioned will only be visible within the ironic cli or the ironic-ui as "available" nodes | 18:00 |
*** blakec1 has joined #openstack-ironic | 18:00 | |
TheJulia | gugl: we also have concepts of nodes being cleaned after deletion or prior to being made available, manageable but not available, and enrolled but not available. This is part of our state machine, which you can see a diagram of at http://docs.openstack.org/developer/ironic/dev/states.html | 18:01 |
gugl | TheJulia: those states are only visible in ironic... not in nova, right? | 18:02 |
TheJulia | Essentially, all you should see as a nova user is nodes in the active state | 18:03 |
gugl | TheJulia: ic | 18:04 |
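[Editor's note: a simplified, partial sketch of the provision-state flow being described; the states document linked above is authoritative, and the verbs and intermediate states here are deliberately trimmed for brevity.]

```python
# Simplified, partial sketch of ironic's provision-state flow. This is an
# illustration only -- consult the states diagram for the real machine.
SIMPLIFIED_TRANSITIONS = {
    "enroll":     ["manageable"],              # after the node's credentials are verified
    "manageable": ["available"],               # "provide" (optionally cleaning first)
    "available":  ["active"],                  # deploy, e.g. via nova boot
    "active":     ["deleting"],                # nova delete / ironic node delete
    "deleting":   ["cleaning", "available"],   # cleaned, then back in the pool
}

def can_transition(src, dst):
    """Check whether dst is a (simplified) next hop from src."""
    return dst in SIMPLIFIED_TRANSITIONS.get(src, [])

print(can_transition("available", "active"))  # True
print(can_transition("enroll", "active"))     # False
```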
*** ijw has joined #openstack-ironic | 18:05 | |
*** trown|lunch is now known as trown | 18:05 | |
*** rloo has quit IRC | 18:05 | |
TheJulia | gugl: We have an instance_uuid field in our database/api/cli/ui that ties back to the nova instance UUID | 18:06 |
*** jjohnson2_ has quit IRC | 18:06 | |
*** blakec1 has quit IRC | 18:08 | |
*** jjohnson2 has joined #openstack-ironic | 18:09 | |
gugl | TheJulia: Once it shows up in nova, it can be deactivated and deleted through nova; it is also visible in ironic, I assume | 18:10 |
*** [1]cdearborn has joined #openstack-ironic | 18:10 | |
TheJulia | gugl: the deployed instance, yes; deleting the instance in nova just transitions the node to the DELETING state in our state machine | 18:11 |
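[Editor's note: a small sketch of the instance_uuid linkage described above -- given a nova instance UUID, look up the backing ironic node. This assumes python-ironicclient; the exact get_client() keyword arguments vary by release, so treat the token/endpoint handling below as illustrative.]

```python
# Sketch only: assumes python-ironicclient is installed and that the caller
# supplies a valid token and ironic endpoint. Field names follow the ironic
# node object (uuid, provision_state).
from ironicclient import client

def node_for_instance(instance_uuid, token, endpoint):
    ironic = client.get_client(1, os_auth_token=token, ironic_url=endpoint)
    # Ironic stores the nova instance UUID on the node, so the node can be
    # queried directly by instance_uuid.
    node = ironic.node.get_by_instance_uuid(instance_uuid)
    return node.uuid, node.provision_state

# Usage (placeholder values):
# print(node_for_instance("<nova-instance-uuid>", token="<token>",
#                         endpoint="http://ironic-api:6385/"))
```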
TheJulia | stepping away for a little bit, bbiab | 18:12 |
*** ppiela has quit IRC | 18:12 | |
*** rloo has joined #openstack-ironic | 18:13 | |
*** fragatina has quit IRC | 18:13 | |
*** [1]cdearborn has quit IRC | 18:13 | |
*** fragatina has joined #openstack-ironic | 18:14 | |
*** [1]cdearborn has joined #openstack-ironic | 18:14 | |
gugl | TheJulia: thanks for the help | 18:18 |
*** fragatina has quit IRC | 18:19 | |
*** blakec1 has joined #openstack-ironic | 18:20 | |
*** Haomeng|2 has quit IRC | 18:21 | |
*** Haomeng|2 has joined #openstack-ironic | 18:21 | |
*** ppiela has joined #openstack-ironic | 18:22 | |
*** irf has quit IRC | 18:22 | |
*** irf has joined #openstack-ironic | 18:23 | |
openstackgerrit | Clif Houck proposed openstack/ironic-specs: Add spec for image caching https://review.openstack.org/310594 | 18:24 |
*** Sukhdev has joined #openstack-ironic | 18:24 | |
*** Haomeng|2 has quit IRC | 18:24 | |
*** Sukhdev has quit IRC | 18:26 | |
*** Sukhdev has joined #openstack-ironic | 18:27 | |
*** irf has quit IRC | 18:29 | |
*** mjturek1 has quit IRC | 18:30 | |
*** mkoderer__ has quit IRC | 18:32 | |
*** fragatina has joined #openstack-ironic | 18:40 | |
*** dprince has quit IRC | 18:42 | |
*** baoli has joined #openstack-ironic | 18:45 | |
*** mjturek1 has joined #openstack-ironic | 18:49 | |
*** ChrisAusten has joined #openstack-ironic | 18:55 | |
*** alejandrito has joined #openstack-ironic | 18:55 | |
*** alejandrito has quit IRC | 18:56 | |
*** alejandrito has joined #openstack-ironic | 18:57 | |
*** alejandrito has quit IRC | 18:57 | |
jroll | JayF: bored? :) https://review.openstack.org/#/c/311757/ | 19:01 |
JayF | You're aware I can't land that, right mister ptl? I'm not core on ironic ;) | 19:02 |
JayF | +1'd though, looks sane to me | 19:02 |
jroll | ah dang | 19:02 |
jroll | lol | 19:02 |
jroll | thanks | 19:02 |
*** piet has quit IRC | 19:03 | |
*** moshele has joined #openstack-ironic | 19:07 | |
*** Mr_T has quit IRC | 19:07 | |
*** Mr_T has joined #openstack-ironic | 19:13 | |
*** yolanda has quit IRC | 19:14 | |
*** blakec1 has quit IRC | 19:18 | |
*** blakec1 has joined #openstack-ironic | 19:21 | |
*** baoli has quit IRC | 19:23 | |
*** [1]cdearborn has quit IRC | 19:26 | |
*** [1]cdearborn has joined #openstack-ironic | 19:26 | |
*** absubram has joined #openstack-ironic | 19:30 | |
openstackgerrit | Merged openstack/ironic: Fix tox cover command https://review.openstack.org/306854 | 19:35 |
openstackgerrit | Merged openstack/ironic: Devstack: allow extra PXE params https://review.openstack.org/311757 | 19:44 |
*** baoli has joined #openstack-ironic | 19:45 | |
*** baoli_ has joined #openstack-ironic | 19:47 | |
*** moshele has quit IRC | 19:50 | |
*** baoli has quit IRC | 19:51 | |
*** lucasagomes has quit IRC | 19:53 | |
openstackgerrit | Jarrod Johnson proposed openstack/pyghmi: Add disk inventory when possible from Lenovo IMM https://review.openstack.org/311826 | 19:54 |
*** absubram has quit IRC | 19:56 | |
*** absubram has joined #openstack-ironic | 19:58 | |
*** lucasagomes has joined #openstack-ironic | 20:00 | |
*** mjturek1 has quit IRC | 20:02 | |
*** mjturek1 has joined #openstack-ironic | 20:02 | |
*** mjturek1 has quit IRC | 20:06 | |
*** mjturek1 has joined #openstack-ironic | 20:07 | |
*** mjturek1 has quit IRC | 20:07 | |
*** baoli has joined #openstack-ironic | 20:08 | |
*** mjturek1 has joined #openstack-ironic | 20:08 | |
*** baoli_ has quit IRC | 20:08 | |
*** baoli has quit IRC | 20:08 | |
*** baoli has joined #openstack-ironic | 20:09 | |
*** mjturek2 has joined #openstack-ironic | 20:12 | |
*** mjturek1 has quit IRC | 20:12 | |
openstackgerrit | Jarrod Johnson proposed openstack/pyghmi: Add disk inventory when possible from Lenovo IMM https://review.openstack.org/311826 | 20:15 |
*** mjturek1 has joined #openstack-ironic | 20:15 | |
openstackgerrit | Jarrod Johnson proposed openstack/pyghmi: Add disk inventory when possible from Lenovo IMM https://review.openstack.org/311826 | 20:16 |
*** mjturek1 has quit IRC | 20:16 | |
*** mjturek2 has quit IRC | 20:16 | |
openstackgerrit | Merged openstack/pyghmi: Add disk inventory when possible from Lenovo IMM https://review.openstack.org/311826 | 20:22 |
*** izaakk_ has joined #openstack-ironic | 20:22 | |
*** serverascode_ has joined #openstack-ironic | 20:23 | |
*** vdrok_ has joined #openstack-ironic | 20:23 | |
*** Sukhdev has quit IRC | 20:27 | |
*** rbudden has quit IRC | 20:27 | |
*** sylwesterB_ has joined #openstack-ironic | 20:28 | |
*** yhvh- has joined #openstack-ironic | 20:28 | |
*** bapalm_ has joined #openstack-ironic | 20:28 | |
*** sylwesterB has quit IRC | 20:29 | |
*** bapalm has quit IRC | 20:29 | |
*** izaakk has quit IRC | 20:29 | |
*** odyssey4me has quit IRC | 20:29 | |
*** morgabra has quit IRC | 20:29 | |
*** mrda has quit IRC | 20:29 | |
*** mmedvede has quit IRC | 20:29 | |
*** vdrok has quit IRC | 20:29 | |
*** serverascode has quit IRC | 20:29 | |
*** yhvh has quit IRC | 20:29 | |
*** izaakk_ is now known as izaakk | 20:29 | |
*** sylwesterB_ is now known as sylwesterB | 20:29 | |
*** vdrok_ is now known as vdrok | 20:29 | |
*** mrda has joined #openstack-ironic | 20:30 | |
*** baoli has quit IRC | 20:31 | |
*** morgabra has joined #openstack-ironic | 20:32 | |
*** vishwanathj has joined #openstack-ironic | 20:33 | |
*** serverascode_ is now known as serverascode | 20:34 | |
*** odyssey4me has joined #openstack-ironic | 20:35 | |
*** adu has joined #openstack-ironic | 20:35 | |
*** moshele has joined #openstack-ironic | 20:35 | |
*** jjohnson2 has quit IRC | 20:43 | |
*** mjturek1 has joined #openstack-ironic | 20:44 | |
*** jaybeale has quit IRC | 20:49 | |
*** ijw has quit IRC | 20:51 | |
*** rbudden has joined #openstack-ironic | 20:52 | |
*** mmedvede has joined #openstack-ironic | 20:52 | |
*** piet has joined #openstack-ironic | 20:53 | |
*** Goneri has quit IRC | 20:57 | |
*** dmk0202 has joined #openstack-ironic | 21:00 | |
*** adu has quit IRC | 21:01 | |
*** Sukhdev has joined #openstack-ironic | 21:09 | |
*** intr1nsic has quit IRC | 21:09 | |
*** xek_ has joined #openstack-ironic | 21:10 | |
*** mjturek1 has quit IRC | 21:10 | |
*** intr1nsic has joined #openstack-ironic | 21:10 | |
*** anush has quit IRC | 21:10 | |
*** Sukhdev has quit IRC | 21:10 | |
*** xek has quit IRC | 21:11 | |
*** trown is now known as trown|outtypewww | 21:11 | |
*** anush has joined #openstack-ironic | 21:11 | |
*** dmk0202 has quit IRC | 21:12 | |
*** jlvillal has quit IRC | 21:12 | |
*** jlvillal has joined #openstack-ironic | 21:12 | |
*** mjturek1 has joined #openstack-ironic | 21:14 | |
*** adu has joined #openstack-ironic | 21:21 | |
*** mbound has joined #openstack-ironic | 21:25 | |
*** [1]cdearborn has quit IRC | 21:26 | |
*** mjturek1 has quit IRC | 21:36 | |
openstackgerrit | Jim Rollenhagen proposed openstack/ironic: Test post don't upvote https://review.openstack.org/311865 | 21:42 |
openstackgerrit | Jim Rollenhagen proposed openstack/ironic: Test post don't upvote https://review.openstack.org/311865 | 21:43 |
*** afaranha has joined #openstack-ironic | 21:43 | |
*** caiobo has joined #openstack-ironic | 21:44 | |
JayF | jroll: a little curious what you're up to | 21:45 |
jroll | JayF: grenade things | 21:45 |
jroll | see the depends-on | 21:45 |
JayF | anything I can do to help? | 21:45 |
jroll | nah, I'm heading out | 21:46 |
jroll | so what I've done so far is | 21:46 |
jroll | run devstack on onmetal with 64 VMs | 21:46 |
JayF | that's right, you're EST now. d'oh. | 21:46 |
jroll | run tempest smoke | 21:46 |
jroll | see what breaks | 21:46 |
JayF | exclude those tests, continue | 21:47 |
JayF | got it | 21:47 |
jroll | like, the tests I skipped, just don't work | 21:47 |
jroll | because devstack vms have one nic, they require two | 21:47 |
JayF | got it | 21:47 |
jroll | but now I think I hit some races | 21:47 |
jroll | and then running in serial, I think I hit weird dirty environment things | 21:47 |
JayF | so is there a doc for doing what you're doing? mocking out the gate? just installing devstack and going with it? | 21:47 |
JayF | I want to push button, receive gate vm | 21:48 |
jroll | so now I run in the gate and start fresh tomorrow | 21:48 |
JayF | got it | 21:48 |
jroll | not really, jlvillal has some things for making it gate-like | 21:48 |
*** Sukhdev has joined #openstack-ironic | 21:48 | |
jroll | I'm just running devstack with the config from our docs, but 64 VMs | 21:48 |
jlvillal | 64? | 21:48 |
JayF | gotcha | 21:48 |
jroll | oh and turns out the pxe append params thing needs noapic on onmetal | 21:48 |
jroll | jlvillal: yes | 21:48 |
jlvillal | Cool :) | 21:49 |
jroll | jlvillal: 128gb ram | 21:49 |
jroll | oh and HARDWARE NOT NESTED VIRT | 21:49 |
* jroll wants hw in the gate real bad | 21:49 |
*** Sukhdev has quit IRC | 21:49 | |
adu | has anyone gotten ironic to work with non-x86 platforms, like raspberry pi? | 21:52 |
JayF | rpi doesn't support the primitives needed to be provisioned with ironic | 21:54 |
JayF | like the ability to pxe boot or remotely configure stuff | 21:55 |
adu | yeah, I'm aware, but you can create a specially crafted SD card to do network boot similar to pxe, like a tftp-downloaded kernel, etc | 21:56 |
jroll | but you'll overwrite that sd card with an image each time you deploy :/ | 21:56 |
adu | jroll: so, just have all images contain the specially crafted bootloader... | 21:57 |
jroll | yeah, I suppose | 21:57 |
*** ametts has quit IRC | 21:57 | |
jroll | so to answer your question, no, I don't believe anyone has run ironic against a raspi :) | 21:57 |
*** wajdi has quit IRC | 21:58 | |
adu | jroll: I'd like to try | 21:58 |
jroll | I'd love to see the results :) | 21:58 |
*** Sukhdev has joined #openstack-ironic | 21:58 | |
jroll | I have a raspi here on my desk that is currently unemployed | 21:58 |
JayF | you'd also need external power control | 21:58 |
adu | jroll: I have about 10 unemployed raspis | 21:58 |
* jroll looks at his finger | 21:59 | |
jroll | fake power driver ftw | 21:59 |
JayF | egad | 21:59 |
adu | I have a WeMo switch I can use for remote power control | 21:59 |
JayF | honestly if you're using fake power driver, I think an array of sd card writers would be more efficient than ironic | 21:59 |
JayF | lol | 21:59 |
jroll | JayF: real hardware test bed though | 22:00 |
jroll | clearly I'm not going to deploy a DC full of pis | 22:00 |
* jroll would rather deploy a kitchen full of pies | 22:00 | |
JayF | I mean, fake power driver, nearly fake management driver | 22:00 |
JayF | you're not testing a lot at that point | 22:00 |
adu | http://www.belkin.com/us/F7C027-Belkin/p/P-F7C027;jsessionid=C8BF5079206E98A3A3D6D1A67CF50A96/ | 22:00 |
JayF | I don't think we have a wemo power driver, but that's a cool as hell idea for one | 22:01 |
adu | I've also seen people deploy *pairs* of raspis, one for remote management and one for the actual work | 22:01 |
*** ChrisAusten has quit IRC | 22:02 | |
adu | since they're so cheap | 22:02 |
adu | you could just have one always running that ironic-agent thing | 22:02 |
adu | or does that assume it's running on the target? | 22:03 |
JayF | well that's not exactly how the ironic-agent-thing works :P | 22:03 |
JayF | IPA is a provisioning ramdisk | 22:03 |
JayF | aka API calls like "erase my disk" and "write this image" | 22:03 |
JayF | not "reboot me" and such, which is generally left to BMCs/PDUs | 22:03 |
adu | JayF: well, sorry, but the documentation is hard to read, since I've never been able to get ironic or nova to work | 22:03 |
adu | I don't know how it's supposed to be; all I know is rumors and documentation | 22:04 |
jroll | adu: well, here's how ironic deploys a thing, in terms of the interactions between ironic and hardware http://docs.openstack.org/developer/ironic/deploy/user-guide.html#example-1-pxe-boot-and-iscsi-deploy-process | 22:04 |
*** afaranha has quit IRC | 22:05 | |
jroll | there's some more general things about how it interacts with nova if you scroll up | 22:05 |
adu | so iscsi is a block storage protocol, right? | 22:06 |
*** rbudden has quit IRC | 22:08 | |
*** praneshp has quit IRC | 22:08 | |
jroll | adu: yes, so that deployment method exposes the hard disk as an iscsi device, which ironic mounts and writes | 22:10 |
jroll | below that is one that does not use iscsi, but rather http+dd | 22:11 |
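[Editor's note: as a rough illustration of the "http + dd" style deploy jroll mentions, the agent streams the image over HTTP and writes it straight to the target block device. The URL and device below are placeholders, and the real ironic-python-agent does much more (checksums, configdrive, partitioning, error handling).]

```python
# Rough sketch of the "stream an image over HTTP and dd it to disk" idea.
# IMAGE_URL and DEVICE are placeholders; do not point this at a real disk.
import urllib.request

IMAGE_URL = "http://example.com/images/my-image.raw"  # placeholder
DEVICE = "/dev/sda"                                    # placeholder
CHUNK = 4 * 1024 * 1024  # write in 4 MiB chunks

def stream_image_to_disk(url, device):
    """Download the image and write it directly to the block device."""
    with urllib.request.urlopen(url) as src, open(device, "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(chunk)

if __name__ == "__main__":
    stream_image_to_disk(IMAGE_URL, DEVICE)
```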
*** mbound has quit IRC | 22:11 | |
adu | raspi/raspbian already supports iscsi and uboot (which implements some of PXE for the raspi), and I would probably disable ipmi because the servers are always on, so I think in theory it should work; I just need to figure out how to configure everything | 22:11 |
jroll | sure, if you can get it to work that would be awesome | 22:12 |
jroll | ironic needs to be able to reboot the server and such, though, hence ipmi | 22:12 |
adu | well, full IPMI support would require a pair deployment | 22:14 |
adu | pairs would also allow you to have the second raspi hooked up to the raw console | 22:14 |
jroll | sure | 22:15 |
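[Editor's note: picking up JayF's WeMo idea from earlier, here is a very rough sketch of what such a power driver could look like, following the general shape of ironic's PowerInterface. Ironic has no WeMo driver; the wemo_toggle() helper and the wemo_address driver_info key are invented for illustration.]

```python
# Hypothetical WeMo power driver sketch. Everything WeMo-specific here is
# made up; only the PowerInterface shape (get/set power state, reboot)
# follows ironic's driver base class.
from ironic.common import states
from ironic.drivers import base


def wemo_toggle(address, turn_on):
    """Hypothetical helper that flips the WeMo switch at `address`."""
    raise NotImplementedError("illustrative placeholder")


class WeMoPower(base.PowerInterface):
    def get_properties(self):
        return {'wemo_address': 'IP of the WeMo switch. Required.'}

    def validate(self, task):
        # A real driver would raise an ironic exception type here.
        if not task.node.driver_info.get('wemo_address'):
            raise ValueError('wemo_address is required')

    def get_power_state(self, task):
        # A real driver would query the switch; assume "on" for the sketch.
        return states.POWER_ON

    def set_power_state(self, task, pstate):
        address = task.node.driver_info['wemo_address']
        wemo_toggle(address, turn_on=(pstate == states.POWER_ON))

    def reboot(self, task):
        # Power cycle: off, then on.
        self.set_power_state(task, states.POWER_OFF)
        self.set_power_state(task, states.POWER_ON)
```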
*** moshele has quit IRC | 22:15 | |
*** absubram has quit IRC | 22:16 | |
*** ijw has joined #openstack-ironic | 22:16 | |
*** fragatin_ has joined #openstack-ironic | 22:19 | |
*** fragatin_ has quit IRC | 22:19 | |
jlvillal | jroll: Or anyone else. Would there be an objection to getting rid of the ironic-bm-logs directory? And moving that content up one directory? | 22:20 |
jlvillal | clarkb is asking me about it over in infra | 22:20 |
jroll | I'll pop over there | 22:20 |
*** fragatina has quit IRC | 22:22 | |
*** ijw has quit IRC | 22:22 | |
*** adu has quit IRC | 22:26 | |
*** blakec1 has quit IRC | 22:27 | |
*** david-lyle has quit IRC | 22:28 | |
*** mkovacik has quit IRC | 22:34 | |
*** lucasagomes has quit IRC | 22:36 | |
*** fragatina has joined #openstack-ironic | 22:39 | |
*** fragatina has quit IRC | 22:39 | |
*** fragatina has joined #openstack-ironic | 22:40 | |
*** lucasagomes has joined #openstack-ironic | 22:41 | |
*** piet has quit IRC | 22:49 | |
*** davidlenwell has quit IRC | 22:55 | |
*** caiobo has quit IRC | 23:02 | |
*** gugl has quit IRC | 23:02 | |
*** davidlenwell has joined #openstack-ironic | 23:03 | |
*** ijw has joined #openstack-ironic | 23:10 | |
*** Nisha has joined #openstack-ironic | 23:11 | |
*** rloo has quit IRC | 23:13 | |
*** ijw has quit IRC | 23:16 | |
*** jrist has quit IRC | 23:19 | |
*** Sukhdev has quit IRC | 23:22 | |
*** Sukhdev has joined #openstack-ironic | 23:25 | |
*** blakec1 has joined #openstack-ironic | 23:27 | |
*** vishwanathj has quit IRC | 23:30 | |
*** keedya has joined #openstack-ironic | 23:33 | |
*** jrist has joined #openstack-ironic | 23:34 | |
*** ChrisAusten has joined #openstack-ironic | 23:34 | |
*** keedya has quit IRC | 23:37 | |
*** keedya has joined #openstack-ironic | 23:38 | |
*** Nisha has quit IRC | 23:48 | |
*** wajdi has joined #openstack-ironic | 23:51 | |
*** blakec1 has quit IRC | 23:51 | |
*** chlong has joined #openstack-ironic | 23:52 | |
*** rloo has joined #openstack-ironic | 23:52 | |
*** mtanino has quit IRC | 23:56 | |
*** jaybeale has joined #openstack-ironic | 23:58 | |
*** piet has joined #openstack-ironic | 23:58 |