*** ronis has joined #akanda | 08:48 | |
*** ronis has quit IRC | 13:12 | |
*** cleverdevil has joined #akanda | 17:12 | |
stupidnic | adam_g: just an update. I went ahead and wiped and reinstalled our OS install from the ground up and I now have astara/akanda working, but the instances aren't spawning on the compute nodes (issue is on my side) - but I can see that the rug is trying to communicate with the instance, so it looks like those directions are pretty good. I have made a couple of modifications to them for clarity | 18:09 |
stupidnic | I will also work on creating an upstart script for getting astara to start on boot via upstart | 18:10 |
adam_g | stupidnic, nice! | 18:28 |
adam_g | stupidnic, you might be interested in some .deb packaging i started last cycle https://github.com/gandelman-a/akanda-packaging/tree/master/deb/akanda-rug/debian i'm going to be dusting that off soon and doing some proper astara packages, your upstart script would be a welcome contribution there | 18:29 |
stupidnic | adam_g: I will check them out | 18:30 |
cleverdevil | that's awesome news, stupidnic. | 18:32 |
cleverdevil | pull requests to the docs are welcome, of course :) | 18:32 |
cleverdevil | (or, patches, I should say) | 18:32 |
stupidnic | well adam_g did most of the work | 18:32 |
stupidnic | I just tweaked them for clarity | 18:33 |
stupidnic | I do have an implementation question. I am using linuxbridge, do I need the linuxbridge agent running on the controller? I am thinking yes, but just confirming | 18:47 |
*** cleverdevil has quit IRC | 19:19 | |
*** ronis has joined #akanda | 19:28 | |
stupidnic | Okay. So further testing, I can create instances on my own from Cirros images in glance with no problem, however creating an Akanda router from the image I am getting an error on the compute node | 20:40 |
stupidnic | the traceback references image_meta.disk_format != 'raw' | 20:40 |
stupidnic | am I missing some metadata from glance? | 20:40 |
stupidnic | Okay. So confirming this here... I went into a tenant project and spawned an instance using the Akanda Router and booted up fine. | 20:57 |
stupidnic | Something in the call from the rug to bring up the router instance isn't correct | 20:58 |
stupidnic | adam_g: having a problem spawning an akanda/astara router instance in the compute nodes | 21:00 |
stupidnic | http://paste.openstack.org/show/vTs2RhwP1nCIcwTAczLW/ | 21:00 |
*** cleverdevil has joined #akanda | 21:06 | |
adam_g | stupidnic, do you have the command you used to publish the image handy? | 21:23 |
stupidnic | sure | 21:23 |
stupidnic | glance image-create --name "Astara Router" --file ./akanda_appliance.raw --disk-format raw --container-format bare --progress | 21:24 |
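For reference, a rough python-glanceclient (v2) equivalent of that upload; the endpoint and token are placeholder assumptions, not values from the conversation. It also reads back the stored disk_format, which is the metadata question raised above.

    from glanceclient import Client

    # Placeholders: substitute your own Glance endpoint and auth token.
    glance = Client('2', endpoint='http://controller:9292', token='ADMIN_TOKEN')

    # Create the image record, then upload the raw appliance file.
    image = glance.images.create(name='Astara Router',
                                 disk_format='raw',
                                 container_format='bare')
    with open('./akanda_appliance.raw', 'rb') as data:
        glance.images.upload(image.id, data)

    # Confirm the metadata Glance recorded; the appliance image should be raw.
    print(glance.images.get(image.id).disk_format)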
adam_g | stupidnic, thats strange, you say booting the same image manually works? | 21:27 |
stupidnic | Yep | 21:27 |
adam_g | stupidnic, using the same flavor? | 21:27 |
stupidnic | just a normal instance setup using horizon... boots fine | 21:27 |
stupidnic | yep | 21:27 |
stupidnic | let me confirm 100% | 21:27 |
adam_g | stupidnic, what version of nova are you running? | 21:28 |
stupidnic | 2.30.1 installed from Ubuntu's Liberty repo | 21:28 |
stupidnic | Yep, same flavor, booting from the image and creating a new volume | 21:28 |
stupidnic | instance boots | 21:29 |
stupidnic | I am double checking my image_uuid matches the uuid in my glance | 21:30 |
stupidnic | yep confirmed | 21:30 |
adam_g | your traceback looks different than the nova code im looking at | 21:30 |
adam_g | https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/storage/rbd_utils.py?h=stable/liberty#n193 | 21:30 |
stupidnic | they match | 21:31 |
adam_g | image_meta.disk_format vs image_meta.get('disk_format') | 21:32 |
stupidnic | hmmm | 21:32 |
stupidnic | okay... yeah I see what you are saying | 21:32 |
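For context, the mismatch above is the classic dict-style vs. object-style access problem: in Liberty, image_meta can arrive as a nova ImageMeta object rather than a plain dict, so code written for one shape raises AttributeError when handed the other. A minimal, self-contained sketch of the failure mode (the class below is a stand-in for illustration, not nova's actual ImageMeta):

    class ImageMeta(object):
        """Stand-in for an object-style image_meta (illustrative only)."""
        def __init__(self, disk_format):
            self.disk_format = disk_format

    dict_meta = {'disk_format': 'raw'}   # dict-style payload
    obj_meta = ImageMeta('raw')          # object-style payload

    dict_meta.get('disk_format')         # fine
    obj_meta.disk_format                 # fine

    try:
        obj_meta.get('disk_format')      # dict-style call on an object
    except AttributeError as exc:
        print(exc)                       # 'ImageMeta' object has no attribute 'get'

    try:
        dict_meta.disk_format            # object-style access on a dict
    except AttributeError as exc:
        print(exc)                       # 'dict' object has no attribute 'disk_format'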
adam_g | stupidnic, this makes me really curious, whats 'dpkg -l | grep nova' show for version? | 21:32 |
stupidnic | checking that now | 21:33 |
stupidnic | ii nova-common 2:12.0.0-0ubuntu2~cloud0 all OpenStack Compute - common files | 21:33 |
stupidnic | that seems off | 21:33 |
adam_g | looks right | 21:34 |
stupidnic | looking at what is installed on the controller though... that matches what is on the compute nodes | 21:34 |
stupidnic | so why does the branch not match? | 21:35 |
stupidnic | or tag I guess | 21:35 |
adam_g | not sure what you mean | 21:35 |
stupidnic | the one that is tagged as stable/liberty | 21:35 |
stupidnic | doesn't match what is in the packages | 21:35 |
adam_g | i still dont follow | 21:35 |
adam_g | 2:12.0.0-0ubuntu2~cloud0 is the liberty release | 21:35 |
stupidnic | https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/storage/rbd_utils.py?h=stable/liberty#n193 | 21:36 |
stupidnic | but that doesn't match my rbd_utils.py | 21:36 |
adam_g | yea | 21:36 |
adam_g | downloading the ubuntu package to see if, for some reason, theyre patching it | 21:36 |
adam_g | hmm | 21:37 |
adam_g | oh duh | 21:37 |
adam_g | the top of stable/liberty is: Fix attribute error when cloning raw images in Ceph | 21:37 |
stupidnic | ah... hahah | 21:37 |
stupidnic | well that would explain it | 21:37 |
adam_g | the ubuntu package is built from the liberty release, after the next point release you'll have that one | 21:37 |
stupidnic | okay... so manual patch time | 21:38 |
adam_g | yea.. :\ | 21:38 |
adam_g | https://review.openstack.org/#/c/237801/ | 21:38 |
stupidnic | sometimes... when you are on the bleeding edge... you get cut | 21:38 |
stupidnic | weird how I didn't trigger the bug with the normal instance stand up | 21:39 |
adam_g | yeah, im not sure either | 21:39 |
stupidnic | my images are all raw to take advantage of CoW | 21:40 |
cleverdevil | were you booting that instance as volume-backed? | 21:40 |
cleverdevil | different code path | 21:40 |
cleverdevil | (because openstack) | 21:40 |
adam_g | cleverdevil, lol | 21:40 |
stupidnic | cleverdevil: so "boot from image (creates a new volume)" has a completely different code path? | 21:41 |
cleverdevil | indeed. | 21:41 |
cleverdevil | this is one of my biggest OpenStack pet peeves :) | 21:41 |
stupidnic | man... | 21:41 |
stupidnic | alright well that explains it at least | 21:42 |
cleverdevil | yeah, that's why the fix that adam_g pointed out was in *nova* | 21:42 |
cleverdevil | the fact that nova has anything to do with block devices at all is just plain silly. | 21:42 |
cleverdevil | but, hey, its a historical artifact | 21:42 |
stupidnic | right... of course how silly of me | 21:42 |
cleverdevil | a byproduct of the fact that nova used to be compute+network+blockstorage | 21:43 |
cleverdevil | in spite of the existence of neutron and cinder :P | 21:43 |
stupidnic | Alright... well now I have to figure out how to patch this | 21:43 |
adam_g | stupidnic, one sec ill build you a pkg | 21:44 |
adam_g | (for testing) | 21:44 |
stupidnic | adam_g: cool... thanks | 21:44 |
adam_g | oh wait | 21:44 |
stupidnic | I was just going to take the files and put them into SaltStack and push that out | 21:44 |
adam_g | maybe i can't, i forgot this is cloud archive so i can't just use the lp PPA | 21:44 |
stupidnic | no problem | 21:44 |
adam_g | stupidnic, thats probably easier | 21:45 |
stupidnic | Yeah, ill do that | 21:45 |
adam_g | building a wily backport of nova for trusty is a bit of a PITA using lp PPAs | 21:45 |
stupidnic | no problem. I am going to just grab the rbd_utils.py and then push that out to the compute nodes | 21:45 |
stupidnic | thankfully it is only the one file | 21:46 |
adam_g | and only one line, at that | 21:46 |
stupidnic | two actually, saltstack gives me a diff | 21:50 |
stupidnic | there are two references to disk_format | 21:51 |
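The actual upstream change is in the review linked above; purely as an illustration of how both call sites could tolerate either image_meta shape while waiting for the next point release, a defensive helper might look like this (a hypothetical shim, not the nova patch):

    def get_disk_format(image_meta):
        """Return disk_format whether image_meta is a dict or an ImageMeta object.

        Illustrative shim only; the real fix is in the review linked above.
        """
        if hasattr(image_meta, 'disk_format'):
            return image_meta.disk_format
        return image_meta.get('disk_format')

    # Both call sites in rbd_utils.py would then share the same check, e.g.:
    # if get_disk_format(image_meta) != 'raw':
    #     raise RuntimeError('rbd cloning requires a raw image')  # placeholder exception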
*** cleverdevil has quit IRC | 21:51 | |
stupidnic | :) | 21:51 |
stupidnic | on to the next error | 21:53 |
*** ronis has quit IRC | 21:56 | |
*** cleverdevil has joined #akanda | 22:04 | |
stupidnic | adam_g: alright... so the next error I am getting is NovaException: Unexpected vif_type=binding_failed | 22:17 |
stupidnic | I assume this is a configuration error on my part | 22:17 |
stupidnic | again as before I can stand up instances through horizon that have network interfaces attached to them no problem | 22:25 |
stupidnic | they obviously don't get IP addresses due to no router, but the interface is there | 22:26 |
adam_g | stupidnic, got a trace? | 22:28 |
stupidnic | adam_g: I am looking at the neutron logs now | 22:28 |
stupidnic | I am seeing something about Executable not found: conntrack (filter match = conntrack) | 22:28 |
stupidnic | so maybe that's the reason | 22:28 |
stupidnic | adam_g: here you go http://paste.openstack.org/show/aopJYRkqSU8AjzJb9ZlP/ | 22:34 |
stupidnic | I saw some errors regarding conntrack in the compute node's neutron plugin log, but that wasn't the issue | 22:35 |
adam_g | stupidnic, id expect theres another error buried somewhere in one of your neutron logs | 22:35 |
stupidnic | I am tailing both nova-compute and plugin-linuxbridge on the compute nodes | 22:36 |
stupidnic | anywhere else to look? | 22:37 |
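One more place to look is the port itself: a failed bind is recorded on the Neutron port as binding:vif_type = binding_failed, and binding:host_id shows which host the bind was attempted on. A hedged python-neutronclient sketch, with placeholder credentials and auth URL:

    from neutronclient.v2_0 import client

    # Placeholder admin credentials; substitute your environment's values.
    neutron = client.Client(username='admin',
                            password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # List every port whose binding failed, plus the host it was tried on.
    for port in neutron.list_ports()['ports']:
        if port.get('binding:vif_type') == 'binding_failed':
            print(port['id'], port.get('binding:host_id'), port['network_id'])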
adam_g | stupidnic, is plugin-linuxbridge = the linuxbridge agent ? | 22:39 |
stupidnic | yeah | 22:39 |
adam_g | are you passing the ml2 config file to both the neutron-server and the agent? | 22:39 |
stupidnic | Hmmm... Is there a setting for that in the neutron.conf? | 22:40 |
adam_g | stupidnic, no, you just pass it to the daemon as an additional '--config-file' argument | 22:41 |
stupidnic | hmmm okay. let me confirm that | 22:41 |
adam_g | ie | 22:41 |
adam_g | /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini | 22:41 |
stupidnic | yeah this is all from Ubuntu's packages so I think they are doing that, but let me confirm | 22:41 |
stupidnic | yeah it's there | 22:42 |
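For what it's worth, the reason both files go on the daemon's command line is that oslo.config merges every --config-file argument into one namespace, with later files winning on conflicts. A tiny sketch, assuming the two files exist at the paths shown:

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.StrOpt('core_plugin')])  # sample option from neutron.conf

    # Same pattern as the neutron-server command line above.
    conf(['--config-file', '/etc/neutron/neutron.conf',
          '--config-file', '/etc/neutron/plugins/ml2/ml2_conf.ini'])

    print(conf.core_plugin)   # value from whichever file defines it last
    print(conf.config_file)   # list of every config file that was loaded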
adam_g | stupidnic, if you run 'neutron agent-list' do you see the lb agent running? | 22:48 |
stupidnic | yes | 22:57 |
stupidnic | Like I said I can spin up an instance with horizon and it has no problem | 23:00 |
*** cleverdevil has quit IRC | 23:00 | |
stupidnic | I built a cirros image and it boots without issue and has an eth0 | 23:01 |
stupidnic | Is it possible that I created the management network in the wrong project? | 23:13 |
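To check that, one option is to list each network alongside the project (tenant) that owns it and confirm the management network lives where the rug expects. A hedged python-neutronclient sketch, again with placeholder credentials:

    from neutronclient.v2_0 import client

    # Placeholder admin credentials; admin rights are needed to see every
    # project's networks.
    neutron = client.Client(username='admin',
                            password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    for net in neutron.list_networks()['networks']:
        print(net['name'], net['tenant_id'], net['id'])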
*** cleverdevil has joined #akanda | 23:38 |