Friday, 2015-11-13

08:48 *** ronis has joined #akanda
13:12 *** ronis has quit IRC
17:12 *** cleverdevil has joined #akanda
18:09 <stupidnic> adam_g: just an update. I went ahead and wiped and reinstalled our OS install from the ground up, and I now have astara/akanda working, but the instances aren't spawning on the compute nodes (the issue is on my side). I can see that the rug is trying to communicate with the instance, though, so it looks like those directions are pretty good. I have made a couple of modifications to them for clarity.
18:10 <stupidnic> I will also work on creating an upstart script so that astara starts on boot
18:28 <adam_g> stupidnic, nice!
18:29 <adam_g> stupidnic, you might be interested in some .deb packaging I started last cycle: https://github.com/gandelman-a/akanda-packaging/tree/master/deb/akanda-rug/debian  I'm going to be dusting that off soon and doing some proper astara packages; your upstart script would be a welcome contribution there
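A minimal upstart job along these lines could serve as a starting point (a sketch only; the 'akanda-rug-service' entry point name and the config path are assumptions and should be checked against your install):

    cat > /etc/init/akanda-rug.conf <<'EOF'
    # Hypothetical upstart job for the akanda/astara orchestrator (the rug).
    # Binary name and config file path are assumptions; adjust to your install.
    description "akanda rug orchestrator"
    start on runlevel [2345]
    stop on runlevel [!2345]
    respawn
    exec akanda-rug-service --config-file /etc/akanda-rug/rug.ini
    EOF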
18:30 <stupidnic> adam_g: I will check them out
18:32 <cleverdevil> that's awesome news, stupidnic.
18:32 <cleverdevil> pull requests to the docs are welcome, of course :)
18:32 <cleverdevil> (or patches, I should say)
18:32 <stupidnic> well, adam_g did most of the work
18:33 <stupidnic> I just tweaked them for clarity
18:47 <stupidnic> I do have an implementation question. I am using linuxbridge; do I need the linuxbridge agent running on the controller? I am thinking yes, but just confirming
19:19 *** cleverdevil has quit IRC
19:28 *** ronis has joined #akanda
20:40 <stupidnic> Okay, so further testing: I can create instances on my own from Cirros images in glance with no problem; however, when creating an Akanda router from the image I am getting an error on the compute node
20:40 <stupidnic> the traceback references image_meta.disk_format != 'raw'
20:40 <stupidnic> am I missing some metadata from glance?
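One quick way to rule that out is to look at what glance actually recorded for the appliance image (a sketch; the grep pattern and the image UUID placeholder are illustrative):

    glance image-list | grep -i astara
    glance image-show <image-uuid> | grep -E "disk_format|container_format"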
20:57 <stupidnic> Okay, confirming this here... I went into a tenant project, spawned an instance using the Akanda router image, and it booted up fine.
20:58 <stupidnic> Something in the call from the rug to bring up the router instance isn't correct
21:00 <stupidnic> adam_g: having a problem spawning an akanda/astara router instance on the compute nodes
21:00 <stupidnic> http://paste.openstack.org/show/vTs2RhwP1nCIcwTAczLW/
21:06 *** cleverdevil has joined #akanda
21:23 <adam_g> stupidnic, do you have the command you used to publish the image handy?
21:23 <stupidnic> sure
21:24 <stupidnic> glance image-create --name "Astara Router" --file ./akanda_appliance.raw --disk-format raw --container-format bare --progress
21:27 <adam_g> stupidnic, that's strange, you say booting the same image manually works?
21:27 <stupidnic> Yep
21:27 <adam_g> stupidnic, using the same flavor?
21:27 <stupidnic> just a normal instance setup using horizon... boots fine
21:27 <stupidnic> yep
21:27 <stupidnic> let me confirm 100%
21:28 <adam_g> stupidnic, what version of nova are you running?
21:28 <stupidnic> 2.30.1, installed from Ubuntu's Liberty repo
21:28 <stupidnic> Yep, same flavor, booting from the image and creating a new volume
21:29 <stupidnic> instance boots
21:30 <stupidnic> I am double-checking that my image_uuid matches the uuid in my glance
21:30 <stupidnic> yep, confirmed
21:30 <adam_g> your traceback looks different from the nova code I'm looking at
21:30 <adam_g> https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/storage/rbd_utils.py?h=stable/liberty#n193
21:31 <stupidnic> they match
21:32 <adam_g> image_meta.disk_format vs image_meta.get('disk_format')
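A quick way to see which of the two variants your installed package actually ships (a sketch; the dist-packages path assumes the Ubuntu cloud-archive packages under Python 2.7):

    grep -n "disk_format" \
        /usr/lib/python2.7/dist-packages/nova/virt/libvirt/storage/rbd_utils.py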
21:32 <stupidnic> hmmm
21:32 <stupidnic> okay... yeah, I see what you are saying
21:32 <adam_g> stupidnic, this makes me really curious; what does 'dpkg -l | grep nova' show for the version?
21:33 <stupidnic> checking that now
21:33 <stupidnic> ii  nova-common  2:12.0.0-0ubuntu2~cloud0  all  OpenStack Compute - common files
21:33 <stupidnic> that seems off
21:34 <adam_g> looks right
21:34 <stupidnic> looking at what is installed on the controller, though... that matches what is on the compute nodes
21:35 <stupidnic> so why does the branch not match?
21:35 <stupidnic> or tag, I guess
21:35 <adam_g> not sure what you mean
21:35 <stupidnic> the one that is tagged as stable/liberty
21:35 <stupidnic> doesn't match what is in the packages
21:35 <adam_g> I still don't follow
21:35 <adam_g> 2:12.0.0-0ubuntu2~cloud0 is the liberty release
21:36 <stupidnic> https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/storage/rbd_utils.py?h=stable/liberty#n193
21:36 <stupidnic> but that doesn't match my rbd_utils.py
21:36 <adam_g> yea
21:36 <adam_g> downloading the ubuntu package to see if, for some reason, they're patching it
21:37 <adam_g> hmm
21:37 <adam_g> oh, duh
21:37 <adam_g> the top of stable/liberty is: Fix attribute error when cloning raw images in Ceph
21:37 <stupidnic> ah... hahah
21:37 <stupidnic> well, that would explain it
21:37 <adam_g> the ubuntu package is built from the liberty release; after the next point release you'll have that fix
21:38 <stupidnic> okay... so manual patch time
21:38 <adam_g> yea.. :\
21:38 <adam_g> https://review.openstack.org/#/c/237801/
21:38 <stupidnic> sometimes... when you are on the bleeding edge... you get cut
21:39 <stupidnic> weird how I didn't trigger the bug with the normal instance stand-up
21:39 <adam_g> yeah, I'm not sure either
21:40 <stupidnic> my images are all raw to take advantage of CoW
21:40 <cleverdevil> were you booting that instance as volume-backed?
21:40 <cleverdevil> different code path
21:40 <cleverdevil> (because OpenStack)
21:40 <adam_g> cleverdevil, lol
21:41 <stupidnic> cleverdevil: so "boot from image (creates a new volume)" has a completely different code path?
21:41 <cleverdevil> indeed.
21:41 <cleverdevil> this is one of my biggest OpenStack pet peeves :)
21:41 <stupidnic> man...
21:42 <stupidnic> alright, well that explains it at least
21:42 <cleverdevil> yeah, that's why the fix that adam_g pointed out was in *nova*
21:42 <cleverdevil> the fact that nova has anything to do with block devices at all is just plain silly.
21:42 <cleverdevil> but, hey, it's a historical artifact
21:42 <stupidnic> right... of course, how silly of me
21:43 <cleverdevil> a byproduct of the fact that nova used to be compute+network+blockstorage
21:43 <cleverdevil> in spite of the existence of neutron and cinder :P
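For reference, the two boot paths being compared look roughly like this from the CLI (a sketch; the flavor name and the image/network UUID placeholders are illustrative):

    # image-backed boot: nova's libvirt rbd image backend clones the glance image
    # itself, which is the code path that hit the rbd_utils bug
    nova boot --flavor m1.small --image <image-uuid> --nic net-id=<net-uuid> test-image-backed

    # "boot from image (creates a new volume)": cinder builds the volume from the
    # image and nova only attaches it, so nova's rbd clone code is never exercised
    nova boot --flavor m1.small --nic net-id=<net-uuid> \
        --block-device source=image,id=<image-uuid>,dest=volume,size=10,bootindex=0,shutdown=remove \
        test-volume-backed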
21:43 <stupidnic> Alright... well, now I have to figure out how to patch this
21:44 <adam_g> stupidnic, one sec, I'll build you a pkg
21:44 <adam_g> (for testing)
21:44 <stupidnic> adam_g: cool... thanks
21:44 <adam_g> oh wait
21:44 <stupidnic> I was just going to take the files and put them into SaltStack and push that out
21:44 <adam_g> maybe I can't; I forgot this is cloud archive, so I can't just use the lp PPA
21:44 <stupidnic> no problem
21:45 <adam_g> stupidnic, that's probably easier
21:45 <stupidnic> Yeah, I'll do that
21:45 <adam_g> building a wily backport of nova for trusty is a bit of a PITA using lp PPAs
21:45 <stupidnic> no problem. I am going to just grab the rbd_utils.py and push that out to the compute nodes
21:46 <stupidnic> thankfully it is only the one file
21:46 <adam_g> and only one line, at that
21:50 <stupidnic> two, actually; saltstack gives me a diff
21:51 <stupidnic> there are two references to disk_format
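A sketch of the manual backport being discussed (paths assume the Ubuntu cloud-archive packages under Python 2.7; verify against your layout before distributing via SaltStack):

    # fetch the stable/liberty version of the file that carries the fix
    curl -o /tmp/rbd_utils.py \
        "https://git.openstack.org/cgit/openstack/nova/plain/nova/virt/libvirt/storage/rbd_utils.py?h=stable/liberty"
    # drop it in place on each compute node and restart nova-compute
    cp /tmp/rbd_utils.py /usr/lib/python2.7/dist-packages/nova/virt/libvirt/storage/rbd_utils.py
    service nova-compute restart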
21:51 *** cleverdevil has quit IRC
21:51 <stupidnic> :)
21:53 <stupidnic> on to the next error
21:56 *** ronis has quit IRC
22:04 *** cleverdevil has joined #akanda
22:17 <stupidnic> adam_g: alright... so the next error I am getting is NovaException: Unexpected vif_type=binding_failed
22:17 <stupidnic> I assume this is a configuration error on my part
22:25 <stupidnic> again, as before, I can stand up instances through horizon that have network interfaces attached to them, no problem
22:26 <stupidnic> they obviously don't get IP addresses since there is no router yet, but the interface is there
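vif_type=binding_failed generally means ml2 could not find an agent or mechanism driver able to bind the port. A couple of hedged sanity checks (file names follow the Ubuntu Liberty packaging and may differ in your layout):

    # on the controller: confirm the linuxbridge mechanism driver is enabled for ml2
    grep -E "^(mechanism_drivers|type_drivers|tenant_network_types)" \
        /etc/neutron/plugins/ml2/ml2_conf.ini

    # on the compute node: confirm the agent maps the provider physnet to a real interface
    grep -E "^(physical_interface_mappings|enable_vxlan|local_ip)" \
        /etc/neutron/plugins/ml2/linuxbridge_agent.ini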
22:28 <adam_g> stupidnic, got a trace?
22:28 <stupidnic> adam_g: I am looking at the neutron logs now
22:28 <stupidnic> I am seeing something about "Executable not found: conntrack (filter match = conntrack)"
22:28 <stupidnic> so maybe that's the reason
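That message just means the conntrack binary is missing on the node; installing it is cheap even if it turns out not to be the root cause (Ubuntu/Debian package name assumed):

    apt-get install -y conntrack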
22:34 <stupidnic> adam_g: here you go http://paste.openstack.org/show/aopJYRkqSU8AjzJb9ZlP/
22:35 <stupidnic> I saw some errors regarding conntrack in the compute nodes' neutron plugin logs, but that wasn't the issue
22:35 <adam_g> stupidnic, I'd expect there's another error buried somewhere in one of your neutron logs
22:36 <stupidnic> I am tailing both nova-compute and plugin-linuxbridge on the compute nodes
22:37 <stupidnic> anywhere else to look?
22:39 <adam_g> stupidnic, is plugin-linuxbridge = the linuxbridge agent?
22:39 <stupidnic> yeah
22:39 <adam_g> are you passing the ml2 config file to both the neutron-server and the agent?
22:40 <stupidnic> Hmmm... is there a setting for that in neutron.conf?
22:41 <adam_g> stupidnic, no, you just pass it to the daemon as an additional '--config-file' argument
22:41 <stupidnic> hmmm, okay. let me confirm that
22:41 <adam_g> i.e.
22:41 <adam_g> /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
22:41 <stupidnic> yeah, this is all from Ubuntu's packages, so I think they are doing that, but let me confirm
22:42 <stupidnic> yeah, it's there
22:48 <adam_g> stupidnic, if you run 'neutron agent-list', do you see the lb agent running?
22:57 <stupidnic> yes
23:00 <stupidnic> Like I said, I can spin up an instance with horizon and it has no problem
23:00 *** cleverdevil has quit IRC
23:01 <stupidnic> I built a cirros image and it boots without issue and has an eth0
23:13 <stupidnic> Is it possible that I created the management network in the wrong project?
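One way to check which tenant owns the management network, and to recreate it under the service tenant if needed (a sketch only; the network name, the service tenant placeholder, and the fdca:3ba5:a17a:acda::/64 prefix follow the astara install docs' defaults and may differ in your deployment):

    # see which tenant currently owns the management network
    neutron net-show mgt -F tenant_id

    # recreate it under the service tenant if it landed in the wrong project
    neutron net-create --tenant-id <service-tenant-id> mgt
    neutron subnet-create --tenant-id <service-tenant-id> --name mgt-subnet \
        --ip-version 6 mgt fdca:3ba5:a17a:acda::/64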
23:38 *** cleverdevil has joined #akanda
