*** jheiss has joined #openstack | 00:00 | |
*** littleidea has quit IRC | 00:01 | |
*** JordanRinke_ has joined #openstack | 00:01 | |
JordanRinke_ | Any swift ninjas around to give me a quick assist? | 00:01 |
letterj | JordanRinke_: What's up? | 00:05 |
JordanRinke_ | doing a 3 node install - 2 of the boxes work fine | 00:08 |
JordanRinke_ | the 3rd object replicator fails, but I can't figure out how to track it down / get debugging info out of it | 00:09 |
letterj | replicator fails? To start? To run? Did you check to see if rsync is enabled? | 00:09 |
letterj | Also, double check the *.conf files are the same on all 3 servers? | 00:10 |
*** rnirmal has quit IRC | 00:12 | |
*** enigma has quit IRC | 00:13 | |
JordanRinke_ | yeah, confirmed the config files are the same on all the boxes | 00:13 |
JordanRinke_ | and rsync is configured the same too | 00:13 |
JordanRinke_ | when I try to start object-replicator it says it makes a pass, and then I never see anything from it again and the process is no longer running | 00:13 |
JordanRinke_ | is there any way to make these services spit out verbose debugging? that would be a huge help and I can't find anything on how to configure that | 00:14 |
JordanRinke_ | the last line I get right now is "Starting replication pass." and then nothing, it dies right after startup | 00:15 |
letterj | What do you see when you run the swift-object-replicator /etc/swift/object-server.conf | 00:15 |
JordanRinke_ | on console, nothing | 00:16 |
JordanRinke_ | in /var/log/messages | 00:16 |
JordanRinke_ | the last line is it starting the replication pass, and that is it | 00:17 |
JordanRinke_ | is there a flag in the conf that I can throw for super verbose or debug logging? | 00:17 |
letterj | In the default section of the config file you can set LOG_LEVEL=DEBUG | 00:18 |
*** Ryan_Lane has quit IRC | 00:19 | |
letterj | So you aren't redirecting log messages anywhere special | 00:19 |
letterj | Sorry, LOG_LEVEL=debug | 00:19 |
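A sketch of what letterj is suggesting; in Swift's sample configs the option is usually written lowercase as `log_level`, so verify the exact spelling against your installed version:

```ini
# /etc/swift/object-server.conf (excerpt)
[DEFAULT]
# lowercase value, per letterj's correction
log_level = debug
```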
*** dendrobates is now known as dendro-afk | 00:19 | |
JordanRinke_ | uh, same results | 00:20 |
JordanRinke_ | no additional output anywhere that I could tell | 00:21 |
*** Ryan_Lane has joined #openstack | 00:21 | |
letterj | Any rsync errors? | 00:21 |
JordanRinke_ | I don't see any in /var/log/messages | 00:22 |
JordanRinke_ | I am a little concerned that setting the logging level to debug did nothing for me | 00:22 |
*** clauden has quit IRC | 00:24 | |
letterj | Post the config file so I can take a look at it. | 00:26 |
gholt | I'm not sure it'd run any different, but you can also try: swift-object-replicator /etc/swift/object-server.conf once | 00:29 |
JordanRinke_ | trying that, getting the same failure scenario | 00:30 |
JordanRinke_ | http://paste.openstack.org/show/843/ | 00:30 |
vishy | JordanRinke: might want to check to make sure there aren't any spurious iptables rules on that host | 00:31 |
JordanRinke_ | vishy: checked, exact same rules on working and non working box | 00:32 |
gholt | If you ls /etc/swift you've got rings and /etc/swift/swift.conf with the hash in it? | 00:33 |
*** winston-d has joined #openstack | 00:33 | |
* gholt is stabbing wildly | 00:33 | |
*** kashyap has quit IRC | 00:34 | |
winston-d | gholt: Hi there. Can we continue the investigation on weird object-server activity? | 00:34 |
letterj | Also, check the permissions on the /srv/node/<blah> directories. Make sure they are owned by swift | 00:35 |
JordanRinke_ | letterj: they are | 00:35 |
JordanRinke_ | if we could at least figure out how to increase the logging level, I would have something to go on... I can't believe I can't find anything to do verbose/debug level logging | 00:35 |
winston-d | gholt: we were talking about python and eventlet version... | 00:36 |
letterj | JordanRinke_: Can you paste the config from a working box? | 00:37 |
gholt | winston-d: I asked some of the guys here and everyone is at a complete loss as to why the code would get stuck in a 'system' read loop. It makes no sense. | 00:37 |
JordanRinke_ | letterj: it is literally the exact same | 00:37 |
*** bkkrw has quit IRC | 00:38 | |
gholt | I'm out for the night guys, I'm past my work hours already and I have a lot of stuff to get done tomorrow. | 00:38 |
winston-d | gholt: well... object-server was stuck writing 'system' to tmp... | 00:38 |
winston-d | gholt: ok, have a nice night and see u tomorrow | 00:39 |
letterj | Just to be clear: when you run swift-object-replicator <config>, it runs for a time, writes something to the log saying it made a replication pass, and then exits with no error. | 00:40 |
JordanRinke_ | runs for a time, being like a half a second, yes | 00:41 |
letterj | Can you paste what is written in the log? | 00:41 |
JordanRinke_ | Mar 9 19:29:10 cybera2 object-replicator Starting object replicator in daemon mode. Mar 9 19:29:10 cybera2 object-replicator Starting object replication pass. | 00:41 |
JordanRinke_ | and that is it | 00:41 |
winston-d | any chance redbo and creiht still around? | 00:42 |
letterj | winston-d: No they are gone for the day | 00:43 |
winston-d | ...bad luck | 00:43 |
letterj | JordanRinke_: Anything in dmesg? | 00:44 |
JordanRinke_ | One moment... I rebooted it... windows roots showing through ;) | 00:45 |
*** reldan has quit IRC | 00:46 | |
*** hadrian has quit IRC | 00:47 | |
JordanRinke_ | nope | 00:47 |
JordanRinke_ | no new messages in dmesg when I try to run it | 00:47 |
*** rchavik has joined #openstack | 00:48 | |
letterj | If the configs are the same it has to be something wrong with that box. I'm at a loss. Sorry | 00:48 |
*** kashyap has joined #openstack | 00:50 | |
letterj | I'm going to head out for some dinner. | 00:52 |
*** retr0h has quit IRC | 00:53 | |
kpepple | JordanRinke_: based on your log, it appears to be dying either on replicator.py line 526 (stats = eventlet.spawn(self.heartbeat)) or 527 (lockup_detector = eventlet.spawn(self.detect_lockups)) .... or the log is getting eaten somewhere else. you might want to make sure you have the correct eventlet version on that machine | 00:57 |
JordanRinke_ | nice, do you have any idea how to increase the debugging on the service? | 00:57 |
JordanRinke_ | nice dude, eventlet 9.12, upgraded to 9.13... works | 00:59 |
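Version mismatches like this one are easiest to rule out up front. A small stdlib-only helper (the function name and approach are mine, not from the channel) that reports whatever version a module advertises:

```python
def module_version(name):
    """Return a module's __version__ string, 'unknown' if it defines none,
    or None if the module cannot be imported at all."""
    try:
        mod = __import__(name)
    except ImportError:
        return None
    return getattr(mod, "__version__", "unknown")

# e.g. module_version("eventlet") would have surfaced the stale 0.9.x
# install on the one misbehaving box before any log spelunking.
```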
*** dendro-afk is now known as dendrobates | 01:00 | |
*** reldan has joined #openstack | 01:00 | |
kpepple | JordanRinke_: good (because you were at the lowest level of debugging anyway :) ) | 01:00 |
*** lamar has joined #openstack | 01:01 | |
* kpepple takes a victory lap | 01:01 | |
*** dragondm has quit IRC | 01:01 | |
dabo | good detective work, kpepple! | 01:02 |
kpepple | dabo: blaming eventlet is always a sound debugging strategy :) | 01:02 |
*** littleidea has joined #openstack | 01:08 | |
*** azneita has quit IRC | 01:14 | |
*** azneita has joined #openstack | 01:15 | |
*** azneita has joined #openstack | 01:15 | |
*** paltman has quit IRC | 01:18 | |
*** reldan has quit IRC | 01:22 | |
*** MarkAtwood has quit IRC | 01:29 | |
*** mahadev has quit IRC | 01:32 | |
winston-d | kpepple: could you help me with my case? | 01:34 |
winston-d | kpepple: some strange Swift issue | 01:35 |
kpepple | winston-d: i can't say i'm a swift expert, but what's the problem ? | 01:35 |
*** ericrw has quit IRC | 01:36 | |
winston-d | kpepple: well, I tried to switch from Devauth to swauth. swauth-add-user always failed because object-server is stuck writing masses of 'system' to tmp. | 01:38 |
kpepple | kpepple: sorry, what does "writing massive 'system' to tmp" mean ? it just keeps writing the same thing to the log ? | 01:39 |
kpepple | winston-d: sorry, see above ^^^^ | 01:39 |
kpepple | winston-d: got a paste of the log ? | 01:39 |
*** DSpair has joined #openstack | 01:39 | |
winston-d | kpepple: well, that's a long story; do you mind looking at the IRC log http://eavesdrop.openstack.org/irclogs/%23openstack.2011-03-09.log starting from 2011-03-09T15:38 | 01:42 |
* kpepple pulls up eavesdrop | 01:43 | |
kpepple | winston-d: are you dillon-w there ? | 01:44 |
winston-d | kpepple: yup, that's another secret identity of mine | 01:45 |
kpepple | winston-d: so http://paste.openstack.org/show/819/ is the log ? | 01:45 |
winston-d | kpepple: yes. | 01:45 |
winston-d | kpepple: that's the log on proxy-server, for 'swauth-add-user'. | 01:46 |
winston-d | kpepple: I stopped all servers and started only object-server, then ran 'swauth-add-user'; proxy-server complained it cannot connect to the account server. Then I started account-server & container-server and ran 'swauth-add-user' again; it still failed (saying no final status of PUT from object-server). BUT once 'swauth-add-user' executed, object-server started writing to tmp, with the word 'system' repeated endlessly! | 01:48 |
kpepple | winston-d: what happens when you just start object-server without any of the other services ? same thing ? | 01:50 |
winston-d | kpepple: if only object-server, 'swauth-add-user' will fail (of course), and object-server won't write to tmp. | 01:51 |
winston-d | kpepple: i think the problem is, proxy-server asks object-server to do PUT, but somehow, object-server starts to write to tmp, and doesn't stop. | 01:53 |
*** dendrobates is now known as dendro-afk | 01:53 | |
gholt | Home now. Best we can tell, swift/obj/server.py gets stuck in a loop reading (and therefore writing) 'system' on the line for chunk in iter(lambda: request.body_file.read( | 01:54 |
*** gregp76 has quit IRC | 01:54 | |
gholt | Uhm, I think we were going to check the webob version next. | 01:54 |
kpepple | gholt: server.py:386 ? | 01:54 |
*** MarkAtwood has joined #openstack | 01:55 | |
gholt | Yeah, though I guess that depends exactly which version he's running, but yeah. | 01:55 |
gholt | Never seen this odd behaviour before | 01:55 |
winston-d | the version of webob? | 01:55 |
winston-d | let me check | 01:55 |
winston-d | WebOb-1.0.3-py2.6.egg | 01:56 |
gholt | I think we verified the correct version of eventlet. And we know you're on RHEL, which we haven't tested, though. | 01:56 |
gholt | Hmm. I've got 1.0-1~lucid2 of webob, but newer should be better, right? :) | 01:57 |
*** mahadev has joined #openstack | 01:58 | |
gholt | I'm giving the newer version of webob a whirl here... | 01:58 |
gholt | Yep, that's it. Finally. :) | 01:58 |
gholt | Mine's going into a crazy loop now. | 01:58 |
kpepple | gholt: funny ... Jordan's problem earlier was his version of eventlet | 01:59 |
gholt | Sheesh, dependencies suck. Heh | 01:59 |
gholt | Let me see if I can figure out "why?!" | 02:00 |
winston-d | gholt: great, finally we found the root cause. | 02:03 |
*** joearnold has quit IRC | 02:05 | |
*** Code_Bleu2 has joined #openstack | 02:05 | |
Code_Bleu2 | I am testing creating volumes... and the status is stuck at "creating". I have checked the objectstore log and I don't see anything there. Is there some other way I can find out why it is stuck on "creating"? | 02:06 |
*** dirakx has joined #openstack | 02:09 | |
Ryan_Lane | Code_Bleu2: check the nova-compute log | 02:10 |
Ryan_Lane | I think this is where I noticed an issue last time | 02:10 |
Ryan_Lane | which driver are you trying to use? | 02:11 |
Code_Bleu2 | ryan_lane: i know nothing about a driver..sorry...im a n00b | 02:11 |
Ryan_Lane | well, if you didn't define a driver in the config file, you are using the default one ;) | 02:12 |
Ryan_Lane | Code_Bleu2: is the nova-objectstore daemon running? | 02:12 |
winston-d | gholt: if i downgrade the version of webob, is it possible to get the problem fixed? | 02:12 |
Code_Bleu2 | Ryan_Lane: as far as i know | 02:13 |
Ryan_Lane | Code_Bleu2: ps -ef | grep objectstore | 02:13 |
gholt | winston-d: Very probably. The version I had that worked was 1.0-1~lucid2, but that's an Ubuntu thing. | 02:13 |
Code_Bleu2 | Ryan_Lane: yes its there | 02:13 |
winston-d | gholt: let me try 1.0.1 | 02:13 |
Ryan_Lane | Code_Bleu2: you just have one node, or multiple? | 02:14 |
Code_Bleu2 | Ryan_Lane: one | 02:14 |
Ryan_Lane | ok. any issues in either the nova-objectstore log, or the nova-compute log? | 02:14 |
Ryan_Lane | any in the scheduler log? | 02:14 |
Code_Bleu2 | Ryan_Lane: havent checked the scheduler log yet | 02:15 |
Ryan_Lane | what's the output of "nova-manage service list"? | 02:15 |
Code_Bleu2 | Ryan_Lane: nothing has stood out yet in any of the other logs | 02:15 |
Ryan_Lane | does it show the objectstore as up? | 02:15 |
Ryan_Lane | :-) means up XXX means down | 02:16 |
Code_Bleu2 | Ryan_Lane: compute, sched, network | 02:16 |
Ryan_Lane | no objectstore? | 02:16 |
Code_Bleu2 | Ryan_Lane: nope | 02:16 |
Ryan_Lane | gah | 02:17 |
Code_Bleu2 | Ryan_Lane: did service nova-objectstore restart..and it still isnt listed | 02:17 |
Ryan_Lane | I should be saying nova-volume | 02:17 |
Ryan_Lane | the service you need is nova-volume | 02:17 |
Ryan_Lane | the objectstore only hands out images. it doesn't do volumes | 02:18 |
Ryan_Lane | (I haven't worked on nova in a few weeks. heh) | 02:18 |
Ryan_Lane | I don't think the objectstore shows up in the service list | 02:18 |
Code_Bleu2 | Ryan_Lane: i dont think i have nova-volume | 02:18 |
Ryan_Lane | Code_Bleu2: you'll need to install nova-volume | 02:18 |
Ryan_Lane | if it isn't already installed | 02:19 |
winston-d | gholt: good news, webob 1.0.1 works... | 02:19 |
Ryan_Lane | you'll also need to make sure it runs. it's kind of a PITA to set up | 02:19 |
Code_Bleu2 | Ryan_Lane: doing it now | 02:19 |
*** littleidea has quit IRC | 02:19 | |
winston-d | gholt: god, this was killing me. it cost me 3 days | 02:19 |
winston-d | kpepple: changing the webob version from 1.0.3 to 1.0.1 fixed the issue. | 02:20 |
Code_Bleu2 | Ryan_Lane: apt-get installed it...but still no service nova-volume | 02:20 |
winston-d | gholt: so, guess i can safely delete those very LARGE tmp files? | 02:20 |
Ryan_Lane | Code_Bleu2: check the log file, it'll likely have an error in it | 02:21 |
Ryan_Lane | Code_Bleu2: I believe the default driver wants you to set up LVM | 02:21 |
Ryan_Lane | and set it up in a pretty specific way | 02:21 |
gholt | winston-d: Oh yeah. Sheesh. webob seems to have changed what they convert their wsgi input to. | 02:21 |
Code_Bleu2 | Ryan_Lane: which log? | 02:21 |
Ryan_Lane | Code_Bleu2: nova-volume | 02:22 |
Code_Bleu2 | Ryan_Lane: doesnt exist | 02:22 |
winston-d | gholt: it'd be great if the swift doc could clearly specify the Python package versions. | 02:22 |
Ryan_Lane | Code_Bleu2: which distro are you using, and are you using the packages? | 02:22 |
Code_Bleu2 | Ryan_Lane: getting this trying to apt-get Err http://ppa.launchpad.net/nova-core/trunk/ubuntu/ maverick/main nova-volume all 2011.2~bzr760-0ubuntu0ppa1~maverick1 | 02:23 |
Code_Bleu2 | 404 Not Found | 02:23 |
Code_Bleu2 | ubuntu 10.10 64bit | 02:23 |
Ryan_Lane | oh. odd | 02:24 |
Ryan_Lane | Code_Bleu2: that happens when you run "apt-get install nova-volume" as root? | 02:24 |
Code_Bleu2 | Ryan_Lane: yes | 02:24 |
*** DSpair has quit IRC | 02:24 | |
Ryan_Lane | Code_Bleu2: run "apt-get update", then try again | 02:25 |
Ryan_Lane | if that doesn't work, maybe it is missing in trunk... | 02:26 |
Code_Bleu2 | Ryan_Lane: working | 02:26 |
Code_Bleu2 | Ryan_Lane: now what..its still in "creating" state | 02:27 |
Ryan_Lane | well, those volumes will likely stay that way | 02:27 |
Ryan_Lane | I don't know how to get them out of that state | 02:27 |
Ryan_Lane | Code_Bleu2: check to make sure volume is actually running, and that the log file has no errors though | 02:27 |
Ryan_Lane | I just started using volume recently, so I can't answer all of your questions ;) | 02:28 |
Code_Bleu2 | Ryan_Lane: volume group nova-volumes doesn't exist | 02:28 |
Ryan_Lane | right | 02:28 |
Ryan_Lane | so it wants a volume group called nova-volumes to exist | 02:28 |
Ryan_Lane | you need to make an LVM volume group for it | 02:28 |
Ryan_Lane | specifically called nova-volumes | 02:28 |
Ryan_Lane | you can use a physical device, a partition, or a disk image for this | 02:29 |
Code_Bleu2 | Ryan_Lane: Can i use any other name...or it has to be that one? | 02:29 |
Ryan_Lane | I think you can use a different name, if you modify the configuration (/etc/nova/nova.conf) | 02:29 |
Ryan_Lane | let me see if I can find the flag | 02:30 |
gholt | winston-d: We'd never keep up with the versioning to be honest. And newer versions that claim backwards compat shouldn't be broken. :/ | 02:30 |
Code_Bleu2 | Ryan_Lane: so you don't know how I can get rid of the volume that is stuck in "creating"? | 02:30 |
Ryan_Lane | Code_Bleu2: the volume group flag is "--volume_group=<volumegroupname>". The default is nova-volumes | 02:31 |
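The flag Ryan_Lane mentions goes in the flagfile; a sketch (nova.conf of this era is a list of --flag lines, and the name must match the VG you actually create):

```
# /etc/nova/nova.conf (excerpt)
--volume_group=nova-volumes
```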
Ryan_Lane | Code_Bleu2: nope. | 02:31 |
winston-d | gholt: is rackspace using Ubuntu for production? | 02:32 |
gholt | winston-d: Yeppers, and just apt-get to install. 1.0.3 hasn't hit, or we would've seen this breakage. | 02:32 |
Ryan_Lane | Code_Bleu2: but once you get the service operating properly, you can create new ones that won't be stuck in that state | 02:32 |
winston-d | gholt: Ubuntu with KVM or Xen? I'm sticking to RHEL because KVM. | 02:33 |
Code_Bleu2 | Ryan_Lane: My current VG is using all my storage space..I guess i will need to shrink my VG to create another one | 02:34 |
gholt | winston-d: Well, Cloud Files and Cloud Servers are separate clusters, so Cloud Files has no virtualization going on. | 02:34 |
Ryan_Lane | Code_Bleu2: is this just for testing? | 02:34 |
Ryan_Lane | Code_Bleu2: if so, make a disk image, make a physical volume using it, and make a new VG | 02:35 |
winston-d | gholt: i see. I guess Cloud Servers are based on xen. | 02:35 |
Ryan_Lane | you can do that inside of your current VG, if you want. it's inefficient, but it works for testing | 02:35 |
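Ryan_Lane's disk-image route can be sketched like this. The path and size are placeholders, the loop/LVM steps need root (guarded below), and this is a test-only setup, not a production layout:

```shell
IMG=/tmp/nova-volumes.img                      # placeholder backing-file path
dd if=/dev/zero of="$IMG" bs=1M count=64       # 64 MB for testing; size to taste
if [ "$(id -u)" -eq 0 ]; then                  # loop device + LVM need root
  LOOPDEV=$(losetup --show -f "$IMG")          # attach file to a free loop device
  pvcreate "$LOOPDEV"                          # make it an LVM physical volume
  vgcreate nova-volumes "$LOOPDEV"             # the default VG name nova expects
fi
```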
Code_Bleu2 | Ryan_Lane: true...didnt think of that | 02:35 |
gholt | winston-d: Yeah, webob 1.0.3 is busted. If you make a webob.Request.blank('/', body='stuff') and then do req.body_file.read() over and over, you get the same 'stuff' over and over. | 02:36 |
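The failure mode gholt describes is easy to model without webob itself: the object server drains the request body with an `iter(...)` sentinel loop, which only terminates when `read()` returns empty at EOF. A sketch of the mechanism (the loop shape is from gholt's quote; the reader classes and guard are stand-ins I added for the demo):

```python
from io import BytesIO

def drain(body_file, chunk_size=65536, max_chunks=100):
    """Read body_file to EOF the way the server's
    for chunk in iter(lambda: request.body_file.read(...), b'') loop does.
    max_chunks is a guard for this demo only."""
    chunks = []
    for i, chunk in enumerate(iter(lambda: body_file.read(chunk_size), b"")):
        if i >= max_chunks:  # a broken reader never signals EOF
            raise RuntimeError("reader never returned EOF; would loop forever")
        chunks.append(chunk)
    return b"".join(chunks)

class BrokenBody:
    """Models the webob 1.0.3 behavior per gholt: every read()
    re-returns the same body instead of b'' at EOF."""
    def __init__(self, body):
        self.body = body
    def read(self, size=-1):
        return self.body  # never empty, so the drain loop never terminates

# drain(BytesIO(b"system")) stops normally; drain(BrokenBody(b"system"))
# raises, modeling the runaway 'system' writes winston-d saw in tmp.
```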
Ryan_Lane | ok. gotta go | 02:36 |
Ryan_Lane | Code_Bleu2: good luck :) | 02:36 |
*** Ryan_Lane is now known as Ryan_Lane|away | 02:36 | |
gholt | winston-d: I'm guessing they do a bf = req.body_file and then read from the bf or something. | 02:36 |
winston-d | gholt: great. so for now, swift will have to stay with webob < 1.0.3. | 02:37 |
notmyname | *sigh* looks like time for another webob patch | 02:37 |
*** bcwaldon has joined #openstack | 02:37 | |
*** Code_Bleu2 has quit IRC | 02:38 | |
*** ccustine has quit IRC | 02:38 | |
gholt | notmyname: Where do I post bugs to them? | 02:39 |
notmyname | good question. it's always been hard to find. let me dig up the links | 02:41 |
gholt | I found their google group, maybe something will surface there | 02:41 |
notmyname | gholt: http://groups.google.com/group/paste-users/topics | 02:41 |
notmyname | ya. | 02:41 |
*** bcwaldon has quit IRC | 02:42 | |
notmyname | gholt: also http://trac.pythonpaste.org/pythonpaste/query?status=assigned&status=new&status=reopened&component=webob&order=priority&col=id&col=summary&col=status&col=type&col=priority&col=milestone&col=component&report=10 | 02:43 |
*** ericrw has joined #openstack | 02:44 | |
*** matiu_ has joined #openstack | 02:47 | |
*** matiu_ has joined #openstack | 02:47 | |
winston-d | gholt: Is there any way I can shorten swauth- commands by omitting '-A https://XXX/auth/'? | 02:49 |
redbo | alias myswauth='swauth -A https://...' :) | 02:50 |
*** matiu has quit IRC | 02:51 | |
*** matiu_ is now known as matiu | 02:51 | |
notmyname | A=https://...; swauth -A $A | 02:51 |
gholt | Someday we should add SWAUTH_ env vars like st has ST_ env vars. :) | 02:52 |
winston-d | well, alias it is. | 02:52 |
redbo | oh wait it's swauth-*. hard to alias that. | 02:52 |
winston-d | .... then ENV it is. | 02:53 |
* gholt adds something else to his ever growing todo list :) | 02:53 | |
redbo | on my todo list, each item ends in "gholt's mom" | 02:53 |
gholt | I really can't believe this webob stuff sometimes. It seems that now, even if I grab the body_file ahead of time and just use that, webob doesn't fully read the request and causes a broken pipe on the remote end. :/ | 02:54 |
gholt | And I love that easy_install has no easy_uninstall still, hehe. | 02:55 |
creiht | gholt: maybe you should try pip | 02:56 |
creiht | it has an uninstall | 02:56 |
gholt | Yeah, keep meaning to; habits... | 02:56 |
* creiht understands | 02:57 | |
*** mahadev has quit IRC | 02:57 | |
winston-d | I saw a lot of logs from the replicators; does that mean they are replicating too frequently? | 02:58 |
gholt | Apparently my pip doesn't have uninstall or something. | 03:00 |
*** zul has quit IRC | 03:03 | |
gholt | Lol, and the newest pip has uninstall but doesn't seem to do anything. | 03:03 |
*** zul has joined #openstack | 03:05 | |
gholt | winston-d: It's probably just running so fast because the cluster is empty right now. It should run no faster than once every 30 seconds (by default). | 03:08 |
winston-d | gholt: is that configurable? | 03:09 |
gholt | There's a run_pause setting | 03:10 |
winston-d | let me check. anyway, using default value should be fine right? | 03:11 |
gholt | If you put run_pause = 300 under your [object-replicator] section for instance it'll pause for five minutes between runs. | 03:11 |
gholt | Yeah, defaults should be fine. | 03:12 |
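gholt's example as it would look in the config (section name from his message; 300 is his five-minute example, 30 the default he cites):

```ini
# /etc/swift/object-server.conf (excerpt)
[object-replicator]
# seconds to pause between replication passes; the default is 30
run_pause = 300
```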
gholt | We rotate storage logs out pretty frequently on the storage nodes themselves, though we keep them for a pretty long time in archive. proxy logs are the ones that really matter for billing and user auditing. | 03:15 |
gholt | And I expect the replicators will get less noisy in the future as we optimize them further. | 03:15 |
winston-d | 'noisy', that's a good word to describe them. :) | 03:16 |
*** ak4d7 has joined #openstack | 03:26 | |
*** mahadev has joined #openstack | 03:26 | |
*** tlehman has joined #openstack | 03:28 | |
*** tlehman has left #openstack | 03:28 | |
*** mahadev has quit IRC | 03:30 | |
gholt | winston-d: This webob thing is annoying; I'm sorry you had to be the one to surface it. :/ | 03:35 |
*** MarkAtwood has quit IRC | 03:36 | |
winston-d | gholt: :) | 03:36 |
*** dillon-w has joined #openstack | 03:37 | |
* dillon-w test | 03:37 | |
* winston-d test against dillon-w | 03:37 | |
*** mahadev has joined #openstack | 03:37 | |
* dillon-w is trying new IRC clients | 03:37 | |
gholt | lol | 03:38 |
*** dillon-w has quit IRC | 03:47 | |
*** ericrw has quit IRC | 03:47 | |
*** dillon-w has joined #openstack | 03:48 | |
*** pvo has joined #openstack | 03:49 | |
dillon-w | winston-d: test | 03:51 |
winston-d | dillon-w: test | 03:51 |
winston-d | test | 03:52 |
* winston-d leaving for lunch... | 03:53 | |
* dillon-w leaving for lunch | 03:53 | |
uvirtbot | New bug: #732403 in nova "ensure_bridge fails when using FlatDHCPManager" [Undecided,New] https://launchpad.net/bugs/732403 | 03:57 |
*** ovidwu has quit IRC | 03:57 | |
*** magglass1 has quit IRC | 03:57 | |
*** ovidwu has joined #openstack | 03:57 | |
*** pvo has quit IRC | 04:00 | |
uvirtbot | New bug: #732405 in nova "RunInstance stopped at networking" [Undecided,New] https://launchpad.net/bugs/732405 | 04:06 |
*** zenflyfishing has joined #openstack | 04:12 | |
zenflyfishing | hello | 04:12 |
HugoKuo | morning | 04:12 |
HugoKuo | did someone tell me anything in the past 12 hours? | 04:12 |
zenflyfishing | I figured it would be good to join the openstack initiative and see if I could lend a hand... | 04:13 |
*** lamar has quit IRC | 04:16 | |
HugoKuo | what's the real meaning of the flat_network_dhcp_start flag? does it mean the IP from which dnsmasq starts assigning IPs to instances? | 04:20 |
HugoKuo | assume this project network is 192.168.2.0/24; so if I set flat_network_dhcp_start=30, does dnsmasq start assigning IPs from 192.168.2.30? | 04:21 |
*** adjohn has joined #openstack | 04:22 | |
zenflyfishing | flat dhcp - PCs are in same broadcast domain as server | 04:25 |
zenflyfishing | from what I can tell the flag is set to tell the server that the PCs are in the same broadcast domain as the server | 04:26 |
*** ak4d7 has quit IRC | 04:27 | |
zenflyfishing | Check this HugoKuo https://code.arc.nasa.gov/nova/devref/network.html | 04:27 |
HugoKuo | zenflyfishing : thx | 04:28 |
HugoKuo | flat_network_dhcp_start: | 04:30 |
HugoKuo | Dhcp start for FlatDhcp .................. I'm not sure what it means :< because I set it to 30 but dhcp still assigns from 192.168.2.2 :< | 04:30 |
*** mahadev has quit IRC | 04:53 | |
Ryan_Lane|away | HugoKuo: I believe the start for that is based on how you created the network using nova-manage | 04:57 |
*** Ryan_Lane|away is now known as Ryan_Lane | 04:57 | |
HugoKuo | Ryan_Lane : nova-manage network $project network create 172.16.2.0/24 1 256 so it writes dhcp_start 172.16.2.2 | 04:59 |
Ryan_Lane | yeah 172.16.2.1 is the gateway | 04:59 |
HugoKuo | so does --flat_network_dhcp_start actually work in nova.conf? | 04:59 |
Ryan_Lane | I'm guessing not (though I haven't actually tried it) | 05:00 |
Ryan_Lane | I'd have to drill down through the source to see what it's supposed to do | 05:00 |
HugoKuo | got it , I'm going to change it in database | 05:01 |
HugoKuo | btw, I want the instance IPs to combine with my corporate network. | 05:01 |
*** Manikandan1 has joined #openstack | 05:02 | |
HugoKuo | if I set the gateway to 172.16.2.1, which is my corporate network gateway... | 05:03 |
Manikandan1 | hi while registering images in openstack using euca2ools | 05:03 |
HugoKuo | It cause me can not get metadata | 05:03 |
Ryan_Lane | well, FlatDHCP does things that you probably don't want | 05:03 |
Manikandan1 | for all ramdisk, kernel and image i got ami-- | 05:03 |
Manikandan1 | its correct | 05:03 |
Ryan_Lane | it assumes the gateway can set NAT rules | 05:03 |
HugoKuo | got it | 05:03 |
Ryan_Lane | the gateway is the nova-network service | 05:04 |
HugoKuo | I'll set it to 127.0.0.1...... | 05:04 |
HugoKuo | oh thanks , I have a last question.... | 05:04 |
Ryan_Lane | so, you should likely set the gateway to the nova-network node, use a backend IP range, and use floating IPs for the corporate network IPs | 05:04 |
*** gregp76 has joined #openstack | 05:05 | |
Ryan_Lane | 127.0.0.1 likely won't work | 05:05 |
HugoKuo | thanks ..... try it now | 05:05 |
Ryan_Lane | the network node will set NAT rules for the floating IPs | 05:05 |
Ryan_Lane | they'll map to the private range | 05:05 |
HugoKuo | thank u so much .... | 05:06 |
Ryan_Lane | you're welcome | 05:07 |
openstackhudson | Project nova build #618: SUCCESS in 1 min 52 sec: http://hudson.openstack.org/job/nova/618/ | 05:08 |
openstackhudson | Tarmac: Fixes nova.sh to run properly the first time. We have to get the zip file after nova-api is running. | 05:08 |
*** magglass1 has joined #openstack | 05:23 | |
*** mahadev has joined #openstack | 05:33 | |
*** f4m8_ is now known as f4m8 | 05:41 | |
*** f4m8 is now known as f4m8_ | 05:41 | |
*** f4m8_ is now known as f4m8 | 05:42 | |
*** mahadev has quit IRC | 05:42 | |
Code_Bleu | Was working on nova-volumes and for whatever reason when i went to restart everything..my networking isnt working. Anyone know what this error is? http://paste.openstack.org/show/844/ | 05:43 |
HugoKuo | Ryan_Lane : thank you so much ...... you figured out all of my problems | 05:44 |
*** naehring has joined #openstack | 05:44 | |
Ryan_Lane | glad to hear it's working for you | 05:44 |
HugoKuo | I'm going to spilt network and scheduler to other machine | 05:44 |
HugoKuo | are there any docs that talk about what should be installed or set up? | 05:45 |
Ryan_Lane | I think the official docs do, but they don't mention doing so using the packages | 05:45 |
HugoKuo | I'm not sure which package should be installed for network and scheduler | 05:45 |
HugoKuo | ok... | 05:45 |
Ryan_Lane | nova-network, and nova-scheduler | 05:46 |
Ryan_Lane | it'll install the rest | 05:46 |
HugoKuo | there's a dirty way....I'm going to install all stuff then disable other service :> | 05:46 |
HugoKuo | thanks | 05:46 |
Ryan_Lane | yw | 05:46 |
HugoKuo | use same nova.conf file which is on CC ? | 05:47 |
Ryan_Lane | I use the same nova.conf on all systems | 05:47 |
openstackhudson | Project nova build #619: SUCCESS in 1 min 50 sec: http://hudson.openstack.org/job/nova/619/ | 05:48 |
openstackhudson | Tarmac: Modifies S3ImageService to wrap LocalImageService or GlanceImageService. It now pulls the parts out of s3, decrypts them locally, and sends them to the underlying service. It includes various fixes for image/glance.py, image/local.py and the tests. | 05:48 |
openstackhudson | I also uncovered a bug in glance so for the glance backend to work properly, it requires the patch to glance here lp:~vishvananda/glance/fix-update or Glance's Cactus trunk r80. | 05:48 |
*** guynaor has joined #openstack | 05:50 | |
*** guynaor has left #openstack | 05:50 | |
*** cascone has joined #openstack | 05:53 | |
Code_Bleu | Was working on nova-volumes and for whatever reason when i went to restart everything..my networking isnt working. Anyone know what this error is? http://paste.openstack.org/show/844/ | 05:55 |
*** MarkAtwood has joined #openstack | 05:55 | |
*** hub_cap has joined #openstack | 06:09 | |
*** mahadev has joined #openstack | 06:10 | |
*** nRy has joined #openstack | 06:18 | |
*** zenmatt has quit IRC | 06:26 | |
*** bluetux has joined #openstack | 06:30 | |
*** mRy has joined #openstack | 06:36 | |
*** ramkrsna has joined #openstack | 06:36 | |
*** mRy is now known as Guest85162 | 06:36 | |
*** nRy has quit IRC | 06:39 | |
HugoKuo | I added a new compute node , and it works fine . so I want to disable the nova-compute service on CC. I use "service nova-manage stop" | 06:40 |
*** gregp76 has quit IRC | 06:40 | |
HugoKuo | But when I use nova-manage service list | 06:41 |
HugoKuo | the compute service is still Enabled :< | 06:41 |
HugoKuo | Jack nova-compute enabled XXX 2011-03-10 06:37:31 | 06:41 |
HugoKuo | Dewars nova-compute enabled XXX 2011-03-10 06:41:44 | 06:42 |
HugoKuo | btw , what is XXX ? is that normal ~? thanks | 06:42 |
kpepple | HugoKuo: use "$ service nova-compute stop" not nova-manage. the XXX means that it is not running | 06:46 |
HugoKuo | oh sorry, just a typo | 06:47 |
HugoKuo | I use service nova-compute stop :> | 06:47 |
HugoKuo | it's ok, even though the service status is Enabled | 06:47 |
HugoKuo | when I run up an instance, the scheduler does not use the resources on the CC | 06:48 |
HugoKuo | kpepple: looks good ................thanks | 06:48 |
HugoKuo | wait, "the XXX means that it is not running" ........... but the Dewars node is running well ~ how come :< | 06:51 |
*** matiu_ has joined #openstack | 06:52 | |
*** matiu_ has joined #openstack | 06:52 | |
*** mgoldmann has joined #openstack | 06:53 | |
*** nerens has joined #openstack | 06:54 | |
kpepple | HugoKuo: the XXX is printed if the service status hasn't been updated within 15 seconds in the DB. enabled is printed if the service is marked enabled in the database. see /usr/bin/nova-manage:566 for the python code. | 06:54 |
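kpepple's description of the listing logic can be sketched as follows; the helper and field names are mine, while the 15-second threshold and the enabled/XXX semantics come from his messages:

```python
from datetime import datetime, timedelta

def service_status(enabled, updated_at, now, max_age=timedelta(seconds=15)):
    """Mimic nova-manage service list: 'enabled' reflects the DB flag,
    ':-)' vs 'XXX' reflects how fresh the service's DB heartbeat is."""
    state = "enabled" if enabled else "disabled"
    alive = ":-)" if (now - updated_at) <= max_age else "XXX"
    return state, alive

# This is why HugoKuo's stopped nova-compute still shows 'enabled' with XXX:
# stopping the daemon stales the heartbeat but leaves the DB flag alone.
```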
HugoKuo | thanks ~ | 06:55 |
*** matiu has quit IRC | 06:55 | |
*** matiu_ is now known as matiu | 06:55 | |
*** j05h has quit IRC | 07:14 | |
Manikandan1 | when i run an instance in openstack it shows in scheduling state; what to do? can anyone help me regarding this? | 07:15 |
*** miclorb_ has quit IRC | 07:15 | |
*** j05h has joined #openstack | 07:21 | |
*** photron has joined #openstack | 07:29 | |
*** allsystemsarego has joined #openstack | 07:30 | |
*** allsystemsarego has joined #openstack | 07:30 | |
*** rchavik has quit IRC | 07:30 | |
*** matiu has quit IRC | 07:31 | |
*** rchavik has joined #openstack | 07:32 | |
*** miclorb_ has joined #openstack | 07:34 | |
winston-d | Manikandan1: scheduling usually takes a while, but if everything is fine, the VM will eventually be scheduled to a certain node and the state will change to running | 07:36 |
*** mRy has joined #openstack | 07:36 | |
*** mRy is now known as Guest22299 | 07:36 | |
*** guigui has joined #openstack | 07:37 | |
*** Guest85162 has quit IRC | 07:40 | |
*** MarcMorata has joined #openstack | 07:55 | |
*** lionel has quit IRC | 07:59 | |
*** lionel has joined #openstack | 07:59 | |
*** reldan has joined #openstack | 08:13 | |
*** azneita has quit IRC | 08:14 | |
*** allsystemsarego has quit IRC | 08:15 | |
*** rcc has joined #openstack | 08:16 | |
*** ianweller has quit IRC | 08:17 | |
*** allsystemsarego has joined #openstack | 08:17 | |
*** localhost has quit IRC | 08:21 | |
*** skiold has joined #openstack | 08:25 | |
ttx | mtaylor, soren: did one of you push novaclient to Hudson ? | 08:26 |
mtaylor | ttx: yeah | 08:26 |
ttx | mtaylor: ok, cool. | 08:26 |
*** mahadev has quit IRC | 08:26 | |
HugoKuo | I separated network & scheduler from the all-in-one | 08:27 |
HugoKuo | after that I stopped the network & scheduler services on that all-in-one machine | 08:28 |
soren | mtaylor: From packages? | 08:28 |
soren | Or something else? | 08:28 |
soren | ttx: What's missing before we can upload it to Ubuntu? | 08:29 |
ttx | soren: a sanity check. | 08:29 |
HugoKuo | then started the network & scheduler services on 172.16.2.79; API & ObjectStore & mysql on 172.16.2.78 ......... | 08:29 |
ttx | soren: though I can't find novaclient in the PPA | 08:29 |
*** CloudChris has joined #openstack | 08:30 | |
ttx | mtaylor: something went wrong in the PPA copy ? | 08:30 |
HugoKuo | instance status is pending on networking ........ should I modify any IP address in nova.conf, which I copied from 172.16.2.78 (all-in-one)? | 08:30 |
ttx | mtaylor: oh, you copied them to nova-core/ppa... probably should have been nova-core/trunk ? | 08:31 |
soren | nova-core is what we use for stuff that gets installed on Hudson. | 08:32 |
soren | Nothing gets put there automatically. | 08:32 |
soren | I'm not sure that's the ideal approach, but that's how we do it right now. | 08:32 |
soren | err.. | 08:32 |
soren | s/nova-core/nova-core\/ppa/g | 08:32 |
ttx | hm | 08:33 |
soren | And s/Hudson/Jenkins/ | 08:33 |
* soren hasn't committed that one to muscle memory yet. | 08:33 | |
ttx | soren: we'll need novaclient in the trunk PPA as well, if we want the ubuntu PPA builds to work, right ? | 08:33 |
soren | Certainly. | 08:33 |
soren | We also want it auto-built and whatnot. | 08:34 |
ttx | soren: ok, please copy them then, so that zones2 landing doesn't break PA builds | 08:34 |
*** kashyap has quit IRC | 08:34 | |
ttx | (hopefully) | 08:34 |
ttx | s/PA/PPA | 08:34 |
soren | Hm... | 08:35 |
soren | Yeah,ok. | 08:35 |
ttx | Once we have builds that show that novaclient is OK for what we need, I'll push to natty NEW | 08:35 |
ttx | just in case we uncover some issue | 08:36 |
*** mRy has joined #openstack | 08:36 | |
*** mRy is now known as Guest36834 | 08:37 | |
*** DigitalFlux has joined #openstack | 08:37 | |
*** MarcMorata has quit IRC | 08:38 | |
soren | ttx: You could file the FFe now, just for good measure. | 08:39 |
*** Nacx has joined #openstack | 08:39 | |
*** Guest22299 has quit IRC | 08:40 | |
ttx | ack. | 08:41 |
Manikandan1 | i have installed in one machine only | 08:43 |
*** miclorb_ has quit IRC | 08:44 | |
Manikandan1 | how to add nodes in node to nova | 08:44 |
*** reldan has quit IRC | 08:44 | |
soren | nodes in node? | 08:44 |
soren | I'm not sure what that means | 08:44 |
*** eikke has joined #openstack | 08:45 | |
HugoKuo | Manikandan1: | 08:46 |
HugoKuo | did you mean you want an architecture like eucalyptus CC and NC? | 08:47 |
*** cascone has quit IRC | 08:48 | |
*** kashyap has joined #openstack | 08:51 | |
*** bkkrw has joined #openstack | 08:51 | |
Manikandan1 | yes | 08:55 |
*** burris has quit IRC | 08:55 | |
*** burris has joined #openstack | 08:56 | |
HugoKuo | I think yes | 08:59 |
HugoKuo | set bridge in compute node | 09:00 |
HugoKuo | then connect to network node | 09:00 |
HugoKuo | set iptables on network node | 09:00 |
*** daveiw has joined #openstack | 09:01 | |
openstackhudson | Project dashboard-tarmac build #10,396: FAILURE in 1 min 52 sec: http://hudson.openstack.org/job/dashboard-tarmac/10396/ | 09:02 |
openstackhudson | Project nova-bexar-tarmc build #6,208: FAILURE in 35 sec: http://hudson.openstack.org/job/nova-bexar-tarmc/6208/ | 09:03 |
openstackhudson | Project nova-tarmac build #68,007: FAILURE in 35 sec: http://hudson.openstack.org/job/nova-tarmac/68007/ | 09:03 |
ttx | soren: LP downtime: ^^ | 09:04 |
openstackhudson | Project dashboard-tarmac build #10,397: STILL FAILING in 35 sec: http://hudson.openstack.org/job/dashboard-tarmac/10397/ | 09:06 |
openstackhudson | Project nova-bexar-tarmc build #6,209: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-bexar-tarmc/6209/ | 09:07 |
openstackhudson | Project nova-tarmac build #68,008: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/68008/ | 09:07 |
openstackhudson | Project dashboard-tarmac build #10,398: STILL FAILING in 35 sec: http://hudson.openstack.org/job/dashboard-tarmac/10398/ | 09:11 |
*** gaveen has joined #openstack | 09:11 | |
openstackhudson | Project nova-bexar-tarmc build #6,210: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-bexar-tarmc/6210/ | 09:12 |
HugoKuo | can not start up nova-api http://pastebin.com/AipyZu7R | 09:12 |
openstackhudson | Project nova-tarmac build #68,009: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/68009/ | 09:12 |
*** ramkrsna has quit IRC | 09:13 | |
HugoKuo | I did disable nova-compute nova-network nova-scheduler and emptied the networks and fixed_ips tables in the mysql DB | 09:14 |
ttx | HugoKuo: looks like you haven't set a --state_path, it looks for keys in the /usr/lib/pymodules directory | 09:14 |
* ttx fetches some coffee | 09:14 | |
HugoKuo | you r right ....... | 09:15 |
HugoKuo | trying ~ | 09:15 |
soren | ttx: Yeah, looks like it's coffee break time. | 09:16 |
openstackhudson | Project dashboard-tarmac build #10,399: STILL FAILING in 35 sec: http://hudson.openstack.org/job/dashboard-tarmac/10399/ | 09:16 |
* soren disables a bunch of jenkins jobs | 09:16 | |
openstackhudson | Project nova-bexar-tarmc build #6,211: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-bexar-tarmc/6211/ | 09:17 |
openstackhudson | Project nova-tarmac build #68,010: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/68010/ | 09:17 |
* soren sobs | 09:17 | |
HugoKuo | ttx : thanks .....it resolved my problem | 09:18 |
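Editor's note: ttx's diagnosis was that with no `--state_path`, nova falls back to a path relative to the installed package (hence the key lookup under /usr/lib/pymodules). A hypothetical flagfile excerpt of the fix, values illustrative:

```ini
# /etc/nova/nova.conf (hypothetical excerpt)
# Without --state_path, nova derives its state directory from the
# package location, e.g. under /usr/lib/pymodules, which is where the
# failing key lookup pointed.
--state_path=/var/lib/nova
```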
*** irahgel has joined #openstack | 09:20 | |
*** drico has joined #openstack | 09:21 | |
*** gaveen has quit IRC | 09:22 | |
HugoKuo | additional question ...... while I isolated network+scheduler from the Cloud Controller (API+mysql+objectstore) ... | 09:26 |
*** ramkrsna has joined #openstack | 09:26 | |
HugoKuo | should I set bridge in this machine ? | 09:26 |
*** hadrian has joined #openstack | 09:27 | |
*** gasbakid has joined #openstack | 09:29 | |
rcc | launchpad offline QQ | 09:30 |
soren | rcc: Indeed. | 09:30 |
*** ChanServ sets mode: +o soren | 09:30 | |
*** miclorb has joined #openstack | 09:30 | |
*** soren changes topic to "Launchpad in read-only mode. Expected back at 10:30 UTC | Wiki: http://wiki.openstack.org/ | Nova Docs: nova.openstack.org | Swift Docs: swift.openstack.org | Logs: http://eavesdrop.openstack.org/irclogs/ | http://paste.openstack.org/" | 09:30 | |
*** soren sets mode: -o soren | 09:31 | |
*** gaveen has joined #openstack | 09:35 | |
*** adjohn has quit IRC | 09:36 | |
*** mRy has joined #openstack | 09:36 | |
*** mRy is now known as Guest92841 | 09:37 | |
*** Guest36834 has quit IRC | 09:40 | |
*** ChanServ sets mode: +o soren | 09:40 | |
*** soren changes topic to "Wiki: http://wiki.openstack.org/ | Nova Docs: nova.openstack.org | Swift Docs: swift.openstack.org | Logs: http://eavesdrop.openstack.org/irclogs/ | http://paste.openstack.org/" | 09:40 | |
openstackhudson | Yippie, build fixed! | 09:40 |
openstackhudson | Project dashboard-tarmac build #10,400: FIXED in 2.1 sec: http://hudson.openstack.org/job/dashboard-tarmac/10400/ | 09:40 |
openstackhudson | Yippie, build fixed! | 09:41 |
openstackhudson | Project nova-bexar-tarmc build #6,212: FIXED in 1.9 sec: http://hudson.openstack.org/job/nova-bexar-tarmc/6212/ | 09:41 |
openstackhudson | Yippie, build fixed! | 09:46 |
openstackhudson | Project nova-tarmac build #68,011: FIXED in 2.9 sec: http://hudson.openstack.org/job/nova-tarmac/68011/ | 09:46 |
soren | ttx: Copied python-novaclient to the trunk ppa. | 09:50 |
*** MarcMorata has joined #openstack | 09:53 | |
*** zul_ has joined #openstack | 09:54 | |
ttx | soren: cool. Filed FFe. | 09:55 |
soren | ttx: bug no? | 09:55 |
ttx | bug 732461 | 09:55 |
uvirtbot | Launchpad bug 732461 in ubuntu "[FFe] Please add python-novaclient to universe" [Wishlist,Confirmed] https://launchpad.net/bugs/732461 | 09:55 |
HugoKuo | almost done :> | 10:00 |
HugoKuo | I think my last problem is iptables configuration | 10:01 |
Manikandan1 | when I run nova-compute I got: CRITICAL nova [-] Class get_connection cannot be found | 10:13 |
*** hazmat has quit IRC | 10:17 | |
HugoKuo | log file ? | 10:23 |
*** zul has quit IRC | 10:24 | |
*** zul_ is now known as zul | 10:24 | |
*** zul has joined #openstack | 10:24 | |
*** zul_ has joined #openstack | 10:25 | |
* soren misses vishy | 10:27 | |
soren | AHA! | 10:31 |
*** zul_ has quit IRC | 10:34 | |
*** mRy has joined #openstack | 10:36 | |
*** mRy is now known as Guest24180 | 10:37 | |
*** soren sets mode: -o soren | 10:38 | |
*** Guest92841 has quit IRC | 10:40 | |
HugoKuo | does anyone know which iptables is correct ? http://pastebin.com/2Y1bZ6Uz | 10:41 |
soren | HugoKuo: The former. | 10:46 |
soren | HugoKuo: It's the API server that can actually act as the metadata server. | 10:46 |
HugoKuo | thanks | 10:46 |
uvirtbot | New bug: #732493 in openstack-dashboard "Unable to change the project manager of a project" [Undecided,New] https://launchpad.net/bugs/732493 | 10:47 |
*** burris has quit IRC | 10:54 | |
*** burris has joined #openstack | 10:59 | |
*** miclorb has quit IRC | 11:01 | |
*** kashyap has quit IRC | 11:03 | |
*** reldan has joined #openstack | 11:06 | |
*** fayce has quit IRC | 11:09 | |
HugoKuo | how to del -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.16.2.79:8773 | 11:17 |
HugoKuo | I use iptables -D and it shows me "bad rule" :< | 11:17 |
HugoKuo | every time I restart nova-network, it always writes wrong iptables :< | 11:18 |
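Editor's note: the "bad rule" error here is most likely because the table was omitted. The DNAT rule lives in the nat table's PREROUTING chain, and `iptables -D` defaults to the filter table, which has no such rule. A command sketch using the addresses from the discussion (run as root):

```
# Deletion must name the nat table; a bare "iptables -D PREROUTING ..."
# searches the filter table and fails.
iptables -t nat -D PREROUTING -d 169.254.169.254/32 -p tcp -m tcp \
    --dport 80 -j DNAT --to-destination 172.16.2.79:8773

# Alternatively, delete by rule number:
iptables -t nat -L PREROUTING --line-numbers -n
iptables -t nat -D PREROUTING <rule-number>
```

This only removes the rule for the current boot; as noted below, nova-network rewrites it on restart, so the real fix is in the nova configuration.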
*** kashyap has joined #openstack | 11:21 | |
*** mRy has joined #openstack | 11:37 | |
*** mRy is now known as Guest21335 | 11:37 | |
*** Guest24180 has quit IRC | 11:40 | |
*** h0cin has joined #openstack | 11:54 | |
*** daveiw has left #openstack | 11:58 | |
*** reldan has quit IRC | 12:10 | |
*** ctennis has quit IRC | 12:13 | |
soren | HugoKuo: Wrong how? | 12:16 |
HugoKuo | wait a moment | 12:17 |
Code_Bleu | I was working on nova-volumes and for whatever reason when I went to restart everything, my networking isn't working. Anyone know what this error is? http://paste.openstack.org/show/844/ | 12:23 |
*** colinnich_ has quit IRC | 12:25 | |
*** colinnich has joined #openstack | 12:26 | |
*** mjmac_ is now known as mjmac | 12:32 | |
soren | Code_Bleu: Yes. Not your fault. | 12:33 |
soren | Code_Bleu: It's a bug. I've filed a patch, but it needs to be reviewed first. | 12:33 |
HugoKuo | my network component's iptables are auto-written by nova-network | 12:36 |
*** daveiw has joined #openstack | 12:36 | |
*** ctennis has joined #openstack | 12:36 | |
HugoKuo | it's weird http://pastebin.com/KQ1vaHE5 | 12:36 |
HugoKuo | after nova-network starts it auto-writes my iptables like this | 12:37 |
*** mRy has joined #openstack | 12:37 | |
*** mRy is now known as Guest45309 | 12:37 | |
HugoKuo | and it causes my host to be unable to ping outside, for example "google.com" | 12:37 |
HugoKuo | I know linux_net.py does this; is there anything wrong? | 12:38 |
HugoKuo | soren : | 12:38 |
*** Guest21335 has quit IRC | 12:40 | |
*** Manikandan1 has quit IRC | 12:42 | |
*** photron has quit IRC | 12:42 | |
Code_Bleu | soren: do you have a link to the bug, so i can follow it? | 12:42 |
soren | Code_Bleu: I didn't file a bug. I can, though. | 12:44 |
soren | Code_Bleu: bug 732549 | 12:46 |
uvirtbot | Launchpad bug 732549 in nova "execvp fallout" [Undecided,New] https://launchpad.net/bugs/732549 | 12:46 |
soren | HugoKuo: I don't immediately see how any of those rules would cause your host to not ping the outside. | 12:47 |
soren | At all. | 12:47 |
*** schegi has joined #openstack | 12:52 | |
* soren screams a little | 12:52 | |
HugoKuo | thanks | 12:53 |
HugoKuo | I try to reinstall it | 12:53 |
*** reldan has joined #openstack | 12:54 | |
*** zul has quit IRC | 12:56 | |
HugoKuo | -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.16.2.79:8773 | 13:00 |
HugoKuo | nova-network auto-writes this wrong address | 13:00 |
*** schegi has quit IRC | 13:00 | |
HugoKuo | it should be 172.16.2.78 | 13:01 |
zykes- | the openstack panel | 13:03 |
zykes- | is that the one by nasa ? | 13:03 |
*** zul has joined #openstack | 13:05 | |
uvirtbot | New bug: #732549 in nova "execvp fallout" [Undecided,New] https://launchpad.net/bugs/732549 | 13:07 |
*** dovetaildan has quit IRC | 13:09 | |
soren | HugoKuo: Well, is your cc_host flag configured correctly? | 13:11 |
*** dovetaildan has joined #openstack | 13:11 | |
HugoKuo | cc_host must be assigned to which machine? | 13:11 |
HugoKuo | I assign to CC instead of network | 13:12 |
HugoKuo | I assign to CC instead of nova-network box | 13:12 |
soren | I don't understand what you're saying. | 13:12 |
soren | but cc_host should point to an api server. | 13:12 |
HugoKuo | yes I did... | 13:12 |
HugoKuo | that's weird | 13:13 |
HugoKuo | it sets that iptables rule depending on the gateway IP address | 13:13 |
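Editor's note: the metadata DNAT rule (169.254.169.254:80 forwarded to the API server) is built by linux_net.py from nova's flags, so a wrong `--to-destination` usually traces back to them. A hypothetical nova.conf excerpt for the network node, with the API server address from this discussion:

```ini
# Hypothetical flagfile excerpt for the nova-network box. cc_host should
# point at the API server (soren's advice above), not at the network box
# itself; the 169.254.169.254 DNAT target is derived from these values.
--cc_host=172.16.2.78
--cc_port=8773
```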
*** fayce has joined #openstack | 13:13 | |
*** ppetraki has joined #openstack | 13:22 | |
soren | Man, so many things broke since yesterday, it's not even funny. | 13:32 |
ttx | that was an intense merge day. | 13:32 |
soren | I'm almost done repairing everything. | 13:33 |
soren | This is a crappy timezone. | 13:33 |
czajkowski | soren: where are you now ? | 13:33 |
soren | czajkowski: At home. | 13:34 |
czajkowski | soren: yer very strange | 13:34 |
soren | czajkowski: Indeed. | 13:34 |
czajkowski | you'd think you'd be happy with that timezone | 13:34 |
soren | czajkowski: It's crappy because I'm pretty much alone, so if things break at the end of everybody else's day, they pass out, meaning to fix it the next day, except the next day, I can't get anything done because all this stuff is acting up. | 13:35 |
*** guynaor has joined #openstack | 13:35 | |
*** bcwaldon has joined #openstack | 13:35 | |
czajkowski | soren: short straw, before you log off tonight break everyhing so folks in USa have to fix it | 13:35 |
czajkowski | :) | 13:35 |
czajkowski | someone has gone and broken my email connection, cant send any mail which makes sending invites out extremely hard :s | 13:36 |
*** justinsb has quit IRC | 13:36 | |
uvirtbot | New bug: #732570 in nova "Meta-data service broken if ramdisk or kernel is missing (which is valid!)" [Undecided,New] https://launchpad.net/bugs/732570 | 13:36 |
soren | czajkowski: I thought e-mail was dead. Replaced by facebook and twitter. | 13:36 |
czajkowski | soren: not for clients, when I want to invite them to a BBQ | 13:36 |
*** mRy has joined #openstack | 13:37 | |
*** mRy is now known as Guest63226 | 13:37 | |
*** Guest45309 has quit IRC | 13:40 | |
soren | czajkowski: Ah. No. | 13:44 |
*** guigui3 has joined #openstack | 13:44 | |
soren | It looks like my iptables manager thingamajig will be merged shortly. | 13:46 |
*** guigui has quit IRC | 13:46 | |
czajkowski | yay for thingamjiggys | 13:46 |
*** ctennis has quit IRC | 13:46 | |
*** gasbakid has quit IRC | 13:48 | |
*** guigui3 has quit IRC | 13:49 | |
*** ctennis has joined #openstack | 13:49 | |
*** bcwaldon has quit IRC | 13:49 | |
*** pvo has joined #openstack | 13:52 | |
*** justinsb has joined #openstack | 13:52 | |
*** dprince has joined #openstack | 13:54 | |
fayce | hey guys, we are trying a 3-box config ("Jack": nova-compute+nova-api+nova-objectstore, "Absolut": nova-network+nova-scheduler, "Dewars": nova-compute)... do you have suggestions regarding the network topology for that? | 13:57 |
*** matclayton has joined #openstack | 13:58 | |
fayce | should I put Dewars on the same private network as Absolut (NIC1), and have Absolut connected to the corp network (NIC2) , and have Jack in the corp network | 13:59 |
fayce | ? | 13:59 |
fayce | or should I just all connect them to the same network ? | 13:59 |
*** mustfeed has joined #openstack | 14:00 | |
dweimer | The swift deployment guide mentions that RAID 5 or 6 configurations are not recommended for performance reasons. Is there a document that goes into more specifics as to why? | 14:01 |
*** bcwaldon has joined #openstack | 14:01 | |
*** HugoKuo has quit IRC | 14:01 | |
*** hadrian has quit IRC | 14:02 | |
creiht | dweimer: The swift use case is the pathological worst case for parity-based RAID | 14:02 |
creiht | random small reads and writes, with emphasis on writes | 14:03 |
*** pvo has quit IRC | 14:04 | |
creiht | In our testing with a 24 drive RAID 5/6, performance quickly dropped to below the performance of a single drive | 14:04 |
creiht | plus add 2+ week rebuilds when a drive goes bad | 14:04 |
creiht | it was a non-starter for us | 14:05 |
creiht | so swift stores 3 replicas, and works around failure | 14:05 |
creiht | no need for extra redundancy | 14:05 |
creiht | embrace it :) | 14:05 |
creiht | If you absolutely need the extra redundancy, then something like raid 10 would be fine | 14:06 |
dweimer | creiht: The raid 10 option could work. That would theoretically improve upload speeds as well, right? | 14:07 |
creiht | dweimer: if you have the network for it, then possibly | 14:07 |
creiht | though with raid 10, you are using a lot of disk for each object (effectively 6 replicas) | 14:08 |
dweimer | creiht: If we raid 10, could we reduce the application-level replicas to 2, or would that not be recommended? | 14:09 |
creiht | swift isn't optimized for the single stream, but for horizontal scalability | 14:09 |
*** guigui has joined #openstack | 14:09 | |
creiht | dweimer: replica counts other than 3 *should* work, but haven't been thoroughly tested | 14:09 |
creiht | but I wouldn't recommend it :) | 14:10 |
creiht | while on disk you still have the extra replicas, you have only 2 nodes with data | 14:10 |
creiht | so in failure scenarios, you still have data durability, but you will not have the same availability | 14:11 |
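Editor's note: creiht's "effectively 6 replicas" point is simple arithmetic worth making explicit: swift's object replicas multiply with the RAID level's own copy count. A quick sketch (numbers from the discussion):

```python
# Raw disk bytes consumed per stored byte: swift replica count times the
# number of physical copies the underlying RAID level keeps (RAID 10
# mirrors, so 2; plain drives, 1).
def raw_overhead(swift_replicas, raid_copies):
    return swift_replicas * raid_copies

print(raw_overhead(3, 1))  # plain drives, 3 replicas -> 3
print(raw_overhead(3, 2))  # RAID 10 under 3 replicas -> 6 ("effectively 6 replicas")
print(raw_overhead(2, 2))  # 2 replicas on RAID 10 -> 4 copies on disk,
                           # but only 2 nodes hold the data, which is the
                           # availability trade-off creiht describes
```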
soren | sirp_: Around, by any chance? | 14:12 |
*** hggdh has quit IRC | 14:13 | |
*** pvo has joined #openstack | 14:13 | |
*** hggdh has joined #openstack | 14:13 | |
dweimer | Thanks, that helps. I'll give the raid 10 a test on the performance side, but 6 replicas is likely beyond our budget. We're looking at replacing our archive system and still have a few users who need fast writes. | 14:13 |
creiht | How fast? | 14:14 |
dweimer | Nothing blazing, 70-80MB/s. We don't have money for anything but SATA drives though. | 14:15 |
creiht | Have you tested throughput with swift as is yet? | 14:16 |
dweimer | My current tests are in the 30MB/s range. I haven't spent any time optimizing beyond the multi-node deployment guide though. | 14:17 |
creiht | k | 14:17 |
creiht | dweimer: have you tried the latest trunk? | 14:17 |
dweimer | creiht: Nope, we're running bexar right now. | 14:17 |
creiht | It has a patch that tries to better parallelize the uploads | 14:17 |
creiht | it would be interesting to see if it helps | 14:18 |
*** HugoKuo has joined #openstack | 14:18 | |
dweimer | Alright, I'll give it a shot. | 14:18 |
dweimer | I need to finish getting our configuration management stuff setup first though. Pushing out updates to the nodes manually is frustrating. | 14:19 |
soren | vishy, xtoddx: Any particular reason why none of you set the iptables branch to approved? | 14:20 |
* creiht starts getting ready to head to pycon | 14:21 | |
mustfeed | hi can someone help me ? nova-network seems to have problems finding / accessing the database. | 14:21 |
HugoKuo | all services will start up after the OS boots; is there any way to disable them forever? | 14:26 |
soren | HugoKuo: Destroy the machine. | 14:26 |
HugoKuo | :> | 14:26 |
HugoKuo | that's a perfect way | 14:27 |
soren | "forever" is a long time. | 14:27 |
soren | It's the only way I can think of to prevent someone from accidentally enabling them again. Even. | 14:27 |
soren | s/Even/Ever/ | 14:27 |
HugoKuo | sorry I just want to stop network and scheduler service on CC | 14:27 |
soren | What's CC? | 14:28 |
HugoKuo | API + SQL+objetstore | 14:28 |
soren | Just remove the network and scheduler packages. | 14:28 |
HugoKuo | !!!! | 14:28 |
openstack | HugoKuo: Error: "!!!" is not a valid command. | 14:28 |
HugoKuo | ok! | 14:28 |
*** fayce has quit IRC | 14:32 | |
jaypipes | jk0: heh, I thought yesterday was my review day, but it was yours... :) | 14:36 |
ttx | jaypipes: that resulted in a good day for merges, we should do that every day. | 14:37 |
jaypipes | jk0: spent all day doing reviews and my review day is *next* Wednesday, not yesterday :) | 14:37 |
jaypipes | ttx: ha | 14:37 |
*** mRy has joined #openstack | 14:37 | |
jaypipes | _0x44: around? just a heads up, it's your review day today: http://wiki.openstack.org/Nova/ReviewDays | 14:37 |
*** mRy is now known as Guest11069 | 14:37 | |
soren | jaypipes: He gets an e-mail, actually. | 14:37 |
jaypipes | soren: I didn't. | 14:37 |
soren | jaypipes: I failed to document that. | 14:37 |
soren | jaypipes: No... | 14:37 |
soren | jaypipes: Because your review day is *next week* | 14:38 |
jaypipes | soren: is _0x44 special? :P | 14:38 |
westmaas_ | lol | 14:38 |
jaypipes | soren: oh, dug. | 14:38 |
jaypipes | duhg. | 14:38 |
jaypipes | gah! | 14:38 |
jaypipes | duh. | 14:38 |
ttx | jaypipes: pwnd | 14:38 |
jaypipes | soren: poop on your logical reason. | 14:38 |
jaypipes | soren: BTW, are you going to bring your debugging hat to the summit. debugging hat++ | 14:39 |
mustfeed | nova-network seems to have problems finding / accessing the database... would be pleased if someone could help me on this | 14:39 |
soren | Yeah, I have a cron job that sends an e-mail. If it can't resolve the name on the schedule to an e-mail, it e-mails me instead letting me know that something's up. Being in this timezone, I can probably fix things up before most other people wake up. | 14:39 |
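Editor's note: the fallback soren describes (mail the scheduled reviewer, or mail him if the name can't be resolved) can be sketched like this. Names, addresses, and the lookup table are entirely hypothetical; the real job reads the schedule from the wiki page linked above.

```python
# Resolve today's review-day reviewer to an email address; if the name
# on the schedule has no known address, fall back to the maintainer so
# "something's up" doesn't go unnoticed.
ADDRESS_BOOK = {"jaypipes": "jay@example.org", "_0x44": "c44@example.org"}
MAINTAINER = "soren@example.org"

def reminder_recipient(reviewer):
    """Return (address, is_fallback) for the review-day reminder mail."""
    addr = ADDRESS_BOOK.get(reviewer)
    if addr is None:
        return MAINTAINER, True
    return addr, False

print(reminder_recipient("jaypipes"))  # ('jay@example.org', False)
print(reminder_recipient("unknown"))   # ('soren@example.org', True)
```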
ttx | we could do a challenge of the best hat. | 14:39 |
jaypipes | mustfeed: got an error? | 14:40 |
*** mrsrikanth has joined #openstack | 14:40 | |
soren | jaypipes: It folds easily, so yes. I just don't think it'll work as well. | 14:40 |
*** jaypipes has left #openstack | 14:40 | |
*** jaypipes has joined #openstack | 14:40 | |
soren | jaypipes: When I used to be in an office, people knew that if I had the hat on, it had better be important before they bothered me. At an event like our summits, I'm sure I'd be bothered *more* if I wore it. | 14:41 |
*** Guest63226 has quit IRC | 14:41 | |
jaypipes | soren: why? doesn't work in PST timezone? ;) | 14:41 |
soren | It's never been tested in that environment, actually. | 14:41 |
soren | Nor PDT (which I suppose it will be at the time of the summit). | 14:42 |
soren | Or is it PDST? | 14:42 |
jaypipes | sandywalsh: zones2 heading off to the Tarmac pit. | 14:42 |
jaypipes | soren: not sure :) | 14:42 |
soren | Nope, PDT seems to be accurte. | 14:42 |
soren | accurate, even. | 14:43 |
jaypipes | soren: prolly PDT | 14:43 |
*** pvo has quit IRC | 14:43 | |
*** mustfeed has quit IRC | 14:43 | |
jaypipes | soren, ttx, sirp_: if nobody has any objections, I'd like to push live-migrations to Tarmac as well. https://code.launchpad.net/~nttdata/nova/live-migration/+merge/49699 | 14:44 |
soren | jaypipes: I suppose. | 14:44 |
soren | jaypipes: I wish trunk worked before we put more new stuff in it, though. | 14:45 |
*** DigitalFlux has quit IRC | 14:45 | |
jaypipes | soren: what is broken in trunk? | 14:45 |
soren | jaypipes: That said, I think all of the breakage is known and has bmp's, so.. | 14:45 |
ttx | jaypipes: most of it. | 14:45 |
soren | jaypipes: A bunch of stuff. | 14:45 |
soren | jaypipes: Let me find links. | 14:45 |
soren | https://bugs.launchpad.net/nova/+bug/732570 | 14:45 |
uvirtbot | Launchpad bug 732570 in nova "Meta-data service broken if ramdisk or kernel is missing (which is valid!)" [High,In progress] | 14:45 |
ttx | soren: see my comment on execvp-fallout | 14:45 |
soren | https://bugs.launchpad.net/nova/+bug/732549 | 14:46 |
uvirtbot | Launchpad bug 732549 in nova "execvp fallout" [Critical,In progress] | 14:46 |
soren | ttx: Oh, good catch. | 14:46 |
ttx | soren: coming from a dupe | 14:47 |
jaypipes | ttx, soren: sorry, how is the above "most of trunk is broken"? | 14:47 |
ttx | jaypipes: I guess that's for some definition of "most". Like you can't run instances from it. | 14:47 |
HugoKuo | crazy ing XD | 14:47 |
soren | jaypipes: Few things are broken, but they're rather integral and you can't really do anything useful with trunk right now. | 14:48 |
*** f4m8 is now known as f4m8_ | 14:48 | |
dprince | You can't. I haven't been able to spawn an instance since the execvp changes. | 14:48 |
jaypipes | ttx, soren: sure. this is why we need REAL functional tests.. | 14:48 |
ttx | jaypipes: you don't need to convince me of that. | 14:48 |
jaypipes | ttx, soren: and we need them linked to hudson. now. | 14:48 |
soren | jaypipes: I agree. | 14:48 |
soren | jaypipes: That's what I've been working towards. | 14:48 |
ttx | jaypipes: yesterday would have been better, even :) | 14:48 |
jaypipes | ttx: I was only working with what I had to work with. | 14:49 |
dprince | ttx: If you want to close my bug as a dup and fix everything inside of Soren's execvp branch I'm fine with that. | 14:49 |
soren | jaypipes: It's just that until my iptables branch lands, I can't test a bloody thing because everything is unstable. | 14:49 |
*** CloudChris has left #openstack | 14:49 | |
ttx | dprince: which one ? | 14:49 |
jaypipes | soren: can we push the execvp branch through to fix a lot of it? | 14:49 |
*** JordanRinke_ has quit IRC | 14:49 | |
soren | jaypipes: You can start by approving it. | 14:49 |
dprince | ttx: 732403 | 14:49 |
ttx | jaypipes: it wasn't meant as a criticism of your review. Such breakage will continue to occur until we have those tests in. | 14:50 |
soren | jaypipes: and if we're feeling particularly ninja about it, we can approve it and ask forgiveness if it breaks. | 14:50 |
soren | i.e. breaks more than it's already broken. | 14:50 |
soren | jaypipes: As soon as my iptables branch lands, I can actually start to test things and trust that if it fails, it's not because of a race somewhere, but actually because something new is broken. | 14:51 |
ttx | dprince: oh, you proposed it already. I missed that | 14:51 |
ttx | soren: dprince has a branch for the "up" thing, so you don't *need to* fix that in your branch | 14:51 |
soren | I already did. | 14:52 |
ttx | you can alternatively approve his branch. | 14:52 |
jaypipes | soren, ttx: ninja'd it. | 14:52 |
ttx | ah. | 14:52 |
soren | jaypipes: *hugs* | 14:52 |
dprince | ttx: Yes. I'm hitting a slew of issues in the linux_net code ever since the execvp. | 14:52 |
ttx | soren: reject https://code.launchpad.net/~dan-prince/nova/nova_fix_bridge_interface_create/+merge/52839 | 14:52 |
jaypipes | soren: and the metadata service one? | 14:52 |
ttx | soren: I'll handle the bug | 14:52 |
soren | My tests will still fail, because they use the Ubuntu images which don't have a ramdisk, which no longer works. | 14:52 |
dprince | That 'up' issue seems to be the tip of the iceberg. | 14:52 |
jaypipes | dprince: 15 minutes. re-pull trunk... | 14:53 |
openstackhudson | Project nova build #620: SUCCESS in 1 min 55 sec: http://hudson.openstack.org/job/nova/620/ | 14:53 |
openstackhudson | Tarmac: Introduces the ZoneManager to the Scheduler which polls the child zones and caches their availability and capabilities. | 14:53 |
jaypipes | dprince: the fix is off to the tarmac pit. | 14:53 |
soren | dprince: I believe I've fixed everything else. | 14:53 |
dprince | Cool. Thanks. | 14:53 |
soren | I'm happily running instances, at least. | 14:53 |
*** dendro-afk is now known as dendrobates | 14:54 | |
soren | ttx: Done, by the way. | 14:55 |
*** guynaor has quit IRC | 14:55 | |
*** bkkrw has quit IRC | 14:55 | |
*** guigui has quit IRC | 14:55 | |
soren | I wonder how long it'll be before I get sufficiently annoyed that the Twisted help test case fails due to locale issues. | 14:56 |
*** scaleup has joined #openstack | 14:56 | |
ttx | jaypipes: about live-migration, maybe we should give termie a chance to approve it -- though it now has the required approvals so there is no real reason to. | 14:57 |
*** scaleup has quit IRC | 14:57 | |
_0x44 | jaypipes: I am around, but not yet up. I'm in PST right now | 14:58 |
*** gunga has joined #openstack | 15:00 | |
jaypipes | _0x44: ah, welcome back :) | 15:00 |
jaypipes | ttx: ya, sure, I pinged termie above about it... no worries. | 15:00 |
_0x44 | jaypipes: Grazie :) | 15:01 |
*** bkkrw has joined #openstack | 15:02 | |
openstackhudson | Project nova build #621: SUCCESS in 2 min 4 sec: http://hudson.openstack.org/job/nova/621/ | 15:03 |
openstackhudson | Tarmac: Fix a few things that were either missed in the execvp conversion or stuff that was merged after it, but wasn't updated accordingly. | 15:03 |
*** Tino has joined #openstack | 15:04 | |
Tino | Hi there | 15:04 |
*** pvo has joined #openstack | 15:06 | |
Tino | I need help with openstack... | 15:07 |
Tino | I can install and run nova und some node on a virtual box on my macbook but not on a real server | 15:07 |
*** ianweller has joined #openstack | 15:07 | |
*** ianweller is now known as Guest17534 | 15:08 | |
Tino | ubuntu 10.04 is installed on both systems and I used the nova.sh script and the instructions from here ... https://github.com/vishvananda/novascript | 15:09 |
Tino | on the real server I cannot start an instance.. "unknown error occurred" | 15:11 |
*** Guest17534 is now known as ianweller | 15:12 | |
*** ianweller has joined #openstack | 15:12 | |
soren | Tino: There's usually some useful information in the log files.. | 15:15 |
vishy | soren: there was only one core approve before todd. I don't know why todd didn't mark it | 15:15 |
soren | vishy: It's cool, I guess. It wouldn't have worked (due to the execvp branch) | 15:15 |
soren | vishy: ..but if someone would approve it now, that would by all sorts of awesome. | 15:16 |
soren | I just didn't feel like doing it myself since there might be some reason why you guys didn't. | 15:16 |
*** ctennis has quit IRC | 15:16 | |
*** Ryan_Lane has quit IRC | 15:18 | |
vishy | soren: sounds good... I noticed a couple of bugs in disk.resize | 15:21 |
jaypipes | dprince, soren: k, execvp in trunk. | 15:22 |
vishy | 1) there is a typo in it from execvp | 15:22 |
soren | vishy: I fixed that. | 15:22 |
soren | 15:03 < openstackhudson> Project nova build #621: SUCCESS in 2 min 4 sec: http://hudson.openstack.org/job/nova/621/ | 15:22 |
soren | 15:03 < openstackhudson> Tarmac: Fix a few things that were either missed in the execvp conversion or stuff that was merged after it, but wasn't updated accordingly. | 15:22 |
soren | vishy: Try to keep up :) | 15:22 |
soren | At least I think I did. | 15:22 |
vishy | soren: cool, the other thing is that it doesn't seem to resize properly | 15:23 |
Tino | +soren there is nothing in the logs | 15:23 |
*** Ryan_Lane has joined #openstack | 15:23 | |
vishy | haven't been able to debug it yet, because I'm doing training | 15:23 |
soren | vishy: Let me just make sure that I actually did fix that resize execvp thing... | 15:24 |
vishy | soren: no worries, I won't have time to do anything until sunday | 15:24 |
*** matclayton has quit IRC | 15:24 | |
soren | Erk. | 15:24 |
soren | No. | 15:24 |
vishy | :( | 15:24 |
soren | Pushed to my lp:~soren/nova/execvp-fallout/ branch | 15:25 |
Tino | +soren is there any special logfile for nova which is not at /var/log | 15:26 |
* vishy heads away :( | 15:26 | |
soren | Tino: They're in /var/log/nova, usually. | 15:26 |
Tino | damn.. there is no /var/log/nova | 15:27 |
annegentle | what does state_path do, exactly? "Maintaining nova's state" - starting/stopping all services? | 15:27 |
soren | It's a directory that holds some state. | 15:27 |
*** mgoldmann has quit IRC | 15:28 | |
soren | Like instances data, network data, CA data... Stuff. | 15:28 |
* annegentle needs to understand "state" | 15:28 | |
soren | annegentle: ^ | 15:28 |
soren | annegentle: All the stuff we need to remember, but isn't in the database. | 15:28 |
soren | annegentle: Disk images, for instance. | 15:28 |
annegentle | soren: ah, ok. do other services store "state" in one location typically? Like is this a best practice? | 15:29 |
soren | annegentle: Yes. | 15:29 |
annegentle | soren: ok, thanks a bunch | 15:29 |
soren | annegentle: Sure thing. | 15:29 |
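[Editor's note: soren's description of state_path corresponds to a flag in nova's flagfile; a minimal sketch, with paths and the sql_connection value being illustrative rather than taken from this log:]

```
# /etc/nova/nova.conf -- illustrative flagfile, not from the log
--state_path=/var/lib/nova
--logdir=/var/log/nova
--sql_connection=mysql://nova:secret@localhost/nova
```

Everything under state_path (instance disks, network data, CA data) is the on-disk state soren refers to, as distinct from what lives in the database.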
*** dragondm has joined #openstack | 15:31 | |
soren | jaypipes: Can you apply your ninja skills to this one, too? https://code.launchpad.net/~soren/nova/execvp-fallout/+merge/52870 | 15:32 |
*** hazmat has joined #openstack | 15:33 | |
*** rchavik has quit IRC | 15:34 | |
*** rnirmal has joined #openstack | 15:34 | |
soren | Perhaps we should have a standing exception for the two-core-dev-approvals for small fixes for regressions. | 15:37 |
*** mRy has joined #openstack | 15:37 | |
soren | These are pretty obvious, simple regressions. | 15:37 |
*** ddumitriu has joined #openstack | 15:37 | |
*** mRy is now known as Guest69500 | 15:37 | |
*** msassak has joined #openstack | 15:37 | |
Tino | +soren I couldn't find any logs except the normal logs from Ubuntu. /var/log/nova does not exist. I have checked all screens | 15:38 |
soren | "all screens"? | 15:38 |
soren | What does that mean? | 15:38 |
jaypipes | soren: done. | 15:39 |
Tino | one for Ubuntu, and when I start nova (like in the script I used) another screen opens to start the instances | 15:39 |
*** zenmatt has joined #openstack | 15:40 | |
soren | jaypipes: ta very much. | 15:40 |
*** Guest11069 has quit IRC | 15:40 | |
*** guigui has joined #openstack | 15:41 | |
*** hadrian has joined #openstack | 15:44 | |
jk0 | jaypipes: and here I thought you were just helping a guy out ;) | 15:44 |
Tino | +soren do you know any other working, complete installation manual? the wiki and docs from openstack don't work. only the nova.sh script worked https://github.com/vishvananda/novascript but only in a VirtualBox VM, not on a real server... | 15:46 |
annegentle | Tino: there are many install methods, what is your purpose in installing? | 15:47 |
*** MarkAtwood has quit IRC | 15:47 | |
annegentle | Tino: see http://docs.openstack.org/openstack-compute/admin/content/ch03s02.html for a list of methods | 15:47 |
*** MarkAtwood has joined #openstack | 15:47 | |
openstackhudson | Project nova build #622: SUCCESS in 1 min 53 sec: http://hudson.openstack.org/job/nova/622/ | 15:48 |
openstackhudson | Tarmac: Another little bit of fallout from the execvp branch. | 15:48 |
annegentle | Tino: it's possible you're installing trunk and trunk itself is not working. We've had a lot of code merged in the last day or so. | 15:48 |
sandywalsh | jaypipes, ttx thanks for getting zones2 through ... that's a huge relief! | 15:49 |
ttx | I hope zones3 will be a much smoother ride :) | 15:49 |
jaypipes | sandywalsh: now on to zones3 :) | 15:49 |
annegentle | Tino: an alternative package installation is ppa:nova-core/release rather than ppa:nova-core/rtrunk | 15:49 |
Tino | over the last 2 weeks I tried all of the methods there... but nothing worked.. sometimes an instance could not start, sometimes an error occurred before the start... so I tried the script I found in a video | 15:49 |
sandywalsh | jaypipes, exactly! | 15:49 |
annegentle | Tino: sub trunk for rtrunk I mean | 15:49 |
annegentle | Tino: might be a problem with your instance, might be a problem with networking, what troubleshooting have you tried? | 15:50 |
Tino | ah, I tried the script shown in the video on YouTube.. where you showed how to do it easily.. | 15:50 |
* soren twiddles thumbs, waiting for a race condition to go "the other way" | 15:51 | |
Tino | i just tried to find any log files without success | 15:51 |
soren | How would you phrase that? I can't say "waiting for a race condition to happen", since the race condition is always there. It just usually ends well, but occasionally it works out differently. | 15:52 |
soren | Tino: Well, if you're using the nova.sh script, your logs are in the various screens. | 15:52 |
annegentle | Tino: right - which screen were you on when you tried to look in /var/log/nova? | 15:53 |
Tino | I tried both.. I only have 2 | 15:53 |
soren | /var/log/nova will always show the same. | 15:53 |
*** lvaughn_ has quit IRC | 15:54 | |
soren | I mean... It won't show you different stuff depending on the screen you're looking at. It's a filesystem. It's orthogonal to the tty to which you're connected. | 15:54 |
Tino | but there is no /var/log/nova | 15:54 |
soren | Ok. I'm just saying that if there's no /var/log/nova, there won't be a /var/log/nova either if you look at it from another screen. | 15:55 |
Tino | yepp i know | 15:55 |
annegentle | Tino: well, when I have installed with nova.sh I have something like 5-7 screens, which are missing? | 15:56 |
Tino | i got more screens starting the instances... but after nova.sh run i only have 2 | 15:56 |
annegentle | Tino: ok I'm going to run nova.sh and see what I get | 15:57 |
Tino | that's great.. thanks | 15:57 |
*** nidO has joined #openstack | 16:03 | |
dprince | sandywalsh: I'm getting a Class SchedulerManager cannot be found exception. | 16:07 |
dprince | sandywalsh: Any chance that is related to 782? | 16:08 |
dprince | I've been unable to create instances for a couple days now. | 16:08 |
dprince | :( | 16:08 |
sandywalsh | dprince, did you just pull trunk? | 16:08 |
dprince | sandywalsh: Yep. Using PPA packages just got this. | 16:09 |
*** mahadev has joined #openstack | 16:09 | |
sandywalsh | dprince, hmm, looking | 16:09 |
sandywalsh | dprince, can you paste trace? | 16:09 |
dprince | sandywalsh: I'm using packages. So is there perhaps a missing dependency or something? | 16:10 |
dprince | novaclient? | 16:10 |
*** guigui has quit IRC | 16:10 | |
sandywalsh | dprince, shouldn't be for SchedulerManager | 16:11 |
*** lvaughn has joined #openstack | 16:11 | |
sandywalsh | dprince, I assume you have novaclient installed? | 16:11 |
dprince | sandywalsh: http://paste.openstack.org/show/846/ | 16:11 |
dprince | sandywalsh: There isn't a package for 'novaclient' is there? | 16:12 |
dprince | sandywalsh: package i.e. Deb, etc. | 16:13 |
soren | there is. | 16:13 |
soren | python-novaclient or something. | 16:13 |
dprince | soren: roger that | 16:14 |
*** mahadev has quit IRC | 16:14 | |
JordanRi1ke | word | 16:14 |
sandywalsh | dprince, there should be now. ttx added it to ppa yesterday | 16:14 |
sandywalsh | dprince, Exception: No module named novaclient | 16:14 |
dprince | Yep. That did the trick. Can we add a dependency to nova-network so that it installs python-novaclient? | 16:15 |
dprince | Should I create a ticket for that or is it something you just take care of? | 16:15 |
dprince | Where is the source for the Deb packages BTW? In launchpad? | 16:15 |
soren | Yes. | 16:15 |
dprince | soren: Sorry. Yes to the ticket? (I asked too many questions) | 16:16 |
JordanRi1ke | anyone know how to set the logging level of swauth-add-user ? | 16:16 |
soren | dprince: Count to 10. | 16:16 |
JordanRi1ke | I run it and it just hangs, debug output could be nice | 16:16 |
*** bkkrw has quit IRC | 16:17 | |
soren | dprince: There. Added. | 16:17 |
dprince | Sweet. Thank you sir. | 16:18 |
soren | It'll be included next time we build a package. | 16:18 |
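[Editor's note: the dependency fix soren added amounts to a one-line change in the nova packaging; a sketch of the relevant debian/control stanza, with field values illustrative rather than the actual diff:]

```
Package: nova-network
Architecture: all
Depends: nova-common (= ${binary:Version}), python-novaclient,
         ${python:Depends}, ${misc:Depends}
```

With python-novaclient in Depends, installing nova-network from the PPA pulls it in automatically, avoiding the "No module named novaclient" failure dprince hit.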
Tino | annegentle could you find anything? | 16:19 |
*** johnpur has joined #openstack | 16:19 | |
*** ChanServ sets mode: +v johnpur | 16:19 | |
annegentle | Tino: still running ./nova.sh install... | 16:19 |
Tino | ah ok | 16:20 |
*** mahadev has joined #openstack | 16:20 | |
annegentle | Tino: it's interesting that it's picking up an image now, it didn't used to do that. Convenient. | 16:20 |
*** naehring has quit IRC | 16:20 | |
dprince | Hmmm. Still getting some errors after the execvp changes: | 16:21 |
dprince | http://paste.openstack.org/show/847/ | 16:21 |
jaypipes | soren: were you planning on putting a test case on this: https://code.launchpad.net/~soren/nova/dnsmasq-leasesfile-init/+merge/52421 as sirp_ suggested? Or is it a case of the whole thing is stubbed out in the unit tests anyway, so no point. | 16:21 |
annegentle | Tino: ok, taking a VirtualBox snapshot prior to running, standby... | 16:21 |
Tino | ok | 16:22 |
dprince | I can chase this. Just wanted to put it out there in case anybody else is having similar issues. | 16:23 |
soren | jaypipes: I guess I could. It'll be a couple of hours, though. | 16:23 |
*** cber2001 has joined #openstack | 16:24 | |
jaypipes | mark: is this Mark Washenberger? | 16:25 |
jaypipes | soren: up to you. | 16:25 |
*** ramkrsna has quit IRC | 16:26 | |
*** markwash has joined #openstack | 16:28 | |
*** dprince has quit IRC | 16:28 | |
bcwaldon | jaypipes: markwash == Mark Washenberger | 16:28 |
annegentle | Tino: ok, I have 9 screens running, and I've checked for /var/log/nova in 2 of them (test and nova) | 16:28 |
annegentle | Tino: and didn't find it yet | 16:28 |
annegentle | Tino: but in the test screen (number 8) I can run nova-manage service list | 16:29 |
jaypipes | bcwaldon: ah, k :) | 16:29 |
Tino | hmm okay.. I will check my installation again | 16:29 |
annegentle | Tino: heh, that's 8 | 16:29 |
jaypipes | bcwaldon: on this MP: https://code.launchpad.net/~markwash/nova/lp727225/+merge/52754, should nova-core be reviewing it? Mark's set the reviewer to "Titan" and I don't know who that is? | 16:29 |
annegentle | Tino: and nova-manage service list tells me that nova-network is running, nova-compute is running, but there's a DEBUG message from sql-alchemy | 16:29 |
markwash | jaypipes: I wasn't sure what to do with that field | 16:30 |
bcwaldon | jaypipes: Titan is a Rackspace team (of which I am a member) | 16:30 |
markwash | jaypipes: initially I tried to set up multiple requestors | 16:30 |
markwash | jaypipes: but I might have mixed things up | 16:30 |
annegentle | Tino: so, it looks like trunk isn't quite running today, the nice thing with nova.sh is that you can do ./nova.sh branch lp:~username/nova/branchname | 16:30 |
annegentle | Tino: which gives you the option of installing another branch | 16:31 |
jaypipes | markwash: oh, hi :) | 16:31 |
Tino | ok.. i will try another branch... release instead of trunk? | 16:31 |
jaypipes | markwash: no worries :) Just do a Request another Review, type in nova-core, and select the top group there.. | 16:31 |
markwash | jaypipes: thanks, will do | 16:31 |
annegentle | Tino: ok, now I'm trying to remember how to get the release branch :) | 16:31 |
bcwaldon | jaypipes: should we always add nova-core to merge props? | 16:32 |
jaypipes | markwash: looks like a great patch, so I didn't want it to get lost in the backlog... | 16:32 |
jaypipes | bcwaldon: it's the default I believe if you don't select anything. | 16:32 |
*** mustfg has joined #openstack | 16:32 | |
Tino | in the .sh i can change from /trunk to /release | 16:32 |
bcwaldon | jaypipes: ok, thanks for the info | 16:32 |
jaypipes | bcwaldon: but it's up to you. If you want to do a team review before proposing to lp:nova, you can do that too... | 16:32 |
mustfg | can someone help me with the mysql database ? | 16:32 |
mustfg | with nova compute | 16:33 |
jaypipes | mustfg: error? | 16:33 |
markwash | jaypipes: should I also do a bzr commit --fixes ... ? | 16:33 |
annegentle | Tino: uh, that's for packages, this is for development branches housed in Launchpad. | 16:33 |
Tino | hmm | 16:33 |
jaypipes | markwash: yup, that would allow LP to auto-link the commit with the bug, and mark the bug Fix Committed when your patch lands in trunk. | 16:33 |
annegentle | Tino: hang on, I'll find it - or does anyone know if off the top of their head? | 16:33 |
annegentle | Tino: ah, here you go. lp:nova/bexar | 16:34 |
jaypipes | markwash: --fixes=lp:727225 | 16:34 |
*** kashyap has quit IRC | 16:34 | |
annegentle | Tino: that's the latest release (Feb 3, 2011) | 16:34 |
Tino | ok I will try ./nova.sh branch lp:nova/bexar | 16:35 |
annegentle | Tino: Great! It really is the easiest way to try out nova... and get your feet wet, so to speak. | 16:36 |
*** mRy has joined #openstack | 16:37 | |
Tino | now I have a strange problem with the screens... when I use ./nova.sh run I get some screens (detached and one attached) but it will not start.. I need to reboot I think | 16:37 |
*** mRy is now known as Guest23691 | 16:37 | |
jaypipes | annegentle: icy wet. :) | 16:37 |
nidO | Hi, would it be possible to get a little help with the setup of swift please - I've got a 4-node cluster on VMs up and running (1x proxy/auth, 3x storage), the rings are set up and balanced, I've been able to successfully set up a user, I can correctly auth and stat the user, but trying to upload anything or post a container is failing | 16:39 |
*** ddumitriu has quit IRC | 16:41 | |
*** Guest69500 has quit IRC | 16:41 | |
*** neuro_damage has joined #openstack | 16:42 | |
neuro_damage | jaypipes: sup mang | 16:42 |
neuro_damage | jaypipes: I see we're working on new exciting things ;) | 16:43 |
jaypipes | neuro_damage: hey :) how are ya? | 16:43 |
*** maple_bed has joined #openstack | 16:43 | |
neuro_damage | jaypipes: pretty good man, about to hit up work which is a little commute, I'll hit you up when I get in | 16:44 |
neuro_damage | I was checking out SWIFT | 16:44 |
*** ctennis has joined #openstack | 16:44 | |
neuro_damage | saw your comments in the documentation | 16:44 |
jaypipes | neuro_damage: heh :) | 16:44 |
jaypipes | neuro_damage: that's just about my only contribution to swift ;) | 16:45 |
neuro_damage | jaypipes: you're everywhere man, everywhere! omnipotent keeper of all cloud source code ;) | 16:45 |
neuro_damage | jaypipes: haha, yeah I believe it was written just before you got there, then it seems they eventually opensourced it haha | 16:45 |
*** ctennis has quit IRC | 16:47 | |
*** ctennis has joined #openstack | 16:47 | |
*** maplebed has quit IRC | 16:47 | |
*** dendrobates is now known as dendro-afk | 16:47 | |
*** dprince has joined #openstack | 16:49 | |
jaypipes | soren: hey, could you check out http://paste.openstack.org/show/848/? It *seems* that nova.utils is hitting the datastore before the configuration (which has the FLAGS.sql_connection in it) is even read? | 16:50 |
jaypipes | soren: this is mustfg's log file, btw. | 16:50 |
*** kashyap has joined #openstack | 16:51 | |
*** enigma has joined #openstack | 16:55 | |
*** eikke has quit IRC | 17:01 | |
*** sandywalsh has quit IRC | 17:02 | |
*** mustfg has quit IRC | 17:03 | |
openstackhudson | Project nova build #623: SUCCESS in 1 min 57 sec: http://hudson.openstack.org/job/nova/623/ | 17:03 |
openstackhudson | Tarmac: initializing instance power state on launch to 0 (fixes EC2 API bug) | 17:03 |
*** sandywalsh has joined #openstack | 17:03 | |
*** mustfg has joined #openstack | 17:04 | |
*** skiold has quit IRC | 17:05 | |
Tino | annegentle thanks for the tip with the new branch... now it works fine... but... the new instances are not coming up... just pending | 17:09 |
Tino | ah, I found three novas running.. I think this will not work... damn it... I'll kill them now and start a new one | 17:11 |
*** mustfg has quit IRC | 17:12 | |
*** ewindisch_ has joined #openstack | 17:14 | |
ewindisch_ | hi | 17:14 |
ewindisch_ | I hear that I'm causing trouble | 17:14 |
_0x44 | ewindisch_: Don't do that then. :) | 17:14 |
*** lamar has joined #openstack | 17:15 | |
ewindisch_ | _0x44: imho, causing trouble is a sure way to get stuff fixed ;-) | 17:15 |
*** mahadev has quit IRC | 17:16 | |
*** mustfg has joined #openstack | 17:16 | |
jaypipes | ewindisch_: ya, there were a few issues in your execvp patch, but all because we don't have good functional tests hooked up to Hudson. | 17:18 |
*** cber2001 has quit IRC | 17:18 | |
ewindisch_ | jaypipes: yeah. Well, to be honest, I never expected it to go in so painlessly. | 17:19 |
ewindisch_ | it's not the type of change that goes in without a little bit of fallout. | 17:19 |
jaypipes | :) | 17:20 |
*** mustfg2 has joined #openstack | 17:23 | |
ewindisch_ | If anything else pops up, I don't mind getting related bugs assigned to me. I'll be around. | 17:24 |
*** mustfg has quit IRC | 17:29 | |
*** ewindisch_ is now known as ewindisch | 17:32 | |
*** pvo has quit IRC | 17:34 | |
*** Nacx has quit IRC | 17:36 | |
*** gesse has joined #openstack | 17:36 | |
*** dprince has quit IRC | 17:36 | |
*** mRy has joined #openstack | 17:37 | |
*** mRy is now known as Guest17739 | 17:38 | |
_0x44 | jaypipes: Should the reviewer who gives a Disapprove or Needs Fixing set a merge prop back to WIP? | 17:39 |
*** gesse has quit IRC | 17:40 | |
*** Guest23691 has quit IRC | 17:41 | |
jaypipes | _0x44: no, the submitter should set it to WIP when they start making changes/fixes. | 17:43 |
_0x44 | jaypipes: k, thanks | 17:43 |
*** joearnold has joined #openstack | 17:49 | |
*** lamar has quit IRC | 17:50 | |
jaypipes | sirp_, cory: FYI: http://code.google.com/p/sqlalchemy-migrate/issues/detail?id=117 | 17:52 |
dubs | jaypipes: thanks for the heads up | 17:53 |
*** matclayton has joined #openstack | 17:54 | |
*** matclayton has joined #openstack | 17:54 | |
*** grapex has joined #openstack | 17:56 | |
*** dprince has joined #openstack | 17:56 | |
*** dendro-afk is now known as dendrobates | 17:58 | |
*** irahgel has left #openstack | 18:01 | |
*** pvo has joined #openstack | 18:08 | |
*** mahadev has joined #openstack | 18:12 | |
*** eikke has joined #openstack | 18:13 | |
*** allsystemsarego has quit IRC | 18:18 | |
sandywalsh | has anyone tried python-novaclient pip install on a mac yet? | 18:23 |
sandywalsh | sirp_ is getting this: http://paste.openstack.org/show/850/ | 18:23 |
*** mrsrikanth has quit IRC | 18:25 | |
*** hggdh has quit IRC | 18:26 | |
uvirtbot | New bug: #732756 in nova "Nova fails to boot Linux VHD images under XenServer" [Undecided,New] https://launchpad.net/bugs/732756 | 18:27 |
*** mray has joined #openstack | 18:29 | |
*** hggdh has joined #openstack | 18:33 | |
*** hggdh has joined #openstack | 18:33 | |
*** dendrobates is now known as dendro-afk | 18:33 | |
*** maple_bed has quit IRC | 18:34 | |
*** maplebed has joined #openstack | 18:36 | |
*** mRy has joined #openstack | 18:37 | |
*** mRy is now known as Guest31679 | 18:38 | |
justinsb | jaypipes, ttx, soren: Given that we all agree that we want real tests, what can I do to change my testing branch so that we can merge it? https://code.launchpad.net/~justin-fathomdb/nova/justinsb-openstack-api-volumes/+merge/50868 | 18:39 |
*** mustfg2 has quit IRC | 18:40 | |
JordanRi1ke | anyone have any thoughts on a swauth-add-user command just hanging, never returning anything? | 18:40 |
*** Guest17739 has quit IRC | 18:41 | |
jaypipes | justinsb: make it smaller. | 18:42 |
jaypipes | justinsb: and make it not add any non-test functionality. :) | 18:42 |
justinsb | jaypipes: I can make it smaller by pulling out the volume code. Should knock about 100 lines out. | 18:43 |
justinsb | jaypipes: But there is no added non-test functionality | 18:44 |
justinsb | jaypipes: Everything is gated on a testing-only flag | 18:44 |
justinsb | jaypipes: It's difficult to test something that doesn't actually work without making it work :-( | 18:45 |
*** MarcMorata has quit IRC | 18:45 | |
jaypipes | justinsb: the Keys.API is new functionality, no matter what you might say :) | 18:46 |
jaypipes | justinsb: but the 2 of us went over all of this yesterday. I'd like to hear from the others.. | 18:46 |
justinsb | jaypipes: OK, but it's not public, it's for testing only, it exposed a bug (that keys shouldn't even be required) | 18:46 |
justinsb | jaypipes: I'd like to hear from the others as well! | 18:47 |
kpepple | sandywalsh , sirp_ : no, python-novaclient installs fine on my mac -- http://paste.openstack.org/show/852/ | 18:49 |
sandywalsh | thanks kpepple I just pushed a little pypi distribution tweak. Glad to know it worked. | 18:50 |
*** enigma1 has joined #openstack | 18:55 | |
*** enigma1 has quit IRC | 18:56 | |
justinsb | jaypipes: Feeling a little unhappy about the consistency of your review policy, on my testing patch compared to the execvp patch :-( | 18:56 |
*** blueadept has joined #openstack | 18:56 | |
jaypipes | justinsb: the execvp patch was a) tied to a specific bug, b) reviewed by 3 reviewers, all with Approved, and b) didn't add any new functionality. What was inconsistent about it? | 18:57 |
*** enigma1 has joined #openstack | 18:58 | |
*** enigma has quit IRC | 18:58 | |
Vek | for cell phones, blame the FCC | 18:59 |
JordanRi1ke | swauth-add-user hangs anyone have any suggestions? | 18:59 |
*** mgoldmann has joined #openstack | 18:59 | |
Vek | heh | 19:00 |
kpepple | JordanRi1ke: got a paste ? | 19:00 |
JordanRi1ke | it literally doesn't do anything | 19:00 |
JordanRi1ke | just hangs | 19:00 |
JordanRi1ke | and I get nothing in /var/log/messages | 19:01 |
JordanRi1ke | http://paste.openstack.org/show/854/ | 19:02 |
JordanRi1ke | my proxy-server.conf | 19:02 |
*** enigma1 has quit IRC | 19:02 | |
Vek | bah, mix, sorry 'bout that. | 19:02 |
justinsb | jaypipes: Just want to clear the air. It changed hundreds of lines of code (and there are ways to avoid that), it wasn't agreed in the Cactus plan (whereas testing is), the bug was only opened recently and was only theoretical, it only had two reviewers (one of which was you), I found a bug in it almost immediately (so there were likely more), and yet it only took 140 minutes to merge. That seems inconsistent to me. | 19:02 |
kpepple | JordanRi1ke: can you telnet to port 8080 ? | 19:04 |
JordanRi1ke | i can curl it,and I get a 404 | 19:04 |
JordanRi1ke | trying to curl https://ip:8080/v1 gives me a 412 Precondition Failed error | 19:05 |
JordanRi1ke | but I am not sure if that is normal or not | 19:05 |
jaypipes | justinsb: are you talking about the original branch or soren's branch that had fixes to that branch? | 19:05 |
kpepple | JordanRi1ke: you don't have the headers which is probably the 412 error | 19:05 |
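[Editor's note: the 412 kpepple mentions comes from hitting the storage URL without authenticating first; a sketch of the usual curl sequence, where the account, user, and key values are placeholders rather than values from this log:]

```
# 1) authenticate against swauth; the response carries
#    X-Storage-Url and X-Auth-Token headers
curl -k -v -H 'X-Auth-User: system:root' -H 'X-Auth-Key: testpass' \
    https://127.0.0.1:8080/auth/v1.0

# 2) then talk to the returned storage URL with the token
curl -k -v -H 'X-Auth-Token: <token from step 1>' \
    https://127.0.0.1:8080/v1/AUTH_<account>
```

A bare GET on /v1 with no auth headers, as above, is exactly what produces the 412.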
justinsb | jaypipes: The original branch. The fixes obviously should have been rushed through! :-) | 19:05 |
JordanRi1ke | kpepple: I can give you login info if you just want to poke at it really quick? | 19:06 |
kpepple | JordanRi1ke: when you execute swauth-add-user, does it just come back? or does it not even exit? | 19:06 |
JordanRi1ke | never exits | 19:06 |
jaypipes | justinsb: well, the bugs weren't evident because we don't have any functional tests linked to Hudson. And, like I said, the branch was *linked to a bug report*. I take bug fixes as precedent over branches that aren't linked to either a blueprint or a bug report. Finally, the execvp branch did indeed changes lots of lines of code, but the changes to those lines of code were all almost the same. | 19:07 |
*** iRTermite has quit IRC | 19:07 | |
*** iRTermite has joined #openstack | 19:07 | |
*** bcwaldon has quit IRC | 19:07 | |
jaypipes | justinsb: if you want to get on me about treating you unfairly in review, you will have to bring up a review of another branch that is more similar to yours. | 19:08 |
*** enigma has joined #openstack | 19:08 | |
jaypipes | justinsb: in addition, your merge proposals are a tangled web of pre-req dependencies, and the execvp branch had no such complication. | 19:08 |
kpepple | JordanRi1ke: if you leave it, does it eventually time out ? | 19:09 |
jaypipes | justinsb: so while it may *seem* like I'm giving you a hard time, I'm actually trying *really* hard to give your branches the review time they deserve and to get others involved in the review. The fact that your branches are large and intricately interrelated does not make my job easy. | 19:09 |
JordanRi1ke | kpepple: not sure how long I would have to leave it for that to happen | 19:10 |
JordanRi1ke | it has been left alone for 2+ minutes before I killed it | 19:10 |
*** matclayton has quit IRC | 19:10 | |
kpepple | JordanRi1ke: hold on ... let me look on that -- we are using bufferedhttp in swift | 19:11 |
*** photron has joined #openstack | 19:11 | |
justinsb | jaypipes: I mean this in a non-confrontational manner: Should I submit a series of small patches that don't really individually do much / make a lot of sense? I feel my patches are bigger because they're achieving more. | 19:12 |
*** gaveen has quit IRC | 19:13 | |
gholt | JordanRi1ke: What's your swauth-add-user command line? Do you have -A https://127.0.0.1:8080/auth/ on it? Since you're using SSL, the default -A value won't work. (and I like run-on sentences :) | 19:14 |
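[Editor's note: gholt's point as a command-line sketch; the key and the account/user/password are placeholders, not values from this log:]

```
# the swauth tools default to http://127.0.0.1:8080/auth/, so an
# SSL-only proxy must be pointed at explicitly with -A
swauth-add-user -A https://127.0.0.1:8080/auth/ -K swauthkey \
    -a system root testpass
```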
JordanRi1ke | gholt: well that returned right away at least, gave me a 500 error | 19:16 |
jaypipes | justinsb: on phone, sorry, hold on.. | 19:16 |
uvirtbot | New bug: #732777 in nova "Lockfile sync failing on Mac OS X" [Undecided,New] https://launchpad.net/bugs/732777 | 19:17 |
JordanRi1ke | gholt: http://paste.openstack.org/show/855/ | 19:18 |
gholt | Heheh, well now that you've tried many variations, did you check the logs to see why you were getting a 500 earlier? | 19:19 |
kpepple | JordanRi1ke: your swauth-add-user command is hanging on line 74 ... trying to do a conn = http_connect(parsed.hostname, parsed.port, 'PUT', path, headers, ssl=(parsed.scheme == 'https')) | 19:20 |
*** fabiand_ has joined #openstack | 19:20 | |
kpepple | JordanRi1ke: which is trying to ensure the admin account exists | 19:21 |
JordanRi1ke | kpepple: brb 15 | 19:21 |
JordanRi1ke | kpepple: how do I fix that? | 19:21 |
kpepple | JordanRi1ke: the admin account auth defaults to http://127.0.0.1:8080/auth/ ... can you use the -A https://50.56.93.69:8080/auth/ ? | 19:24 |
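[Editor's note: kpepple's diagnosis follows from how the tool derives its connection from the parsed -A URL: the ssl flag comes from the URL scheme, so the http:// default silently speaks plaintext to an SSL-only proxy. A small sketch of that parsing, with Python 3's urlparse standing in for the Python 2 code of the era:]

```python
from urllib.parse import urlparse

# swauth-add-user passes ssl=(parsed.scheme == 'https') to http_connect,
# so the scheme of the -A URL decides whether TLS is used at all
default = urlparse("http://127.0.0.1:8080/auth/")      # built-in default
explicit = urlparse("https://50.56.93.69:8080/auth/")  # what -A should supply

print(default.scheme == "https")   # False: plaintext against an SSL proxy
print(explicit.scheme == "https")  # True: TLS, as the proxy expects
print(explicit.hostname, explicit.port)
```

This is why passing -A with the https:// URL makes the command return immediately instead of hanging.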
*** dendro-afk is now known as dendrobates | 19:26 | |
kpepple | JordanRi1ke: do you see anything about the 500 error in the logs? | 19:27 |
kpepple | JordanRi1ke: that 500 should be a stack trace in the logs | 19:27 |
*** burris has quit IRC | 19:30 | |
*** jbryce has joined #openstack | 19:30 | |
*** mRy has joined #openstack | 19:38 | |
*** ewindisch has quit IRC | 19:38 | |
*** joearnold has quit IRC | 19:38 | |
*** mRy is now known as Guest46752 | 19:38 | |
*** ewindisch has joined #openstack | 19:38 | |
*** dfg has joined #openstack | 19:38 | |
btorch | is it normal for the cloud-controller (which is also a compute-node) to pick some other compute-node to use when launching a vm ? | 19:40 |
*** Guest31679 has quit IRC | 19:41 | |
btorch | I just launched a new vm from the cc and to my surprise it launched on another compute-node I had just attached to the nova environment | 19:41 |
kpepple | btorch: that is controlled by nova-scheduler | 19:41 |
*** MotoMilind has joined #openstack | 19:41 | |
MotoMilind | My openvpn (cloudpipe) instance stays pending. How do I go about debugging that? Thanks. | 19:42 |
btorch | kpepple: should I have the nova-scheduler also running on the other nova compute-nodes or just on the cloud-controller ? | 19:43 |
kpepple | btorch: just cloud controller ... nova-scheduler decides what compute node (anything running nova-compute that is enabled and active in the db) to run the vm on | 19:44 |
*** dendrobates is now known as dendro-afk | 19:45 | |
btorch | kpepple: cool so in the nova compute-node I will only have the nova-compute running | 19:45 |
kpepple | btorch: probably ... not sure if you need nova-instancemonitor (for monitoring) on there or not | 19:46 |
btorch | MotoMilind: have you checked the nova logs ? /var/log/nova and perhaps the instance console log if there is one /var/lib/nova/instances | 19:46 |
btorch | kpepple: don't know either but that wasn't part of the installation procedures so it's not installed anywhere | 19:47 |
MotoMilind | I only checked /var/log/nova/nova-api.log. It doesn't record anything. Not sure where cloudpipe is logging; I didn't see a log file specifically for it. I will check if there is an instance log file. Thanks. | 19:47 |
kpepple | btorch: it's not required ... but i think it gathers rrd stats for you (which might be nice) | 19:47 |
*** dendro-afk is now known as dendrobates | 19:48 | |
btorch | kpepple: true I'll check that out eventually | 19:48 |
btorch | kpepple: thanks | 19:48 |
kpepple | btorch: np | 19:48 |
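[Editor's note: the per-node service layout kpepple describes, summarized as a sketch; service names are as discussed above, and exact sets vary by deployment:]

```
cloud controller: nova-api, nova-scheduler, nova-network,
                  nova-compute (optional; the controller can also host VMs)
compute nodes:    nova-compute only
```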
*** mgoldmann has quit IRC | 19:48 | |
MotoMilind | Hmm, several log files have 'cloudpipe' in them. I will go through them now. | 19:49 |
*** joshuamckenty has joined #openstack | 19:52 | |
JordanRi1ke | kpepple: when I use the IP I get a 401 | 19:54 |
*** enigma has quit IRC | 19:55 | |
*** clauden has joined #openstack | 19:55 | |
kpepple | JordanRi1ke: where are your logs on that server ? i didn't see them anywhere (at least in /var/log/ and /root/swift/) | 19:55 |
JordanRi1ke | I haven't set a specific location so they just go to /var/log/messages AFAIK | 19:55 |
JordanRi1ke | at least that is where I see some http request info | 19:56 |
*** joearnold has joined #openstack | 19:56 | |
*** enigma has joined #openstack | 19:56 | |
*** dendrobates is now known as dendro-afk | 19:56 | |
uvirtbot | New bug: #732801 in nova "Sporadic failure when creating XS vms from machine-images" [Undecided,New] https://launchpad.net/bugs/732801 | 19:56 |
letterj | JordanRi1ke: Was it a software version issue that ended up being the problem yesterday evening? | 19:57 |
*** gunga has left #openstack | 19:58 | |
JordanRi1ke | letterj: yeah, eventlet 0.9.12, upgraded to 0.9.13 and object-replicator started working | 19:58 |
JordanRi1ke | interestingly, that install, and also this brand new install | 19:58 |
JordanRi1ke | seem to have the same issue, where swauth-add-user just hangs and returns nothing | 19:58 |
JordanRi1ke | both were installed from swift-core packages | 19:58 |
*** brd_from_italy has joined #openstack | 19:58 | |
*** dendro-afk is now known as dendrobates | 19:58 | |
letterj | JordanRi1ke: I wonder if there is an issue with package dependencies in the current ppas? | 19:59 |
JordanRi1ke | also, I used the swift multi node install doc, so if anything is wrong in that, that is what I have done | 19:59 |
JordanRi1ke | letterj: my new install today pulled the right version | 19:59 |
*** rcc has quit IRC | 19:59 | |
JordanRi1ke | in the packages it has a depends on eventlet 0.9.8 or greater, when it should be 0.9.13 or greater now | 20:00 |
JordanRi1ke | so, yes, there is :-D | 20:00 |
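[Editor's note: the packaging bug letterj offers to file below would be fixed by tightening the version constraint in the swift package's debian/control; a sketch, with the surrounding stanza illustrative:]

```
Depends: ${python:Depends}, python-eventlet (>= 0.9.13), ${misc:Depends}
```

The existing constraint (>= 0.9.8) lets apt keep the 0.9.12 eventlet that broke the object-replicator.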
letterj | A bug needs to be created for that. I can create one if you want me to | 20:00 |
JordanRi1ke | i can do it this weekend, but if you want to, feel free | 20:01 |
letterj | I'll take care of it. | 20:01 |
JordanRi1ke | letterj = ? | 20:01 |
openstackhudson | Project swift build #216: SUCCESS in 29 sec: http://hudson.openstack.org/job/swift/216/ | 20:01 |
openstackhudson | Tarmac: Fixes to work with WebOb 1.0.1 and WebOb 1.0.3 | 20:01 |
*** scottsanchez has joined #openstack | 20:01 | |
scottsanchez | hi | 20:02 |
letterj | JordanRi1ke: ? | 20:02 |
*** jesse_ has joined #openstack | 20:02 | |
JordanRi1ke | who are you? I still don't know some people's handles | 20:02 |
*** JordanRi1ke is now known as JordanRinke | 20:02 | |
letterj | I sat next to you at the OpenStack Dave & Busters Outing in San Antonio | 20:03 |
JordanRinke | Payne? | 20:04 |
letterj | yes | 20:04 |
JordanRinke | ahh cool, I thought it was you but I wasn't entirely sure | 20:04 |
JordanRinke | I know you, just didn't know your handle :-D | 20:04 |
*** vvuksan1 has joined #openstack | 20:08 | |
gholt | I made a note to myself to update the swift multinode docs. The swauth-add-user line should have -A https://<AUTH_HOSTNAME>:8080/auth/ added to them. | 20:08 |
gholt | Same with most any other swauth- call in that doc. | 20:09 |
*** jesse_ is now known as anotherjesse | 20:11 | |
JordanRinke | so when I try that swauth-add-user command I get a 500 error and this in the log | 20:11 |
JordanRinke | Mar 10 20:11:08 swift1 proxy-server - - 10/Mar/2011/20/11/08 HEAD /v1/AUTH_.auth/system HTTP/1.0 404 - - - - - - txde82ffdc-2610-4f44-a46b-2df9301f926a - 0.0038 | 20:11 |
JordanRinke | Mar 10 20:11:08 swift1 proxy-server - - 10/Mar/2011/20/11/08 PUT /v1/AUTH_.auth/system HTTP/1.0 404 - - - - - - txde82ffdc-2610-4f44-a46b-2df9301f926a - 0.0028 | 20:11 |
JordanRinke | Mar 10 20:11:08 swift1 proxy-server - - 10/Mar/2011/20/11/08 HEAD /v1/AUTH_.auth/system HTTP/1.0 404 - - - - - - tx2842832b-da01-426e-b939-d7634fe2d6c3 - 0.0028 | 20:11 |
gholt | Grrr | 20:12 |
gholt | JordanRinke: Sorry, looks like another step is missing from the doc. | 20:12 |
gholt | JordanRinke: Run swauth-prep -A https://<AUTH_HOSTNAME>:8080/auth/ -K swauthkey | 20:13 |
gholt | JordanRinke: Then continue with your add-user call | 20:13 |
JordanRinke | that command gives me a 500 server error too | 20:14 |
JordanRinke | Mar 10 20:13:51 swift1 proxy-server - - 10/Mar/2011/20/13/51 PUT /v1/AUTH_.auth HTTP/1.0 503 - - - - - - txf83f0615-4d41-4b19-9913-e3dd33ae0465 - 0.0036 | 20:14 |
gholt | There should be something else near that line indicating why it 503d | 20:15 |
JordanRinke | nope | 20:15 |
gholt | Okay, uhm. tail it, hit enter a bunch, and in another term run the swauth-prep and see what all it does | 20:16 |
JordanRinke | well it just posts one line | 20:16 |
JordanRinke | Mar 10 20:16:19 swift1 proxy-server - - 10/Mar/2011/20/16/19 PUT /v1/AUTH_.auth HTTP/1.0 503 - - - - - - tx2ee386b3-e5a2-4805-88bc-8fbcd8d58001 - 0.0066 | 20:17 |
*** daveiw has quit IRC | 20:17 | |
JordanRinke | gholt - want a login to this box? | 20:18 |
gholt | I'll have to get back to you some other time; doing a prod upgrade right now. | 20:18 |
jaypipes | justinsb: sorry man, been on phone with u-verse tech support for an hour... fucking morons. | 20:19 |
JordanRinke | ive tried adding "LOG_LEVEL = debug", "log_level = debug", "set log_level = debug" | 20:20 |
JordanRinke | all of which did not increase the logging, I just get that one line | 20:20 |
justinsb | jaypipes: No problem! At least you're online - can't be too bad! | 20:20 |
jaypipes | justinsb: oh yes it is. | 20:20 |
justinsb | jaypipes: So you wouldn't recommend u-verse? | 20:20 |
*** h0cin has quit IRC | 20:21 | |
uvirtbot | New bug: #732813 in swift "Possible eventlet dependency problem" [Undecided,New] https://launchpad.net/bugs/732813 | 20:21 |
jaypipes | justinsb: it's fine until you need to talk to a tech support person who has any clue what they are doing. It took an hour back and forth between 6 people to finally land on a level 2 tech that knew how to unblock port 25 outbound for me. retarded. | 20:22 |
justinsb | jaypipes: You're trying to run your own SMTP server from home? I thought there were too many IP blacklists to make that really practical these days.. | 20:23 |
jaypipes | justinsb: just trying to send an email from my rackspace account. no luck still. | 20:24 |
JordanRinke | jaypipes: I just use OWA at home :/ | 20:25 |
*** joearnold has quit IRC | 20:26 | |
jaypipes | JordanRinke: OWA is the reason I have spent the better part of 2 hours trying to solve this problem. It is by far the worst-written software ever. I mean, come on, reply to someone and it doesn't even show you the original text in a way that the receiver of your response can tell who said what. It's a complete piece of shit. | 20:27 |
letterj | JordanRinke: Can you take that transaction number and search the logs on the account nodes? | 20:27 |
*** hadrian has quit IRC | 20:28 | |
justinsb | jaypipes: Auto-forward to gmail? | 20:28 |
jaypipes | JordanRinke: and now it seems that even though I've gotten port 25 opened up, I can't even log in to mail.rackspace.com to send an email via SMTP... grr. so friggin annoyed right now with RS IT's delusion that "Oh, everyone at Rackspace must be on a RS network... Where is your cube, I'll just come over and fix things." Argh. | 20:29 |
jaypipes | justinsb: it doesn't work. tried it. can't get gmail to accept the rackspace mail server because of a bad CA on it. | 20:29 |
justinsb | jaypipes: Sounds like a fun morning for you! | 20:30 |
jaypipes | justinsb: morning was fine. afternoon has been for shit. | 20:31 |
justinsb | jaypipes: Sorry, forgot you're not a left-coaster! | 20:32 |
jaypipes | justinsb: approved on test client. nice breakout. | 20:33 |
justinsb | jaypipes: Thanks, though it doesn't really do anything. Is this the way to go? | 20:33 |
jaypipes | justinsb: remember that I'm not some review dictator. I'm only one of more than a dozen reviewers in nova-core... | 20:34 |
jaypipes | justinsb: for this specific case, I believe breaking into these chunks is the way to go. For this specific case, yes. | 20:34 |
JordanRinke | no logs soo | 20:34 |
JordanRinke | I tried swift-account-replicator /etc/swift/account-server.conf | 20:34 |
justinsb | jaypipes: OK. More launchpad karma for me, I guess. | 20:35 |
JordanRinke | and I got an error saying | 20:35 |
jaypipes | justinsb: heh, I don't even think about the karma... :) | 20:35 |
JordanRinke | ConfigParser.NoSectionError: No section: 'account-server' | 20:35 |
JordanRinke | which looking at account-server.conf | 20:35 |
JordanRinke | there isn't a section specifically called account-server but there is one called | 20:35 |
*** reldan has quit IRC | 20:36 | |
JordanRinke | app:account-server | 20:36 |
JordanRinke | via the doc here http://swift.openstack.org/howto_installmultinode.html | 20:36 |
*** lucas_ is now known as lucas | 20:36 | |
*** lucas has joined #openstack | 20:37 | |
JordanRinke | I changed that to just account-server and it looks like it is launching and binding on a port now | 20:37 |
JordanRinke | sed 's/app://g' *.conf | 20:37 |
JordanRinke | errr | 20:38 |
*** mRy has joined #openstack | 20:38 | |
*** mRy is now known as Guest46452 | 20:38 | |
*** joshuamckenty has quit IRC | 20:39 | |
*** johnpur has quit IRC | 20:40 | |
*** burris has joined #openstack | 20:40 | |
JordanRinke | ok | 20:41 |
JordanRinke | so that is the problem | 20:41 |
JordanRinke | the proxy-server.conf requires app:proxy-server | 20:42 |
*** Guest46752 has quit IRC | 20:42 | |
JordanRinke | the rest of the conf files, you can't have the app: prefix | 20:42 |
JordanRinke | so the multi node doc is wrong, and also that isnt consistent across services | 20:42 |
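The section-name mismatch JordanRinke hit is easy to reproduce with Python's stock config parser (shown here with the Python 3 `configparser` spelling; the 2011-era traceback names the old `ConfigParser` module). A plain parser only finds a section under its exact written name, `app:` prefix included — this is a minimal sketch of the symptom, not swift's actual config-reading code:

```python
import configparser

# A paste-deploy style config file, as the multinode doc writes it.
CONF = """\
[DEFAULT]
bind_port = 6002

[app:account-server]
use = egg:swift#account
"""

parser = configparser.ConfigParser()
parser.read_string(CONF)

# The section is stored under its exact written name, "app:" prefix and all...
print(parser.has_section("app:account-server"))  # True
print(parser.has_section("account-server"))      # False

# ...so code that asks for the bare name fails just like the replicator did.
try:
    parser.get("account-server", "use")
except configparser.NoSectionError as err:
    print("ConfigParser.NoSectionError: No section:", err.section)
```

Paste-deploy-aware loaders know how to resolve the `app:`-prefixed form, which is why a version mismatch between the doc and the installed packages shows up as this error.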
openstackhudson | Project nova build #624: SUCCESS in 1 min 53 sec: http://hudson.openstack.org/job/nova/624/ | 20:43 |
openstackhudson | Tarmac: Add a new IptablesManager that takes care of all uses of iptables. | 20:43 |
openstackhudson | Port all uses of iptables (in linux_net and libvirt_conn) over to use this new manager. | 20:43 |
openstackhudson | It wraps all uses of iptables so that each component can maintain its own set of rules without interfering with other components and/or existing system rules. | 20:43 |
openstackhudson | iptables-restore is an atomic interface to netfilter in the kernel. This means we can make a bunch of changes at a time, minimising the number of calls to iptables. | 20:43 |
JordanRinke | lol well at least that got swauth-prep to run | 20:43 |
JordanRinke | i still get a 500 trying to swauth-add-user | 20:43 |
nidO | Hi all, would anyone be able to point me in the direction of any obvious mistakes I may have made during swift setup that cause attempts to create containers to return a 404 on each storage node? | 20:43 |
*** bkkrw has joined #openstack | 20:44 | |
JordanRinke | moving in the right direction now, I have logs on the nodes now | 20:45 |
* soren high-fives openstackhudson | 20:47 | |
*** bcwaldon has joined #openstack | 20:48 | |
JordanRinke | nid0 does your device actually exist? | 20:48 |
nidO | JordanRinke: each node is there and seems to be setup mostly correctly, i've an auth/proxy node and 3 storage nodes, all 3 nodes are setup in all 3 rings and balanced, i've got a user created through devauth and can auth the user fine | 20:49 |
JordanRinke | this is the error I am getting on the account nodes now: http://paste.openstack.org/show/858/ | 20:50 |
JordanRinke | show me the error you are getting nid0? | 20:50 |
nidO | trying to directly upload a file straight off as suggested in the docs just gives me a 400 error for creating the container, then a 404 for the file upload | 20:50 |
*** Guest46452 has quit IRC | 20:50 | |
nidO | JordanRinke, sec | 20:50 |
JordanRinke | also | 20:51 |
JordanRinke | if you run | 20:51 |
*** Guest46452 has joined #openstack | 20:51 | |
* jaypipes thinks he's finally figured out how to get around OWA... \o/ ty gmail and tr3buchet... | 20:51 | |
JordanRinke | netstat -tulpn do you have services listening on 6000,6001,6002 if you did a default multi node install? | 20:51 |
* jaypipes very much looking forward to removing a particular bookmark from his browser toolbar... | 20:52 | |
JordanRinke | as i have just found out, part can be working while the other is failing | 20:52 |
*** mgoldmann has joined #openstack | 20:52 | |
_0x44 | soren: Your test fails on my machine | 20:53 |
nidO | yep, theyre listening on the ports - the syslogs on each storage nodes show the requests actually being received too, but then failing, just pastebinning the errors both ends | 20:54 |
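The `netstat -tulpn` check suggested above can also be scripted as a quick TCP connect test; a minimal Python sketch (host and port list are assumptions — 6000/6001/6002 are just the multinode doc's defaults for the object, container and account servers):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

# Substitute whatever ports your rings actually use.
for port in (6000, 6001, 6002):
    state = "listening" if port_open("127.0.0.1", port) else "NOT listening"
    print(port, state)
```

As the conversation shows, a service can be bound and accepting connections while a later step still fails, so this only rules out one class of problem.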
*** anotherjesse has quit IRC | 20:54 | |
nidO | JordanRinke: proxy-side is http://pastebin.com/9G4Q3GWa | 20:56 |
nidO | all 3 storage nodes' syslogs show: http://pastebin.com/byWDaEsD | 20:56 |
jarrod | is anyone using vlan manager where the instance is assigned the public ip? | 20:57 |
*** MarcMorata has joined #openstack | 20:58 | |
btorch | kpepple: you mentioned that the nova-scheduler will pick the compute node when deploying an image .. I assume from the services table ? | 21:01 |
kpepple | btorch: depends on the scheduler that you choose (there are different ones), but yes | 21:02 |
kpepple | btorch: you can check the services table with nova-manage | 21:02 |
btorch | I no longer have the nova-scheduler running on the compute-node but the services table still shows it there | 21:02 |
kpepple | btorch: shows it enabled or alive ? | 21:03 |
btorch | kpepple: cool just saw it and will disable it | 21:03 |
jaypipes | worst warning ever in documentation (from sqlalchemy migrate docs): "Warning: test command executes actual script, be sure you are NOT doing this on production database." | 21:03 |
btorch | kpepple: shows disabled 0 | 21:03 |
*** lvaughn has quit IRC | 21:04 | |
kpepple | btorch: yeah, not sure why we use disabled or deleted in the negative form | 21:04 |
btorch | bash style error code maybe :) | 21:06 |
btorch | nevermind to that :) | 21:07 |
jaypipes | sirp_, dubs: I'm *really* tempted to rip out sqlalchemy-migrate and replace with a simple upgrade/downgrade mechanism that just takes straight SQL code and executes it against the database... | 21:07 |
jaypipes | notmyname, creiht, gholt: how do you guys handle database migrations, btw? I know you don't use SQLAlchemy, obviously, but how do you handle changes to the schema? | 21:08 |
JordanRinke | ok, so the app: issue has to do with the version | 21:08 |
*** ddumitriu has joined #openstack | 21:09 | |
*** dendrobates is now known as dendro-afk | 21:09 | |
JordanRinke | the doc doesn't tell you to add the ppa to the storage nodes, which is just a step I overlooked | 21:09 |
tr3buchet | jaypipes: anytime! | 21:09 |
JordanRinke | ok, upgraded everything to the same version again, added the app: stuff back and I am getting the same error :x | 21:13 |
*** ccustine has joined #openstack | 21:17 | |
*** MarkAtwood has quit IRC | 21:18 | |
dubs | jaypipes: are you still using 0.6-1 ? | 21:18 |
*** lvaughn has joined #openstack | 21:19 | |
btorch | kpepple: cool thanks, it seems like after disabling the scheduler on box2 it launched it on box1 | 21:19 |
*** joearnold has joined #openstack | 21:19 | |
JordanRinke | le sigh | 21:20 |
kpepple | btorch: np | 21:20 |
JordanRinke | kpepple: any guess on swauth-add-user not working after a good swauth-prep | 21:20 |
kpepple | JordanRinke: sorry, lost the thread on what you were doing. are you still getting the same hanging problem ? or is it returning 404/500 ? | 21:21 |
JordanRinke | it returns a 500 | 21:21 |
JordanRinke | we eventually found out my services werent binding | 21:21 |
JordanRinke | which i just figured out was a versioning issue and the conf file, based on the install doc | 21:21 |
JordanRinke | so that is all setup now, they bind etc | 21:21 |
JordanRinke | and I can swauth-prep properly | 21:21 |
JordanRinke | when I swauth-add-user I get a 500 | 21:22 |
JordanRinke | and this on the node: | 21:22 |
JordanRinke | Mar 10 21:20:50 swift2 account-server 50.56.93.69 - - [10/Mar/2011:21:20:50 +0000] "HEAD /sdb1/3715/AUTH_.auth" 204 - "tx3a11d1de-e6ff-49a1-ae2b-3bceeec2607b" "-" "-" 0.0018 "" | 21:22 |
JordanRinke | Mar 10 21:20:50 swift2 object-server 50.56.93.69 - - [10/Mar/2011:21:20:50 +0000] "GET /sdb1/29394/AUTH_.auth/.token_4/AUTH_itk9e47de099e294b3cbd0b810d991b9514" 404 - "-" "txbd6a8ebc-c63a-4c28-92bb-37ded7233028" "-" 0.0003 | 21:22 |
JordanRinke | Mar 10 21:20:50 swift2 account-server 50.56.93.69 - - [10/Mar/2011:21:20:50 +0000] "HEAD /sdb1/3715/AUTH_.auth" 204 - "tx45e1ed69-8638-4ee1-b0d3-65591b6782a2" "-" "-" 0.0017 "" | 21:22 |
kpepple | JordanRinke: does it give you account creation error or user creation error ? | 21:23 |
JordanRinke | 500 on both | 21:24 |
*** photron has quit IRC | 21:24 | |
markwash | eday: I'm not sure quite what you mean by exceptions with data limits | 21:24 |
JordanRinke | Account creation failed: 500 Server Error | 21:24 |
JordanRinke | User creation failed: 500 Server Error | 21:24 |
markwash | eday: right now I'm working on adding stuff like "OnsetFileContentQuotaError" which could then be caught explicitly | 21:25 |
*** bkkrw has quit IRC | 21:25 | |
markwash | eday: but it feels like a noob move to add a different exception type for every way a function can fail | 21:25 |
kpepple | JordanRinke: okay, swauth-add-user tries to ensure that the admin account exists and that is probably the 204 or 404 error you are seeing in the log. | 21:26 |
MotoMilind | @btorch: Aaha, found this in nova-compute.log: | 21:26 |
larissa | MotoMilind: Error: "btorch:" is not a valid command. | 21:26 |
*** mgoldmann has quit IRC | 21:26 | |
MotoMilind | btorch: Aaha, found this in nova-compute.log: | 21:26 |
JordanRinke | the admin account? | 21:26 |
*** bkkrw has joined #openstack | 21:26 | |
btorch | larissa: hehe :) | 21:27 |
larissa | btorch: Error: "hehe" is not a valid command. | 21:27 |
btorch | :( | 21:27 |
eday | markwash: I'm just saying return ints of limits needed to construct a proper message | 21:27 |
MotoMilind | 2011-03-10 12:20:51,248 INFO nova.virt.libvirt_conn [-] instance instance-0000001d: Creating image | 21:28 |
MotoMilind | 2011-03-10 12:20:51,698 ERROR nova.exception [-] Uncaught exception | 21:28 |
MotoMilind | (nova.exception): TRACE: Traceback (most recent call last): | 21:28 |
MotoMilind | (nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/exception.py", line 116, in _wrap | 21:28 |
MotoMilind | (nova.exception): TRACE: return f(*args, **kw) | 21:28 |
*** MotoMilind has quit IRC | 21:28 | |
btorch | MotoMilind: what did u find ? | 21:28 |
kpepple | JordanRinke: yes, it checks the admin user and admin key with the call that is erroring with "account creation failed" | 21:28 |
*** MotoMilind has joined #openstack | 21:28 | |
MotoMilind | 2011-03-10 12:20:51,248 INFO nova.virt.libvirt_conn [-] instance instance-0000001d: Creating image | 21:28 |
MotoMilind | 2011-03-10 12:20:51,698 ERROR nova.exception [-] Uncaught exception | 21:28 |
MotoMilind | (nova.exception): TRACE: Traceback (most recent call last): | 21:28 |
MotoMilind | (nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/exception.py", line 116, in _wrap | 21:28 |
MotoMilind | (nova.exception): TRACE: return f(*args, **kw) | 21:28 |
*** MotoMilind has quit IRC | 21:28 | |
eday | markwash: I guess I don't see it as a noob move, it's just detail handling needed when you have multiple APIs with different naming conventions | 21:28 |
*** MotoMilind has joined #openstack | 21:29 | |
JordanRinke | i haven't created any users | 21:29 |
btorch | MotoMilind http://paste.openstack.org/ | 21:29 |
JordanRinke | im just trying to do the first command, to make my first user | 21:29 |
markwash | eday: okay, so you won't mind if I make three new quota error classes? | 21:29 |
markwash | eday: they'll end up being meaningful in themselves and allow easy catching | 21:29 |
eday | markwash: I won't, but I can't speak for others :) | 21:30 |
kpepple | JordanRinke: can you swauth-list ? | 21:31 |
justinsb | markwash, eday: How about one quota error class which has a parameter of the quota that's been violated? | 21:31 |
MotoMilind | Just pasted my stack trace | 21:31 |
*** reldan has joined #openstack | 21:31 | |
uvirtbot | New bug: #732866 in nova "OpenStack API authentication information leakage" [Undecided,New] https://launchpad.net/bugs/732866 | 21:31 |
MotoMilind | Turns out that I don't have ami-cloudpipe image | 21:32 |
JordanRinke | root@swift1:/etc/swift# swauth-list -A https://localhost:8080/auth/ -K sosecure | 21:32 |
JordanRinke | {"accounts": [{"name": "system"}]} | 21:32 |
markwash | justinsb: that sounds feasible as well, though I don't want to accidentally mess up any kwargs that are already part of Exception::__init__ | 21:32 |
JordanRinke | kpepple: nothing listed | 21:32 |
MotoMilind | in '/var/lib/nova/images/ami-cloudpipe' | 21:32 |
kpepple | JordanRinke: are you following http://swift.openstack.org/howto_installmultinode.html under "Create Swift admin account and test" | 21:33 |
JordanRinke | yes | 21:33 |
markwash | justinsb, eday: there might actually already be a good way to do with with the existing quotaerror class, researching now | 21:33 |
btorch | MotoMilind: there you go .. I never used those cloudpipe images. so you have it installed somewhere else I guess | 21:33 |
*** dprince has quit IRC | 21:34 | |
JordanRinke | kpepple: and I am failing on step 1 | 21:34 |
JordanRinke | I am using swauth, so I added the -A http://localhost:8080/auth/ | 21:34 |
kpepple | JordanRinke: and what super_admin key ? | 21:34 |
JordanRinke | the one I set in proxy-server.conf | 21:35 |
JordanRinke | I did the missing swauth-prep step too | 21:35 |
JordanRinke | using that password, and that ran with no errors | 21:35 |
MotoMilind | Actually, find / -name ami-cloudpipe returns empty. I don't have this image | 21:35 |
MotoMilind | I guess I need to locate it and download it | 21:35 |
eday | justinsb, markwash: whatever is easiest, although some core Python modules do have a convention of creating multiple class types for detailed exception handling | 21:36 |
* kpepple looks through swift/proxy/server.py for hints | 21:36 | |
MotoMilind | Does anyone know where I can find this image, ami-cloudpipe? | 21:37 |
*** nerens has quit IRC | 21:37 | |
justinsb | eday, markwash: I think that best practice is to have a base exception class that contains all the information (i.e. the violated quota, and ideally the relevant values). Then, if you believe that people will want to catch specific subsets, you create and throw a derived class for convenience | 21:38 |
justinsb | eday, markwash: At least that's what you do in languages where people worry about this sort of thing :-) | 21:38 |
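The shape justinsb describes — one base class carrying the violated quota and its values, plus thin subclasses for targeted catching — might look like this sketch; class and attribute names are illustrative (only `OnsetFileContentQuotaError` is taken from the discussion, and modern Python 3 syntax is used), not nova's real hierarchy:

```python
class QuotaError(Exception):
    """Base class: records which quota was violated and the values involved."""

    def __init__(self, quota_name, limit, requested):
        self.quota_name = quota_name
        self.limit = limit
        self.requested = requested
        super().__init__("Quota %r exceeded: requested %s, limit is %s"
                         % (quota_name, requested, limit))


class OnsetFileContentQuotaError(QuotaError):
    """Thin subclass so callers can catch this one case by type."""

    def __init__(self, limit, requested):
        super().__init__("onset_file_content", limit, requested)


# Catch the specific case by its subclass...
try:
    raise OnsetFileContentQuotaError(limit=10240, requested=20480)
except OnsetFileContentQuotaError as err:
    print("file content too large:", err.requested, ">", err.limit)

# ...or any quota violation at once through the base class.
try:
    raise OnsetFileContentQuotaError(limit=1, requested=2)
except QuotaError as err:
    print("violated quota:", err.quota_name)
```

Because the base class carries the data, callers that don't care which quota was hit can still build a proper message, which addresses eday's point about returning the limits.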
*** nRy has joined #openstack | 21:38 | |
btorch | MotoMilind: what OS are u using for the host ? | 21:39 |
MotoMilind | ubuntu | 21:39 |
eday | justinsb: sure, base quota class makes the most sense if there are common values | 21:39 |
btorch | MotoMilind: version ? | 21:39 |
MotoMilind | yep, checking, sorry | 21:39 |
MotoMilind | root@openstack01:~# uname -a | 21:40 |
MotoMilind | Linux openstack01 2.6.32-28-generic #55-Ubuntu SMP Mon Jan 10 23:42:43 UTC 2011 x86_64 GNU/Linux | 21:40 |
markwash | justinsb: right now, there is a code in ApiError which QuotaError blindly inherits | 21:40 |
markwash | justinsb: I think I'm going to overload that with my own string codes | 21:41 |
markwash | justinsb: that seems somewhat consistent with the approach you were talking about | 21:41 |
*** Guest46452 has quit IRC | 21:41 | |
justinsb | markwash: That sounds good; it looks like the EC2 API then exposes that in a separate element to avoid string parsing | 21:42 |
*** miclorb has joined #openstack | 21:43 | |
justinsb | markwash: The OpenStack API approach looks like it's totally fubar | 21:43 |
justinsb | markwash: So no point worrying about the OS API! | 21:43 |
markwash | justinsb: lol unfortunately agree on the fubar aspect | 21:44 |
MotoMilind | Even on my OpenStack CentOS install, I don't see the ami-cloudpipe image | 21:44 |
kpepple | JordanRinke: it looks like the 204 response in the logs indicates that the user was added but the 404 that follows says that the returned storage url is bad | 21:44 |
MotoMilind | I wonder if it is just named differently | 21:44 |
markwash | justinsb: but also unfortunately I still worry | 21:44 |
justinsb | markwash: Good luck with that | 21:45 |
JordanRinke | kpepple: so, suggestion? | 21:46 |
JordanRinke | i looked on the device, under accounts and I do see stuff there | 21:47 |
MotoMilind | Here you go! https://answers.launchpad.net/nova/+question/142822 There is no image available, I will have to create one | 21:49 |
*** hadrian has joined #openstack | 21:51 | |
kpepple | JordanRinke: i'm a bit baffled ... | 21:54 |
*** Anon349 has joined #openstack | 21:55 | |
*** Tino has left #openstack | 21:56 | |
JordanRinke | i see the account/object/container folders | 21:56 |
*** kashyap has quit IRC | 21:56 | |
JordanRinke | and they have stuff in them | 21:56 |
Anon349 | what? | 21:56 |
Anon349 | someone here with asperger? | 21:56 |
*** kashyap has joined #openstack | 21:57 | |
*** Anon349 has left #openstack | 21:58 | |
*** fabiand_ has quit IRC | 21:59 | |
kpepple | JordanRinke: i'm putting together my small cluster ... see if i have the same problem | 21:59 |
btorch | MotoMilind: sorry got busy ... have you tried just using an ubuntu UEC image to see if things work there ? | 22:02 |
soren | _0x44: The metadata one? | 22:03 |
soren | _0x44: Can you pastebin the error? | 22:03 |
_0x44 | soren: One sec | 22:03 |
_0x44 | soren: http://paste.openstack.org/show/861/ | 22:03 |
soren | callable? Why would it have to callable?!? | 22:04 |
soren | Oddness. | 22:04 |
_0x44 | soren: It's in the DirectCloudTestCase, so I'd imagine it's something around self.cloud.compute_api = proxy.compute | 22:05 |
*** lvaughn has quit IRC | 22:05 | |
*** lvaughn has joined #openstack | 22:05 | |
_0x44 | in test_direct.py on line 101 | 22:05 |
* soren wanders back to the ol'e drawing board | 22:05 | |
*** ivan has quit IRC | 22:07 | |
_0x44 | soren: The test succeeds in the regular CloudTestCase, but not in thedirect one | 22:07 |
soren | _0x44: Ah. | 22:07 |
soren | _0x44: Ok. | 22:07 |
justinsb | jaypipes: Well, the good news is that by breaking up the tests I'm finding more bugs. Another test, another bug - I'm probably batting about .600 | 22:08 |
nidO | JordanRinke: Just to compare my setup to yours as my problem's still baffling me and im only a couple of steps further than you - did you say you have accounts/container/object folders on the storage partitions of each node? | 22:09 |
JordanRinke | yeah | 22:09 |
nidO | hum, any idea at what point they got created for you? I have an accounts folder and can create/manipulate accounts and auth to them correctly, but there's nothing else on my storage partitions | 22:10 |
nidO | no objects or containers folders | 22:10 |
*** bcwaldon has quit IRC | 22:10 | |
JordanRinke | yeah | 22:10 |
JordanRinke | there is a missing command | 22:11 |
JordanRinke | did you run | 22:11 |
*** MarkAtwood has joined #openstack | 22:11 | |
JordanRinke | swauth-prep ? | 22:11 |
nidO | i'm using devauth | 22:11 |
JordanRinke | also, if your services arent actually running they won't be there | 22:11 |
JordanRinke | ah ok | 22:11 |
JordanRinke | are you doing a single box, or multi node setup? | 22:11 |
nidO | its multi node in virtual machines - i've got 1 proxy/auth box and 3 storage boxes - when i create a user it replicates out to the storage partitions fine | 22:12 |
JordanRinke | ok, so on your node | 22:13 |
JordanRinke | what version does | 22:13 |
JordanRinke | aptitude show swift-object | 22:13 |
*** ctennis has quit IRC | 22:13 | |
JordanRinke | show ? | 22:13 |
*** ivan has joined #openstack | 22:13 | |
nidO | on the proxy/auth node it shows as not installed and optional, on the storage nodes its 1.2.0-0ubuntu1~lucid0 | 22:14 |
Ryan_Lane | with nova 2011.1 there is supposed to be some kind of console service installed, right? | 22:16 |
Ryan_Lane | that was part of the missing files? | 22:17 |
Ryan_Lane | or is that 2011.1.1? | 22:18 |
Ryan_Lane | and if so, is that supposed to be in the release ppa now? | 22:18 |
*** grapex has left #openstack | 22:19 | |
dubs | ewindisch: you around? | 22:19 |
ewindisch | hey | 22:19 |
*** lvaughn_ has joined #openstack | 22:20 | |
dubs | hi there. quick question for you. i'm about to start work on some changes around the vif_rules.py script. i noticed you committed a change to it recently. | 22:20 |
JordanRinke | kpepple: fixed it | 22:20 |
ewindisch | dubs: yes | 22:20 |
kpepple | JordanRinke: how ? | 22:20 |
JordanRinke | kpepple: the IP in my proxy-server.conf was an internal IP | 20:20 |
kpepple | JordanRinke: ahhhhh | 22:21 |
JordanRinke | so, when the node tried to reach out to it, it failed | 22:21 |
JordanRinke | i really wish there was WAY better logging | 22:21 |
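For anyone retracing this fix later: the address the storage nodes call back to comes from the swauth filter section of proxy-server.conf. A hedged sketch of that section (hostname and key are placeholders, and `default_swift_cluster` is the option name as the swauth middleware of this era spelled it — check the sample config shipped with your version):

```ini
[filter:swauth]
use = egg:swift#swauth
super_admin_key = swauthkey
# This URL is handed out to storage nodes and clients; it must be
# reachable from them, not an internal-only address of the proxy box.
default_swift_cluster = local#https://proxy.example.com:8080/v1
```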
dubs | i was wondering if there was a reason that the first two apply_ methods pass the params dict to the commands, but in the third one you pass individual elements from the dictionary. | 22:21 |
dubs | not really a huge deal, just searching for a reason since it looks sort of deliberate | 22:22 |
ewindisch | dubs: brb.. new computer, checking out the code | 22:22 |
*** brd_from_italy has quit IRC | 22:23 | |
*** lvaughn has quit IRC | 22:23 | |
*** ctennis has joined #openstack | 22:24 | |
*** ctennis has joined #openstack | 22:24 | |
ewindisch | dubs: because I was being quick about it. I went back and tried to clean it up, but clearly missed some. | 22:27 |
*** ppetraki has quit IRC | 22:27 | |
ewindisch | and I see a bug in one of the calls to ebtables, which I'll have to now patch... thanks ;-) | 22:28 |
dubs | ewindisch: ah ok. thanks :) | 22:28 |
*** sebastianstadil has joined #openstack | 22:28 | |
*** adjohn has joined #openstack | 22:29 | |
*** joshuamckenty has joined #openstack | 22:34 | |
kpepple | justinsb: not sure if i agree with your 732907 bug. remember that nova-manage zipfile is an admin command not an end user command. the admin should be handing out credentials and it should fall to them to give out the correct ones. | 22:36 |
justinsb | kpepple: So you're in favor of having different sets of credentials for EC2 and OpenStack? Though I disagree with you, I'm glad somebody's finally had the guts to stand up for the other side! | 22:37 |
*** cascone has joined #openstack | 22:38 | |
justinsb | kpepple: I don't see why we would want two sets of credentials, even if we can mitigate it using a zipfile | 22:38 |
*** mRy has joined #openstack | 22:38 | |
justinsb | kpepple: I can just imagine the support calls! | 22:38 |
*** mRy is now known as Guest81466 | 22:38 | |
justinsb | kpepple: I would imagine that many installations would run both EC2 and OpenStack | 22:38 |
justinsb | kpepple: If we're going to rely on a 'magic file', we should do the right thing and use a secure client certificate! | 22:39 |
kpepple | justinsb: hehe ... not sure if i am in favor of it per se. i'm just saying that the end user shouldn't be getting the raw zipfile ... i imagine that they will use some kind of portal to sign up and then the portal will give them the proper credentials ... depending on what the operator decides to offer. | 22:40 |
kpepple | justinsb: no, i don't believe most installs will run both APIs supported | 22:40 |
kpepple | justinsb: maybe for small private clouds, but operators will use the OpenStack API since they probably can't use EC2 API | 22:40 |
kpepple | justinsb: not against the certificate idea | 22:41 |
justinsb | kpepple: We can hope :-) | 22:41 |
justinsb | kpepple: On both client certificates and OS API winning :-) | 22:41 |
kpepple | justinsb: for operators, OpenStack API is probably the only choice unless they want to be sued by Amazon for copyright ... | 22:42 |
*** nRy has quit IRC | 22:42 | |
justinsb | kpepple: There's also the point that the Amazon API simply isn't very good | 22:42 |
justinsb | kpepple: But anyway, I had to file a bug so we could track the issue | 22:43 |
kpepple | justinsb: i was always in favor of the Sun Cloud API but then ... i'm ex-sun | 22:43 |
justinsb | kpepple: It may be resolved as 'not a bug' - feel free to comment there that you expect operators to sort it out | 22:43 |
kpepple | justinsb: true, good to get it on the agenda for the summit | 22:43 |
justinsb | kpepple: But I think this is still not an ideal solution :-) | 22:44 |
justinsb | kpepple: I have a Sun Cloud sticker on my laptop :-) | 22:44 |
justinsb | kpepple: It's holding up much better than the OpenStack one! | 22:44 |
*** jero has quit IRC | 22:44 | |
eday | justinsb: re: user bug in APIs, it's not really relevant since the OS API won't even have a single auth option long term. Right now we have one, but what about when we support token based, oauth, and simple user/pass? Apps need to know how to pass credentials of any type, which can also include ec2 format | 22:44 |
kpepple | justinsb: sigh ... probably the only sun thing still holding up well :) | 22:44 |
justinsb | kpepple: Did you see the support for Solaris hosted iSCSI volumes? :-) | 22:45 |
justinsb | eday: I know that everyone's saying that everything's going to be pluggable etc etc, but I wanted to file the bugs in today's code because today's code has a nasty habit of being copied-and-pasted into tomorrow's :-) | 22:46 |
*** markwash has quit IRC | 22:47 | |
justinsb | eday: It's also a pretty serious bug! | 22:47 |
kpepple | justinsb: no, will have to look at that ... i actually still have a solaris boxen here | 22:47 |
justinsb | kpepple: I keep checking to see if I can get an Illumos/Nexenta install to actually work so that I can get OpenStack working on it | 22:48 |
justinsb | eday: Ooops... which bug did you mean? bug 732866 or bug 732907? | 22:50 |
uvirtbot | Launchpad bug 732866 in nova "OpenStack API authentication information leakage" [Undecided,New] https://launchpad.net/bugs/732866 | 22:50 |
uvirtbot | Launchpad bug 732907 in nova "OpenStack and EC2 APIs use different usernames and passwords" [Undecided,New] https://launchpad.net/bugs/732907 | 22:50 |
eday | justinsb: 732907 | 22:50 |
kpepple | justinsb: i have nova running partially on OpenSolaris ... but the lack of current libvirt seriously hampers that. i should check illumos ... | 22:50 |
justinsb | eday: Ah - sorry! | 22:50 |
justinsb | eday: Yes, I'm hoping this will just get fixed as part of the auth rework | 22:51 |
justinsb | eday: This = 732907 | 22:51 |
justinsb | eday: I opened it so that I could put a reference to the issue in the unit tests which worked around it | 22:51 |
justinsb | kpepple: Targeting Xen? I thought the Xen driver didn't use libvirt? | 22:52 |
Ryan_Lane | 732907 isn't too big of a deal if the front-end is using a system with an LDAP backend | 22:53 |
Ryan_Lane | it would be able to choose the correct credentials for the end-user | 22:53 |
jarrod | xen and openstack ughh | 22:53 |
jarrod | xen and using xenapi directly is awesum | 22:53 |
Ryan_Lane | and would also give the user a user-friendly way to log in. I think logging in with some ridiculously long string for username and password is silly | 22:54 |
justinsb | I'm not very familiar with the LDAP stuff, but I honestly just don't get why we want different credentials | 22:54 |
Ryan_Lane | I should be able to log in with a username and password of my choosing, and the front-end should handle the ec2 and openstack parts for me | 22:54 |
*** jc_smith has joined #openstack | 22:54 | |
justinsb | I fixed it as a bug fix, but broke a few people, so it's a sore point! | 22:54 |
Ryan_Lane | I should only ever see the ec2/openstack credentials if I request them so that I can use the api directly | 22:55 |
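The model Ryan_Lane sketches here can be illustrated with a small hedged example: a front-end that lets the user log in with a friendly username/password of their choosing, while the long machine-generated EC2/OpenStack credentials stay hidden unless explicitly requested for direct API use. All names below (`CredentialBroker`, `provision`, `reveal`) are hypothetical, not any nova API.

```python
import secrets

# Hypothetical front-end credential broker: the user never sees the
# long API credential pair unless they ask for it to use the API
# directly, as Ryan_Lane describes.
class CredentialBroker:
    def __init__(self):
        self._api_creds = {}  # username -> (access_key, secret_key)

    def provision(self, username):
        # Generate the unwieldy API credential pair once per user.
        creds = (secrets.token_hex(16), secrets.token_hex(32))
        self._api_creds[username] = creds
        return creds

    def reveal(self, username):
        # Only surfaced when the user requests direct API access.
        return self._api_creds[username]

broker = CredentialBroker()
broker.provision("ryan")
access, secret = broker.reveal("ryan")
```

The point of the design is that the web login credential and the service credential are decoupled, so the front-end can rotate or regenerate API keys without touching the user's password.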
justinsb | I definitely agree that if we're using a front-end then a lot of the complexity should go away | 22:55 |
justinsb | But if we're using a front-end, we should probably do something truly secure | 22:55 |
Ryan_Lane | well, that's what https is for :) | 22:56 |
Ryan_Lane | I've been only slightly reading the email discussions on this | 22:56 |
justinsb | Let's just leave it for the design summit | 22:56 |
Ryan_Lane | since there is a ton of them, and they are incredibly long | 22:56 |
justinsb | I'm thinking we can get one representative from each side and put them in a ring | 22:57 |
Ryan_Lane | :D | 22:57 |
justinsb | With those comedy huge boxing gloves | 22:57 |
uvirtbot | New bug: #732902 in nova "uninformative error when invalid instance id is entered" [Undecided,New] https://launchpad.net/bugs/732902 | 22:57 |
uvirtbot | New bug: #732907 in nova "OpenStack and EC2 APIs use different usernames and passwords" [Undecided,New] https://launchpad.net/bugs/732907 | 22:57 |
uvirtbot | New bug: #732924 in nova "DescribeVolume returns owner as the user that created the volume, but not the project" [Undecided,New] https://launchpad.net/bugs/732924 | 22:57 |
Ryan_Lane | 732924 is currently causing me heartaches :) | 22:58 |
kpepple | justinsb: no, not targeting xen. didn't have it installed yet. | 22:58 |
Ryan_Lane | do we not return certain information as separate attributes so that we can match the EC2 api? | 22:59 |
Ryan_Lane | because I'd really love all resources to return the project as the owner, or as a project attribute | 22:59 |
justinsb | Ryan_Lane: This is likely to be almost as controversial as the password debate. I'm taking cover :-) | 23:01 |
Ryan_Lane | I don't see why | 23:01 |
Ryan_Lane | most things already do this | 23:01 |
Ryan_Lane | instances return project as owner | 23:01 |
Ryan_Lane | addresses show it via status | 23:01 |
Ryan_Lane | volumes show an owner, but it's the user, not the project | 23:01 |
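Ryan_Lane's three cases above can be sketched in code. Note the sample layouts are hypothetical stand-ins, not nova's exact wire format: instances return the project as `ownerId`, addresses embed it in the status string (format assumed here), and volumes report only the creating user (bug 732924), so the project is unrecoverable from the response.

```python
import re

# Hedged sketch of per-resource-type project extraction; the field
# layouts and the "(project)" status format are assumptions.
def project_of(resource_type, item):
    if resource_type == "instance":
        # Instances return the project as ownerId.
        return item["ownerId"]
    if resource_type == "address":
        # Addresses embed it in status, e.g. "in-use (myproject)".
        m = re.search(r"\((?P<proj>[^)]+)\)", item["status"])
        return m.group("proj") if m else None
    if resource_type == "volume":
        # Volumes expose an owner, but it is the *user*, not the
        # project (bug 732924), so nothing to extract.
        return None
    raise ValueError("unknown resource type: %s" % resource_type)
```

This is exactly the inconsistency that makes a per-project UI hard to build on top of the EC2 responses.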
*** joshuamckenty has quit IRC | 23:02 | |
Ryan_Lane | I guess adding an attribute could cause compatibility issues with ec2 in the future though | 23:02 |
justinsb | That seems fair. The very notion of projects is pretty deeply tied into the auth stuff. | 23:02 |
Ryan_Lane | yeah | 23:02 |
Ryan_Lane | that's why I need it :) | 23:03 |
Ryan_Lane | when a user logs in my web interface, they aren't tied to a specific project | 23:03 |
Ryan_Lane | they are logged in to all of their projects | 23:03 |
Ryan_Lane | the web interface handles switching between the projects for the user | 23:03 |
Ryan_Lane | so if I can't tell which project a volume belongs to, I have problems | 23:03 |
Ryan_Lane | I'm also using the project information other places too, so that it can be queried | 23:05 |
justinsb | I agree that this sounds completely reasonable, but let's wait and see what the guys that are working on auth say... | 23:05 |
Ryan_Lane | does this depend on auth? | 23:05 |
Ryan_Lane | I guess so... | 23:06 |
Ryan_Lane | can you give me ideas on how to work around this for now? | 23:07 |
Ryan_Lane | when a user goes to attach a volume, how can I see which volumes he should be allowed to attach? | 23:07 |
Ryan_Lane | it should really be any volume in the project, right? | 23:07 |
Ryan_Lane | not just ones he created | 23:07 |
justinsb | You're using the OpenStack or EC2 api? | 23:08 |
Ryan_Lane | ec2 | 23:08 |
*** gondoi has quit IRC | 23:08 | |
justinsb | Let me poke around the code and better understand what's going on | 23:08 |
Ryan_Lane | I guess I can add the attach dialog to the instances, and use the user's credentials based on the project of the instance | 23:09 |
justinsb | (Oh yeah, of course you're using EC2, because OS API doesn't have volumes!!) | 23:09 |
Ryan_Lane | then all returned volumes would be in that project, right? | 23:09 |
Ryan_Lane | what happens if the user is an admin? | 23:09 |
Ryan_Lane | would they see all volumes? | 23:09 |
justinsb | Looking at the code - but really I think how the UI behaves is up to you! | 23:10 |
*** jc_smith has quit IRC | 23:10 | |
justinsb | I do like the idea of per-project separation though | 23:10 |
*** jc_smith has joined #openstack | 23:11 | |
Ryan_Lane | well, the way the UI behaves can only work if it knows which project a volume belongs to | 23:11 |
Ryan_Lane | otherwise it'll show volumes attachable to instances in other projects | 23:11 |
Ryan_Lane | I guess this is due to the way I set the UI up though | 23:12 |
justinsb | You're parsing out the status field to get the owner? | 23:13 |
Ryan_Lane | for instances, no, since ownerId is the project | 23:13 |
Ryan_Lane | for addresses, yes | 23:13 |
justinsb | For volumes? | 23:13 |
Ryan_Lane | for volumes, I would, yes | 23:13 |
justinsb | The issue is that it only appears for admins | 23:13 |
nidO | JordanRinke: managed to fix my snafu too now \o/ | 23:13 |
Ryan_Lane | I'm doing the search as an admin | 23:13 |
Ryan_Lane | so that isn't an issue | 23:14 |
Ryan_Lane | the web interface does searches as an admin | 23:14 |
Ryan_Lane | it does user actions as the user | 23:14 |
*** ddumitriu has quit IRC | 23:14 | |
*** bcwaldon has joined #openstack | 23:15 | |
Ryan_Lane | my web interface is meant to be very open in what it displays | 23:15 |
justinsb | Errr... OK. Sounds a bit risky, but it's none of my business :-) | 23:15 |
Ryan_Lane | I know it's somewhat out of the ordinary :) | 23:15 |
*** ddumitriu has joined #openstack | 23:15 | |
justinsb | So good news... the volume entity has both a user_id and a project_id | 23:16 |
Ryan_Lane | we're using this to give volunteers the ability to modify our production clusters. we document our entire architecture in the open in a wiki. ;) | 23:16 |
Ryan_Lane | yep | 23:16 |
justinsb | The question is how to return it through the EC2 API... | 23:16 |
justinsb | Off to find the EC2 API spec! | 23:16 |
Ryan_Lane | I know how to return it :) | 23:16 |
Ryan_Lane | I just wanted to make sure the bug would be OK'd before I relied on it. | 23:17 |
Ryan_Lane | heh | 23:17 |
*** rnirmal has quit IRC | 23:17 | |
justinsb | Well, I think the status thing is a bit of a hack, and if we just added something that did it 'the right way' we wouldn't risk breaking anything | 23:17 |
Ryan_Lane | ahhh | 23:17 |
Ryan_Lane | yeah | 23:17 |
Ryan_Lane | that would be really nice | 23:17 |
*** spectorclan_ has joined #openstack | 23:18 | |
spectorclan_ | OpenStack Design Summit Registration is now live - http://www.openstack.org/blog/2011/03/openstack-conferencedesign-summit-registration-is-open/ | 23:19 |
justinsb | Oh great... We could return it in the tagSet attribute | 23:19 |
justinsb | But then I'll probably get in even more trouble | 23:19 |
Ryan_Lane | can you give me a link to the spec you are looking at? | 23:19 |
Ryan_Lane | finding amazon's documentation is a pain :) | 23:19 |
justinsb | http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ | 23:20 |
Ryan_Lane | thanks | 23:20 |
justinsb | Specifically http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ApiReference-ItemType-DescribeVolumesSetItemResponseType.html | 23:20 |
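Per the spec page justinsb links, the DescribeVolumes item type has no project field at all, which is why the channel ends up debating overloading `status` or `tagSet`. A rough sketch of the item's shape (values are made-up placeholders) and a hypothetical `project_id` tag lookup:

```python
# Approximate shape of an EC2 DescribeVolumes response item; the
# placeholder values are invented, and note the absence of any
# project field in the spec.
volume_item = {
    "volumeId": "vol-00000001",
    "size": "10",
    "snapshotId": "",
    "availabilityZone": "nova",
    "status": "available",
    "createTime": "2011-03-10T23:20:00.000Z",
    "attachmentSet": [],
    "tagSet": [],  # one option floated here: carry the project as a tag
}

def project_from_tags(item):
    # "project_id" is a hypothetical tag name, not anything nova emits.
    for tag in item.get("tagSet", []):
        if tag.get("key") == "project_id":
            return tag.get("value")
    return None

volume_item["tagSet"].append({"key": "project_id", "value": "myproject"})
```

As justinsb notes, stuffing system data into tags is the option most likely to upset EC2-compatibility purists.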
*** adjohn has quit IRC | 23:22 | |
justinsb | Is Jorge around? Anyone know his callsign? | 23:22 |
*** azneita has joined #openstack | 23:22 | |
*** azneita has joined #openstack | 23:22 | |
*** eikke has quit IRC | 23:23 | |
justinsb | Ryan_Lane: Hold on ... I have a hack around this one maybe... do you want to just get the volumes for a specific project? You don't actually want to know the project from the volume, do you? | 23:23 |
Ryan_Lane | I want to know the project from the volume | 23:24 |
Ryan_Lane | I could do the former though | 23:24 |
Ryan_Lane | but the latter would also be very helpful | 23:24 |
justinsb | We could do the former with a filter, without risking breaking anything | 23:24 |
Ryan_Lane | when I create resources, I also automatically create wiki pages with the information in templates. the templates have semantic information attached to them, so that queries can access the information | 23:25 |
Ryan_Lane | but for my immediate need, I could use filters | 23:25 |
justinsb | The latter will likely involve putting system data into the tags, which gives the purists a coronary | 23:25 |
*** ddumitriu has quit IRC | 23:25 | |
Ryan_Lane | heh | 23:25 |
Ryan_Lane | well, it would also be fine to change the status | 23:26 |
Ryan_Lane | I don't mind parsing | 23:26 |
Ryan_Lane | it's a hack, but it works | 23:26 |
justinsb | I don't know the impact of changing the status | 23:26 |
Ryan_Lane | yeah, me either | 23:26 |
justinsb | Whereas adding filtering is just an addition | 23:26 |
Ryan_Lane | let me email the list | 23:26 |
justinsb | And helps us with EC2 compatibility | 23:27 |
Ryan_Lane | yeah | 23:27 |
justinsb | I'll look at adding filtering | 23:27 |
Ryan_Lane | I'd love to have filters | 23:27 |
justinsb | And if you want to email the list, please do | 23:27 |
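The filtering justinsb offers to add would presumably follow the EC2 query-API convention of `Filter.N.Name` / `Filter.N.Value.M` parameters. A sketch of building such parameters; the `project-id` filter name is purely hypothetical, since nova had no such filter at the time:

```python
# Build EC2 query-API style filter parameters: each filter gets a
# numbered Name key and one or more numbered Value keys.
def build_filters(filters):
    params = {}
    for i, (name, values) in enumerate(filters, start=1):
        params["Filter.%d.Name" % i] = name
        for j, value in enumerate(values, start=1):
            params["Filter.%d.Value.%d" % (i, j)] = value
    return params

params = build_filters([("project-id", ["myproject"])])
```

Because filters only narrow an existing response, adding them is purely additive and risks breaking no existing client, which is the argument made above for preferring them over changing the status field.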
*** hadrian has quit IRC | 23:27 | |
*** matiu_ has joined #openstack | 23:29 | |
matiu_ | Hey guys, is there a c++ client for the current RS Cloud Servers API? | 23:29 |
matiu_ | I know there's that awesome ipad app .. I guess I should look inside there to see what he did | 23:30 |
*** MarcMorata has quit IRC | 23:30 | |
*** enigma has quit IRC | 23:31 | |
*** mray has quit IRC | 23:31 | |
uvirtbot | New bug: #732939 in nova "nova lacks cpu_arch field in instance_types" [Undecided,New] https://launchpad.net/bugs/732939 | 23:31 |
nidO | heya all, are there any known issues with uploading to swift using cyberduck? I can upload content fine using st, and cyberduck can display folder listings, download content, create files, and edit files + upload changes, but it's failing when trying to just straight upload files | 23:36 |
*** mRy has joined #openstack | 23:38 | |
*** mRy is now known as Guest68332 | 23:38 | |
*** cascone has quit IRC | 23:38 | |
*** Guest81466 has quit IRC | 23:42 | |
*** bcwaldon_ has joined #openstack | 23:43 | |
*** miclorb has quit IRC | 23:43 | |
*** spectorclan_ has quit IRC | 23:44 | |
*** joshuamckenty has joined #openstack | 23:44 | |
kpepple | nidO: don't you have to follow http://swift.openstack.org/howto_cyberduck.html | 23:45 |
creiht | kpepple: I think that might be a bit out of date now | 23:46 |
*** bcwaldon has quit IRC | 23:46 | |
kpepple | creiht: on the cyberduck side or the swift side ? | 23:46 |
creiht | nidO: I would start by looking at the proxy logs to see what error is being returned | 23:46 |
nidO | kpepple: everything in there that needs to be done is - The modifications of cyberduck's source aren't needed as of v4 as it has an option directly for connecting to swift | 23:46 |
*** bcwaldon_ has quit IRC | 23:46 | |
creiht | kpepple: on the swift side | 23:46 |
creiht | swift support is in the latest releases | 23:46 |
*** bcwaldon has joined #openstack | 23:47 | |
nidO | creiht: it's throwing out some random GETs that don't mean a great deal to me - trying to run a file upload from cyberduck results in http://pastebin.com/LsuiCti7 | 23:47 |
*** dendro-afk is now known as dendrobates | 23:48 | |
creiht | well, from the proxy, those are all good responses | 23:48 |
creiht | 204, 200, 200 | 23:48 |
*** Guest68332 has quit IRC | 23:48 | |
creiht | does cyberduck give any reasonable error? | 23:49 |
*** clauden has quit IRC | 23:49 | |
creiht | sorry, I don't have access to a windows or mac machine to try :/ | 23:49 |
nidO | but, the actual urls don't seem to make a great deal of sense - when I upload a fresh file via st, the proxy's log shows a HEAD request for the filename in question to see whether it exists, then a PUT for the filename to actually upload it | 23:50 |
nidO | cyberduck returns nothing useful, authenticates, gets a directory listing, then just fails with "transfer incomplete" | 23:50 |
creiht | hrm | 23:51 |
creiht | Off the top of my head, I'm not sure what to say :/ | 23:54 |
creiht | nidO: did you look at http://trac.cyberduck.ch/wiki/help/en/howto/openstack | 23:55 |
creiht | ? | 23:55 |
nidO | yep, nothing on there seems relevant though, i'm using devauth rather than swauth so there's no need to change the pathing, and the rest is just connection info to access rackspace/internap | 23:56 |
nidO | I'd assume this must be a fault at the cyberduck end then really, but the fact it's able to correctly upload changes when modifying an existing file, but not upload a new file, has thrown me a bit | 23:57 |
creiht | hrm | 23:58 |
creiht | that is odd | 23:58 |
creiht | nidO: maybe try this: http://trac.cyberduck.ch/wiki/help/en/faq#Enabledebuglogging | 23:59 |
creiht | and see if you get anything reasonable | 23:59 |
nidO | ah ta, now taking a look | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!