*** joearnold has joined #openstack | 00:15 | |
*** joearnold has quit IRC | 00:16 | |
*** rlucio has quit IRC | 00:28 | |
*** charlvn has quit IRC | 00:52 | |
*** dubsquared has joined #openstack | 01:11 | |
*** rlucio_ has joined #openstack | 01:14 | |
*** jfluhmann has quit IRC | 01:23 | |
dubsquared | random question, but does anyone have a location they are getting working images from? it seems the UEC images are not liking the most recent updates to the packages... | 01:27 |
anotherjesse | dubsquared: there are images on images.openstack.org | 01:29 |
anotherjesse | We use http://images.ansolabs.com/tty.tgz http://images.ansolabs.com/maverick.tgz internally | 01:31 |
dubsquared | images isn't loading for me? | 01:31 |
anotherjesse | I know someone at rackspace is working on a directory of images | 01:31 |
dubsquared | that is jordan and I :D | 01:31 |
anotherjesse | images.openstack.org is ftp | 01:31 |
dubsquared | the few places we were grabbing images from are all done, so just looking for more now…(and the UEC ones aren't working now :/) | 01:32 |
*** jfluhmann has joined #openstack | 01:36 | |
jt_zg | Can anyone help me out with a 503 error when adding users, please? | 01:37 |
*** rlucio_ has quit IRC | 01:38 | |
*** joearnold has joined #openstack | 01:43 | |
*** anotherjesse has quit IRC | 02:03 | |
*** joearnold has quit IRC | 02:05 | |
uvirtbot | New bug: #700867 in nova "Metadata is broken in a default install" [Undecided,New] https://launchpad.net/bugs/700867 | 02:06 |
*** miclorb has quit IRC | 02:36 | |
*** littleidea has joined #openstack | 02:55 | |
*** miclorb has joined #openstack | 03:13 | |
creiht | jt_zg: I can take a stab :) | 03:16 |
jt_zg | sounds good :D | 03:16 |
creiht | are you still around? | 03:16 |
jt_zg | What a nightmare! | 03:16 |
creiht | hah | 03:16 |
jt_zg | always, and forever | 03:16 |
creiht | sorry about that :/ | 03:16 |
creiht | I skimmed over what you had done before | 03:16 |
jt_zg | No worries. I knew it would be challenging. They don't pay me the big bucks for nothing...eventually | 03:16 |
jt_zg | Awesome, that puts you ahead of where I am! | 03:17 |
creiht | ok | 03:17 |
creiht | so you are on the auth server correct? | 03:17 |
jt_zg | yup | 03:17 |
creiht | is it on the same server as the proxy? | 03:17 |
jt_zg | I have all 7 terminals open actually | 03:17 |
jt_zg | no | 03:17 |
creiht | ahh | 03:17 |
creiht | ok | 03:17 |
jt_zg | 1 proxy, 1 auth, 5 storage | 03:17 |
creiht | give me a sec, I think I know what your problem is | 03:18 |
jt_zg | Absolutely | 03:18 |
creiht | jt_zg: in your proxy server config | 03:18 |
creiht | under the [auth] section | 03:19 |
creiht | add a line that says | 03:19 |
creiht | ip = AUTH_IP | 03:19 |
creiht | by default it assumes 127.0.0.1 unless you set it | 03:19 |
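A minimal sketch of the change creiht describes, assuming a hypothetical auth server at 10.0.0.2; the section name follows his wording and may appear as [filter:auth] in paste-deploy style proxy configs of the same era:

    # /etc/swift/proxy-server.conf (fragment)
    [auth]
    # hypothetical address of the separate auth server; the proxy
    # defaults to 127.0.0.1 if this line is omitted
    ip = 10.0.0.2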
jt_zg | On it! | 03:19 |
creiht | I need to add that to the multiserver docs as you are the second one to have this issue | 03:20 |
jt_zg | wow | 03:20 |
jt_zg | You sir... | 03:20 |
jt_zg | complete me | 03:20 |
jt_zg | Thanks! | 03:20 |
creiht | haha | 03:20 |
jt_zg | I restarted the proxy server, ran my adduser again and automagic | 03:20 |
creiht | np | 03:20 |
creiht | awesome | 03:20 |
jt_zg | Now my next question...what do I do with these 7 servers? :P Practically I mean | 03:21 |
creiht | lol | 03:21 |
jt_zg | I know what my company wants to do with them | 03:21 |
creiht | Well there are a couple of things off the bat | 03:21 |
jt_zg | eventually, but I want to do some real world testing before I demonstrate | 03:21 |
creiht | there is the st command line that you can do to experiment putting things in and out of the system | 03:21 |
jt_zg | stupid question.. | 03:22 |
creiht | there is the swift-bench command that can be used to do some benchmarking | 03:22 |
jt_zg | can I sftp into one of the servers and drop a random doc and monitor the replication? | 03:22 |
jt_zg | That would seal the deal with my boss, "Can I copy my spreadsheets to it and see it work?" | 03:23 |
creiht | hehe | 03:23 |
creiht | jt_zg: what os does your boss use? | 03:23 |
jt_zg | ubuntu | 03:23 |
jt_zg | bless his soul | 03:23 |
creiht | hehe | 03:23 |
creiht | hrm | 03:23 |
creiht | bummer | 03:24 |
jt_zg | Did that example make sense, relative to what Openstack does? | 03:24 |
creiht | :) | 03:24 |
creiht | definitely | 03:24 |
jt_zg | I want to avoid working with api apps for awhile | 03:24 |
creiht | there is a nice gui called cyberduck | 03:24 |
creiht | ahh | 03:24 |
creiht | I see | 03:24 |
creiht | so the st command is standalone, you can copy that anywhere that has python installed | 03:24 |
creiht | so you could upload a spreadsheet using that | 03:25 |
creiht | list the container to see it there | 03:25 |
creiht | then download it | 03:25 |
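A hedged sketch of that round trip with the standalone st client, assuming the devauth defaults from the multi-server guide (auth listening on port 11000) and made-up account, user, key, container, and file names:

    # hypothetical values; adjust the auth URL, account:user and key to your install
    st -A https://AUTH_HOST:11000/v1.0 -U myacct:myuser -K mykey upload spreadsheets budget.ods
    st -A https://AUTH_HOST:11000/v1.0 -U myacct:myuser -K mykey list spreadsheets
    st -A https://AUTH_HOST:11000/v1.0 -U myacct:myuser -K mykey download spreadsheets budget.ods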
jt_zg | that'll work | 03:25 |
creiht | to show replication is a little more difficult | 03:25 |
jt_zg | Were you going to suggest something, re: Windows? | 03:25 |
creiht | for windows/os x there is cyberduck | 03:25 |
creiht | and it works with swift | 03:25 |
creiht | so if you want to see replication work | 03:26 |
jt_zg | I'll see if I can't get it to run in Wine/Ubuntu | 03:26 |
creiht | actually the easiest is to upload the file | 03:26 |
creiht | use swift-get-nodes to find the nodes where it is located | 03:26 |
creiht | (and the dir on that node where it is located) | 03:26 |
jt_zg | very cool | 03:26 |
creiht | go to one of the nodes, and delete it | 03:26 |
creiht | then wait a bit, and it should show back up | 03:27 |
creiht | oh wait that wont exactly work like that :/ | 03:27 |
creiht | forgot we changed that a little | 03:27 |
creiht | so new idea | 03:27 |
creiht | before you upload a file, know what path you want to upload it to | 03:27 |
jt_zg | *nod* | 03:28 |
creiht | say you are going to upload to MY_ACCT/container1/myfile.txt | 03:28 |
creiht | then you could use get-nodes like: | 03:28 |
creiht | swift-get-nodes /etc/swift/object.ring.gz MY_ACCT container1 myfile.txt | 03:28 |
creiht | that will show what nodes it will be on when it uploads | 03:28 |
creiht | go to one of the nodes and stop services | 03:28 |
creiht | then upload the file | 03:29 |
creiht | (this will also show that it can still upload the file even though the node is down) | 03:29 |
creiht | after the upload, start the services back up on that node | 03:29 |
creiht | and wait a bit | 03:29 |
jt_zg | and automagic? | 03:29 |
creiht | the data should get replicated to that node | 03:29 |
creiht | yup | 03:29 |
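Putting creiht's steps together, a rough sketch of the replication demo; the account, container, and object names are made up, and swift-init is assumed to be the tool managing the services on the storage node:

    # 1. find which object nodes (and on-disk paths) the object will land on
    swift-get-nodes /etc/swift/object.ring.gz MY_ACCT container1 myfile.txt

    # 2. on ONE of the listed storage nodes, stop its services
    swift-init all stop    # assumption: swift-init controls the node's swift services

    # 3. from a client machine, upload the file (see the st example above);
    #    the upload still succeeds even with that node down

    # 4. bring the node back up and wait; replication pushes the object to it
    swift-init all start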
jt_zg | very cool, thanks! | 03:29 |
creiht | replication is pushed based | 03:29 |
jt_zg | Is there a specific schedule? | 03:30 |
jt_zg | or mechanism you can tweak | 03:30 |
creiht | so the other two nodes that have the file will be trying to rsync to the node that was down | 03:30 |
creiht | it runs continually | 03:30 |
creiht | in the background | 03:30 |
creiht | there are some things that you can use to control how much incoming data a node will accept through the max_clients setting in the rsyncd.conf file | 03:31 |
jt_zg | gotcha, I'm going to go looking for that bugger. So, I'm curious...why 5 storage nodes in the article if only 3 are used? Is that to prove the durability of the group in a failure scenario? | 03:32 |
creiht | If replication is running too much, there is also a config to cause it to delay an amount of time between runs | 03:32 |
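A sketch of the two knobs creiht mentions, with hedging on the exact names: the incoming-transfer limit lives in rsyncd.conf (stock rsync calls it "max connections"; creiht refers to it as max_clients), and the pause between replication passes is a replicator setting in the object server config (run_pause in the deployment guide of this era). Values are illustrative:

    # /etc/rsyncd.conf (fragment) -- cap how many incoming transfers a node accepts
    [object]
    max connections = 2        # illustrative value

    # /etc/swift/object-server.conf (fragment) -- delay between replication passes
    [object-replicator]
    run_pause = 30             # seconds to wait between runs; illustrative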
jt_zg | Ah, that's what I was hoping to hear! :) | 03:32 |
creiht | 5 is our recommended minimum to best handle failure scenarios | 03:32 |
jt_zg | We're hoping to have 1-2 hosts per data-centre, but replicating across 3 DCs for a max of 3-6 hosts. Reason I ask | 03:33 |
creiht | hrm | 03:33 |
creiht | what type of connectivity between datacenters? | 03:33 |
creiht | and how much usage would it see? | 03:33 |
creiht | we haven't done a lot of tuning yet to handle multi-region setups | 03:33 |
jt_zg | 10Gb links to and between | 03:33 |
creiht | but that is one of the next items on the roadmaps | 03:33 |
creiht | oh nice | 03:33 |
jt_zg | not sure of usage | 03:34 |
jt_zg | We're fielding inquiries that are putting us at ~350Tb/month per client, across ~100 servers | 03:34 |
creiht | Well when you get to the point of planning it out, make sure we talk :) | 03:34 |
jt_zg | Absolutely! | 03:34 |
jt_zg | That's what I'm working on now. Finding a viable solution to replicate across DC's and span within and to different DCs | 03:35 |
creiht | if you have 10g, it is feasible | 03:35 |
jt_zg | We're starting with 10g links | 03:35 |
jt_zg | We're probably looking at several once we ramp up | 03:35 |
creiht | there will be a little bit of latency | 03:36 |
jt_zg | Fair enough | 03:36 |
jeremyb | speaking of replication and failure: i was wondering what happens when the node comes back up... does the eventual consistency window come into play here? how long are tombstones kept for? | 03:36 |
jt_zg | If we have a client that can't tolerate some form of latency when a DC gets nuked(or a host dies...the more likely scenario, I suppose) then we probably don't want them :P | 03:36 |
creiht | jeremyb: when the node comes back up, any data that it hadn't gotten yet, will be pushed to it by the other servers who have that data | 03:37 |
creiht | And yeah there can be a bit of an eventual consistency window there | 03:37 |
creiht | Usually it isn't that long though | 03:37 |
creiht | by default tombstones are kept for 7 days (but it is tunable as a config) | 03:38 |
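The 7-day tombstone window corresponds to a reclaim age setting in the object server config; a sketch, assuming the reclaim_age option from the deployment guide (value in seconds):

    # /etc/swift/object-server.conf (fragment)
    [object-replicator]
    reclaim_age = 604800   # 7 days (the default); tombstones older than this are reclaimed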
jeremyb | creiht: even if it's been a week? (granted i know at that point it should have been removed from the ring) | 03:38 |
jeremyb | hrmm | 03:38 |
creiht | if it is going to be out that long, then yeah, it should be taken out of the ring | 03:38 |
creiht | a good rule of thumb is if a server is going to be down more than a day, take it out of the ring, unless you have a really good reason not to | 03:39 |
jeremyb | does a node get up to speed first before going back into service? | 03:39 |
creiht | not really | 03:39 |
jeremyb | is that even possible? | 03:39 |
jt_zg | creiht, are all these configs you mentioned documented in the swift documentation? | 03:39 |
creiht | as soon as a device has the services started, it will answer all the requests | 03:40 |
jeremyb | replication is rsync so you could block http? | 03:40 |
creiht | jt_zg: http://swift.openstack.org/deployment_guide.html | 03:40 |
creiht | jeremyb: yeah you could leave the services down while rsync is going on | 03:40 |
jt_zg | creiht, my mistake. Sorry about that, and thanks! | 03:40 |
creiht | well I take that back | 03:40 |
creiht | jt_zg: np | 03:40 |
creiht | replication now has to make an HTTP request first to see if things need to be replicated | 03:41 |
creiht | We've been through several iterations of replication now, and sometimes I forget where we are at :) | 03:41 |
creiht | Actually the above goes for all the services | 03:42 |
creiht | :) | 03:42 |
*** miclorb has quit IRC | 03:42 | |
jeremyb | so if all your services and rsync nodes are on a subnet and all your clients are not on that subnet, block by subnet? | 03:42 |
jeremyb | or middleware maybe? | 03:42 |
jeremyb | seems like a thing many would want and could easily be switched on/off | 03:43 |
creiht | possibly | 03:43 |
creiht | yeah, we have talked about the possibility of something like that a bit | 03:43 |
*** mdomsch has joined #openstack | 03:43 | |
creiht | but it hasn't come up yet that it has really been needed | 03:43 |
creiht | We've discussed several possibilities for how we could optimize the "bootstrapping" | 03:44 |
jeremyb | you mean a brand new node? | 03:44 |
creiht | if there is a sufficient amount of new data to be pushed, then it is about the same | 03:45 |
creiht | or are you more worried about sending out stale data? | 03:45 |
jeremyb | i guess because 404's aren't authoritative it's not a big deal | 03:45 |
creiht | oh | 03:45 |
creiht | well if the proxy gets a 404 from a storage node, it will try the other storage nodes as well | 03:45 |
jeremyb | (i hadn't thought about the authoritativeness...) | 03:46 |
jeremyb | i thought it tries all 3 concurrently? | 03:46 |
creiht | right now it tries them in succession | 03:46 |
jeremyb | huh | 03:46 |
creiht | we've also talked about making that concurrent :) | 03:46 |
jeremyb | i thought there was something about fastest respondent | 03:46 |
creiht | for a GET/HEAD it will return the first reasonable response that it gets | 03:47 |
creiht | and the timeouts are very aggressive | 03:47 |
creiht | so if one machine is hanging, it will not wait around too long for it | 03:47 |
*** miclorb has joined #openstack | 03:49 | |
uvirtbot | New bug: #700893 in nova "volume ids are messed up in ec2_api" [High,In progress] https://launchpad.net/bugs/700893 | 03:51 |
*** Glaurung has joined #openstack | 04:00 | |
uvirtbot | New bug: #700894 in swift "Add note to multi server docs to set the ip in proxy config if on a different server" [Low,Confirmed] https://launchpad.net/bugs/700894 | 04:01 |
*** jimbaker has joined #openstack | 04:05 | |
notmyname | creiht: thanks for helping with the auth problem. jt_zg, glad it got worked out | 04:06 |
jt_zg | notmyname, I appreciate your help! I'm pretty excited about this. Thanks again creiht | 04:07 |
* jeremyb wonders if any of these people are here: http://www.meetup.com/OpenStack-New-York-Meetup/calendar/15634525/ | 04:09 | |
jt_zg | What I would do for an Ottawa, Canada group! | 04:10 |
jeremyb | also, is it mostly a swift or nova or mixed group? sounds like both will be covered? | 04:11 |
*** littleidea has quit IRC | 04:11 | |
*** littleidea has joined #openstack | 04:11 | |
*** littleidea has quit IRC | 04:12 | |
*** littleidea has joined #openstack | 04:14 | |
*** jimbaker has quit IRC | 04:30 | |
*** anotherjesse has joined #openstack | 04:30 | |
*** joearnold has joined #openstack | 04:30 | |
*** miclorb has quit IRC | 04:32 | |
*** leted has joined #openstack | 04:34 | |
*** kashyapc has joined #openstack | 04:35 | |
*** mdomsch has quit IRC | 05:04 | |
*** dubsquared has quit IRC | 05:24 | |
*** f4m8_ is now known as f4m8 | 05:50 | |
*** damon__ has quit IRC | 05:56 | |
*** EdwinGrubbs has quit IRC | 06:11 | |
vishy | soren: are you awake yet? | 06:19 |
*** EdwinGrubbs has joined #openstack | 06:22 | |
*** ramkrsna has joined #openstack | 06:40 | |
*** leted has quit IRC | 06:42 | |
*** trin_cz has quit IRC | 06:51 | |
*** leted has joined #openstack | 06:53 | |
*** guigui1 has joined #openstack | 06:55 | |
soren | vishy: Yes. | 07:09 |
soren | vishy: What's up? | 07:09 |
soren | vishy: Other than me, of course. | 07:09 |
vishy | soren: so packaging dependencies, nova-volume depends on iscsitarget, but the service isn't enabled by default. What is the best way in the packaging to enable the service | 07:09 |
anotherjesse | vishy: does nova-volume fail if iscsitarget isn't enabled? | 07:10 |
anotherjesse | in a nice way? | 07:10 |
vishy | soren: also we need to add a udev rule | 07:10 |
vishy | anotherjesse: nope it goes boom when you try to create a volume | 07:11 |
*** ibarrera has joined #openstack | 07:11 | |
anotherjesse | vishy: it seems that then we should add the test that iscsitarget is enabled (if you are using that driver) | 07:11 |
soren | vishy: What's the udev rule? | 07:12 |
anotherjesse | that way even if /etc/default/iscsitarget is enabled, if it isn't running we catch that | 07:12 |
vishy | anotherjesse: sure | 07:12 |
soren | vishy: The best way to fix it is to fix the iscsitarget package. Is there any particular reason to leave it disabled? | 07:13 |
soren | vishy: It doesn't expose anything by default anyway. | 07:13 |
vishy | soren: i don't know that is just the default | 07:13 |
vishy | soren: we also need to disable the default dnsmasq that runs | 07:14 |
vishy | soren: udev rule is the one in tools/iscsidev.sh | 07:15 |
uvirtbot | New bug: #700920 in nova "ec2 describe_instances doesn't filter properly" [Medium,In progress] https://launchpad.net/bugs/700920 | 07:16 |
vishy | soren: so we should stick a fixed iscsitarget package in our ppa? | 07:18 |
anotherjesse | soren: should we ask the package manager of iscsitarget why the default is disabled? | 07:18 |
soren | anotherjesse: There isn't really a "package manager". | 07:18 |
soren | anotherjesse: In Ubuntu, that is. | 07:18 |
soren | anotherjesse: Everyone takes care of everything. | 07:18 |
soren | So I can just fix it. | 07:18 |
anotherjesse | hmm, well, we aren't experts at iscsitarget ... so while we always enable it I wonder why it was disabled | 07:19 |
soren | vishy: The fix seems to belong in iscsitarget, so that seems like a good start. Then we can get that change into Ubuntu proper afterwards. | 07:19 |
soren | anotherjesse: Some people just tend to do that for their packages. I hate that. | 07:19 |
anotherjesse | soren: what about libvirt having a bridge by default? | 07:20 |
soren | wrt to dnsmasq, it should be a matter of installing dnsmasq-base rather than dnsmasq. | 07:20 |
anotherjesse | that is enabled - which adds a dnsmasq | 07:20 |
vishy | soren: ok, what about the udev rule | 07:20 |
soren | vishy: Not sure yet. | 07:20 |
soren | vishy: What exactly does it do? | 07:20 |
soren | vishy: It provides a friendlier name for iscsi shares? | 07:21 |
vishy | soren: a consistent name | 07:21 |
soren | Ok. | 07:22 |
vishy | soren: without it if you rediscover the volumes (from a reboot or whatever) the names could be different | 07:22 |
soren | vishy: Ok, so it provides a consistent name based on target hostname and share name? (sorry if I get the terminology wrong. It's been a while since I've messed with iSCSI) | 07:23 |
soren | vishy: That very much sounds like something openiscsi should be doing on its own anyway, so I'd shove that fix into open-iscsi. | 07:24 |
vishy | it actually is just based on targetname | 07:24 |
soren | How likely are they to collide? | 07:25 |
soren | I forget what they look like. | 07:25 |
vishy | they are unique | 07:25 |
vishy | based on vol-id | 07:25 |
soren | Well... | 07:25 |
soren | Yes, so the ones created by Nova are unique. | 07:25 |
vishy | right | 07:25 |
soren | I'm thinking more generally. | 07:25 |
soren | Since, if I want to stick the fix in the open-iscsi package, I need to think more broadly. | 07:25 |
vishy | quite a few things will break if you have non-nova iscsi | 07:26 |
soren | Things in Nova, I suspect? | 07:26 |
vishy | this is the default path /sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/session*/targetname | 07:27 |
vishy | if we did /dev/iscsi/host/target we might still get conflicts from multiple sessions? | 07:28 |
vishy | that statement wasn't totally correct, but you get the idea | 07:28 |
soren | What does a targetname look like? | 07:29 |
soren | As an example? | 07:29 |
*** joearnold has quit IRC | 07:31 | |
vishy | in our case it is just the volume id | 07:31 |
vishy | vol-xxxxxx | 07:31 |
soren | vishy: We could provide a /dev/iscsi/host/session/target symlink. Nova could make the assumption that there won't be collisions in different sessions on same host, and just do glob('/dev/iscsi/*/*/<vol_id>') | 07:32 |
soren | How does that sound? | 07:33 |
soren | To me, it looks like the same sort of risks, only contained in Nova. | 07:33 |
vishy | soren: although if we are globbing, we may just be able to do it without the udev rule | 07:34 |
soren | vishy: True. | 07:34 |
* soren takes a quick break | 07:37 | |
* ttx waves | 07:38 | |
*** littleidea has quit IRC | 07:39 | |
vishy | soren: we should probably just change it to glob, it will be easier to install | 07:49 |
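A minimal sketch of the glob lookup soren suggests, assuming the /dev/iscsi/<host>/<session>/<target> symlink layout; the variant vishy settles on would glob the sysfs targetname paths instead and skip the udev rule, but the shape is the same:

    import glob
    import os

    def device_path_for_volume(volume_id):
        """Find the block device for an iSCSI target named after a volume.

        Assumes symlinks laid out as /dev/iscsi/<host>/<session>/<target>,
        as proposed in the discussion; volume_id is the targetname (vol-xxxxxx).
        """
        matches = glob.glob('/dev/iscsi/*/*/%s' % volume_id)
        if not matches:
            raise RuntimeError('no iSCSI device found for %s' % volume_id)
        # Nova's assumption: target names are unique, so the first hit is the one
        return os.path.realpath(matches[0])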
*** brd_from_italy has joined #openstack | 07:54 | |
ttx | mtaylor: re: "bzr bd -S --builder='debuild -S -sa'", you can actually do "bzr bd -S -- -sa" | 07:57 |
mtaylor | ttx: oh yeah? that must be reasonably new... | 07:58 |
mtaylor | ttx: but that makes me happy | 07:58 |
* ttx uses bzr bd -S -- -k$DEBKEY all the time in sponsoring | 07:58 | |
mtaylor | sweet | 07:58 |
soren | vishy: Can you access that information as non-root? | 08:00 |
soren | vishy: (/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/session*/targetname that is) | 08:01 |
* ttx finished IRC backlog, now dives into the email pile | 08:05 | |
vishy | soren: not sure we need to test this, there may be one we can glob somewhere | 08:07 |
anotherjesse | soren: about to write an email about moving to migrations for schema changes... changing the schema is causing issues with deployed nova(s). | 08:08 |
anotherjesse | plus it gives a place to add some indexes | 08:09 |
anotherjesse | any thoughts? | 08:09 |
ttx | anotherjesse: Daviey wanted to work on that: https://blueprints.launchpad.net/nova/+spec/db-versioning-and-migration | 08:10 |
anotherjesse | ttx: do we have an eta on that? | 08:10 |
ttx | anotherjesse: last time I talked to him, cactus | 08:11 |
anotherjesse | ttx: I think that is way too late | 08:11 |
ttx | anotherjesse: I agree it's starting to become painful | 08:11 |
anotherjesse | ttx: since if we are saying that people should deploy bexar, we are already running into issues with the changes to ids | 08:11 |
anotherjesse | ttx: I know termie is interested in implementing it | 08:11 |
anotherjesse | ttx: perhaps if they talk together it can get done soon? | 08:12 |
anotherjesse | ttx: is Daviey working on other stuff? | 08:12 |
anotherjesse | ttx: the name of BexarDBMigrations is a little misleading too then ;) | 08:12 |
ttx | anotherjesse: sounds like a good idea. I wouldn't be opposed to the idea of landing enough of it in bexar so that we can have migrations working in early cactus | 08:12 |
ttx | (if you see what I mean) | 08:13 |
ttx | anotherjesse: I do not control daviey's assignments anymore. We'll have to ask him | 08:13 |
ttx | he certainly was interested in seeing that implemented | 08:13 |
*** calavera has joined #openstack | 08:13 | |
ttx | but I don't think he would mind collaborating on impl | 08:13 |
anotherjesse | ttx: cool - we can do migrate up for bexar | 08:13 |
anotherjesse | and then add down for cactus | 08:14 |
ttx | right, people deploying Bexar should be able to migrate up to early cactus | 08:14 |
ttx | even if that just means adding version number 0 somewhere | 08:14 |
anotherjesse | yeah - schema table with a value of 0 | 08:15 |
ttx | we have to start somewhere anyway | 08:15 |
anotherjesse | it might help with the issue of workers fighting over auto-creation of tables when you launch them all at the same time | 08:15 |
anotherjesse | I think there has been some work on that but I didn't read that patch :( | 08:15 |
ttx | Daviey will be on central time this week, that might help in getting higher bandwidth with termie | 08:16 |
ttx | cool, newlog2 was merged | 08:19 |
*** rcc has joined #openstack | 08:23 | |
*** adiantum has joined #openstack | 08:41 | |
*** adiantum has quit IRC | 08:47 | |
ttx | soren: Didn't we fix https://bugs.launchpad.net/bugs/700867 already ? | 08:53 |
uvirtbot | Launchpad bug 700867 in nova "Metadata is broken in a default install" [Medium,In progress] | 08:53 |
*** leted has quit IRC | 08:58 | |
soren | ttx: I thought you did. | 09:02 |
anotherjesse | ttx: if only we had a cluster that we could test against :( | 09:02 |
anotherjesse | in our tests it didn't work | 09:02 |
* ttx searches | 09:03 | |
ttx | https://bugs.launchpad.net/nova/+bug/683541 | 09:03 |
uvirtbot | Launchpad bug 683541 in nova "Metadata service unreachable from instance" [Medium,Fix committed] | 09:03 |
ttx | soren: In fact, you fixed it: https://code.launchpad.net/~soren/nova/lp683541 | 09:03 |
soren | Oh, I thought this was the one about the metadata response being a traceback due to missing stuff. | 09:04 |
ttx | no, it's the 127.0.0.1 unrouteable | 09:04 |
ttx | wtf | 09:05 |
soren | Yeah, /me should read things more carefully | 09:05 |
ttx | hm, vish's branch looks a bit outdated | 09:06 |
ttx | or... | 09:07 |
ttx | aaaaah | 09:07 |
ttx | the newlog2 branch reintroduced the bug | 09:10 |
ttx | http://bazaar.launchpad.net/~vishvananda/nova/lp700867/revision/515.4.1#nova/flags.py | 09:11 |
* ttx checks if nothing else was accidentally undone in that branch | 09:13 | |
*** fabiand_ has joined #openstack | 09:13 | |
ttx | no, the rest looks ok | 09:17 |
ttx | nova-core: priority reviews: | 09:21 |
ttx | https://code.launchpad.net/~nttdata/nova/live-migration/+merge/44940 | 09:21 |
ttx | https://code.launchpad.net/~ntt-pf-lab/nova/ipv6-support/+merge/45228 | 09:22 |
*** zykes- has joined #openstack | 09:22 | |
*** skrusty has quit IRC | 09:29 | |
*** skrusty has joined #openstack | 09:41 | |
*** reldan has joined #openstack | 09:48 | |
vishy | ttx: it probably was done because of a circular import problem | 09:48 |
ttx | vishy: oh! | 09:48 |
vishy | ttx: adding import utils to flags goes boom | 09:48 |
ttx | vishy: well, your branch works around that well | 09:49 |
vishy | ttx: by the way trunk was completely hosed, we've been trying to fix all of the bugs | 09:49 |
*** anotherjesse has quit IRC | 09:50 | |
ttx | vishy: I saw that... I'll run a few tests today to see where we stand | 09:50 |
vishy | there are a few we haven't proposed yet | 09:51 |
vishy | trunk_safe is us trying to fix all of them | 09:51 |
ttx | vishy: did you report them as bugs yet ? | 09:51 |
ttx | or are busy fixing them all as you go ? | 09:52 |
vishy | fixing as we go...there are a couple we haven't reported yet | 09:52 |
ttx | vishy: ok, I'll keep that in mind in my testing | 09:52 |
ttx | vishy/soren: we need to review the japanese branches early this week, since the TZ difference will trigger longer fixes | 09:53 |
ttx | It will be easy to rush the Ozone or Anso branches through at the end... but the Japanese ones will take longer to get fixes in, due to lack of -core devs in that TZ | 09:54 |
vishy | ttx: sure, we're trying to get a stable trunk so testing the branches actually means something | 09:55 |
ttx | vishy: right :) | 09:55 |
ttx | vishy: isn't it Sunday night where you live ? | 09:57 |
vishy | ttx: yes | 09:59 |
*** reldan has quit IRC | 09:59 | |
*** anotherjesse has joined #openstack | 10:02 | |
*** reldan has joined #openstack | 10:03 | |
*** arthurc has joined #openstack | 10:06 | |
*** reldan has quit IRC | 10:10 | |
*** aimon has quit IRC | 10:12 | |
*** aimon has joined #openstack | 10:12 | |
*** reldan has joined #openstack | 10:13 | |
*** trin_cz has joined #openstack | 10:15 | |
*** aimon_ has joined #openstack | 10:15 | |
*** aimon has quit IRC | 10:18 | |
*** aimon_ is now known as aimon | 10:18 | |
*** anotherjesse has quit IRC | 10:22 | |
*** anotherjesse has joined #openstack | 10:24 | |
*** reldan has quit IRC | 10:31 | |
*** allsystemsarego has joined #openstack | 10:38 | |
*** aimon has quit IRC | 10:38 | |
soren | Odd. I feel somewhat reluctant to approve https://code.launchpad.net/~soren/nova/iptables-security-groups/+merge/43767, because it fixes some bugs that I think some people may be depending on. | 10:40 |
soren | Meh. It's still early. We can fix whatever comes up. | 10:42 |
*** reldan has joined #openstack | 10:43 | |
anotherjesse | soren: ++ | 10:44 |
*** reldan has quit IRC | 10:44 | |
*** miclorb has joined #openstack | 10:46 | |
*** anotherjesse has quit IRC | 10:49 | |
openstackhudson | Project nova build #373: SUCCESS in 1 min 20 sec: http://hudson.openstack.org/job/nova/373/ | 10:49 |
soren | \o/ | 10:49 |
*** MarkAtwood has quit IRC | 10:54 | |
uvirtbot | New bug: #700974 in nova "_describe_availability_zones_verbose calls db.service_get_all and db.get_time. Neither exist." [Undecided,New] https://launchpad.net/bugs/700974 | 10:56 |
*** dizz has joined #openstack | 11:16 | |
*** befreax has joined #openstack | 11:24 | |
*** dizz has quit IRC | 11:24 | |
*** aimon has joined #openstack | 11:27 | |
*** aimon has quit IRC | 11:37 | |
*** miclorb has quit IRC | 11:48 | |
*** befreax has quit IRC | 11:54 | |
*** dizz has joined #openstack | 12:07 | |
*** aimon has joined #openstack | 12:12 | |
soren | Man, it's taking a long time to review this stuff. | 12:14 |
openstackhudson | Project nova build #374: SUCCESS in 1 min 19 sec: http://hudson.openstack.org/job/nova/374/ | 12:19 |
soren | Ooh, what landed now? | 12:21 |
soren | Ah. Bug fixes. Boring :) | 12:21 |
*** trin_cz has quit IRC | 12:24 | |
*** aimon has quit IRC | 12:30 | |
*** hggdh has quit IRC | 12:34 | |
*** ctennis has quit IRC | 12:36 | |
*** aimon has joined #openstack | 12:42 | |
soren | Phew. | 12:42 |
*** littleidea has joined #openstack | 12:44 | |
*** dizz is now known as dizz|away | 12:44 | |
*** ctennis has joined #openstack | 12:55 | |
*** ctennis has joined #openstack | 12:55 | |
*** kashyapc has quit IRC | 13:06 | |
*** ramkrsna has quit IRC | 13:15 | |
*** trin_cz has joined #openstack | 13:15 | |
*** gaveen has joined #openstack | 13:17 | |
*** jfluhmann__ has joined #openstack | 13:20 | |
*** jfluhmann has quit IRC | 13:24 | |
*** hadrian has joined #openstack | 13:36 | |
*** adiantum has joined #openstack | 13:42 | |
*** jfluhmann__ has quit IRC | 13:46 | |
*** cron0 has joined #openstack | 13:47 | |
*** adiantum has quit IRC | 13:56 | |
*** westmaas has joined #openstack | 13:57 | |
*** cron0 has left #openstack | 13:58 | |
ttx | soren: you might want to unfuck trunk by approving https://code.launchpad.net/~vishvananda/nova/lp699814/+merge/45627 | 13:59 |
* ttx kinda dislikes the new default logging format, makes our logs almost look like Eucalyptus ones. | 14:01 | |
soren | ttx: Approved. | 14:04 |
*** westmaas has quit IRC | 14:05 | |
*** westmaas has joined #openstack | 14:08 | |
ttx | soren: with current trunk I get an instance numbered "i-1" | 14:08 |
ttx | not sure that's by design | 14:09 |
*** nelson__ has quit IRC | 14:09 | |
*** nelson__ has joined #openstack | 14:10 | |
openstackhudson | Project nova build #375: SUCCESS in 1 min 22 sec: http://hudson.openstack.org/job/nova/375/ | 14:14 |
*** ppetraki has joined #openstack | 14:14 | |
*** befreax has joined #openstack | 14:18 | |
*** befreax has quit IRC | 14:22 | |
*** gondoi has joined #openstack | 14:26 | |
*** westmaas has quit IRC | 14:30 | |
*** deadestchicken_ has joined #openstack | 14:33 | |
*** mdomsch has joined #openstack | 14:36 | |
*** zul has joined #openstack | 14:39 | |
ttx | zul: o/ | 14:44 |
zul | hey ttx | 14:44 |
zul | you are missing the sprint/rally | 14:44 |
ttx | zul: how is snowy Dallas ? | 14:44 |
zul | ttx: its armageddon according to the local tv news | 14:45 |
ttx | zul: if only it could make Dallas fun | 14:45 |
zul | ttx: yeah people are apparently freaking out with the driving...which is fun to watch | 14:46 |
*** nelson__ has quit IRC | 14:46 | |
*** f4m8 is now known as f4m8_ | 14:47 | |
*** tteikhua has joined #openstack | 14:47 | |
openstackhudson | Project nova build #376: SUCCESS in 1 min 19 sec: http://hudson.openstack.org/job/nova/376/ | 14:49 |
*** troytoman has joined #openstack | 14:51 | |
*** dendro-afk is now known as dendrobates | 14:53 | |
*** troytoman has quit IRC | 14:53 | |
*** rnirmal has joined #openstack | 15:00 | |
*** gaveen has quit IRC | 15:06 | |
*** sparkycollier has joined #openstack | 15:09 | |
jaypipes | zul: there is a sprint in Dallas? | 15:13 |
zul | jaypipes: yep | 15:13 |
*** kashyapc has joined #openstack | 15:14 | |
jaypipes | zul: /me feels left out :( | 15:15 |
annegentle | zul: snowmageddon! send some down to Austin, it'll be cold enough tomorrow | 15:15 |
* soren calls it a day... at least until I get bored this evening and come back | 15:15 | |
jaypipes | heh | 15:15 |
zul | jaypipes: heh come to work for canonical then ;) | 15:16 |
*** kashyapc has quit IRC | 15:16 | |
*** kashyapc has joined #openstack | 15:16 | |
jaypipes | zul: well, we have enough snow up here in C-bus ;) | 15:16 |
zul | annegentle: this is nothing compared to what im use to | 15:17 |
annegentle | zul: oh yeah, where's your winter experience from? I moved to Texas from Ohio, but it was southern Ohio. Much more snow when I was a kid in northern Indiana. :) | 15:20 |
zul | annegentle: im a canadian...2 seasons winter and construction :) | 15:20 |
annegentle | zul: you win :) | 15:21 |
uvirtbot | New bug: #701055 in nova ""No instance for id X" error terminating instances, borks nova-compute" [Undecided,New] https://launchpad.net/bugs/701055 | 15:22 |
zul | annegentle: i had to drop my son off at daycare, we walked there it was -30C with the windchill ;) | 15:22 |
*** hazmat has joined #openstack | 15:25 | |
*** johnpur has joined #openstack | 15:26 | |
*** ChanServ sets mode: +v johnpur | 15:26 | |
*** zul has quit IRC | 15:36 | |
*** tteikhua has quit IRC | 15:47 | |
*** zul has joined #openstack | 15:49 | |
*** hggdh has joined #openstack | 15:50 | |
*** brd_from_italy has quit IRC | 15:50 | |
ttx | nova-core: please give some review love in priority to: | 15:57 |
ttx | https://code.launchpad.net/~morita-kazutaka/nova/sheepdog/+merge/45093 | 15:57 |
ttx | https://code.launchpad.net/~ntt-pf-lab/nova/ipv6-support/+merge/45228 | 15:58 |
*** fabiand_ has quit IRC | 16:03 | |
*** jimbaker has joined #openstack | 16:04 | |
*** guigui1 has quit IRC | 16:06 | |
*** hggdh has quit IRC | 16:08 | |
*** dragondm has joined #openstack | 16:12 | |
*** jdarcy has joined #openstack | 16:13 | |
*** rnirmal has quit IRC | 16:15 | |
*** glenc_ has quit IRC | 16:21 | |
*** dubsquared has joined #openstack | 16:21 | |
xtoddx | Can I get a review? https://code.launchpad.net/~anso/nova/wsgirouter/+merge/45330 | 16:21 |
*** glenc has joined #openstack | 16:22 | |
*** calavera has quit IRC | 16:32 | |
*** kashyapc has quit IRC | 16:34 | |
*** ibarrera has quit IRC | 16:43 | |
*** westmaas has joined #openstack | 16:45 | |
*** hggdh has joined #openstack | 16:46 | |
*** kashyapc has joined #openstack | 16:51 | |
*** Lcfseth has joined #openstack | 16:51 | |
*** arreyder has quit IRC | 16:56 | |
creiht | mtaylor: around? | 17:00 |
mtaylor | creiht: you know it | 17:00 |
creiht | letterj is reporting some issues trying to build packages with the debian branch for swift | 17:00 |
creiht | he says he gets a conflict if he tries to merge with trunk | 17:01 |
mtaylor | ok. looking | 17:01 |
creiht | and that he also sees some hudson errors? | 17:01 |
mtaylor | hrm | 17:01 |
*** WonTu has joined #openstack | 17:04 | |
*** WonTu has left #openstack | 17:04 | |
*** dfg_ has joined #openstack | 17:06 | |
mtaylor | creiht: ok. hudson job fixed | 17:06 |
mtaylor | creiht: me no see problems merging with trunk | 17:07 |
creiht | k | 17:07 |
creiht | thx | 17:07 |
* mtaylor upgrading hudson... | 17:09 | |
dubsquared | morning/afternoon everyone! i have been keeping up to the second with the latest bug reports, but is this being addressed/anyone know the fix? http://paste.openstack.org/show/453/ | 17:10 |
*** elasticdog has quit IRC | 17:11 | |
dubsquared | s/have/havent | 17:11 |
*** openstackhudson has quit IRC | 17:13 | |
*** openstackhudson has joined #openstack | 17:14 | |
*** openstackhudson has quit IRC | 17:21 | |
*** openstackhudson has joined #openstack | 17:22 | |
*** elasticdog has joined #openstack | 17:23 | |
dabo | can someone who's familiar with the sqlalchemy layer explain line 748 of the instance_get_by_id() method of nova/db/sqlalchemy/api.py? http://paste.openstack.org/show/454/ | 17:25 |
dabo | That line appears to say that if you have access to deleted records, you will *only* get back deleted records. That certainly seems wrong to me; if you have access to deleted records, you should be querying all records, whether deleted or not. | 17:25 |
dabo | I have an instance in the db that's not deleted, but I can't get it using this method. | 17:25 |
jaypipes | dabo: any idea on this? http://paste.openstack.org/show/455/ | 17:27 |
dabo | jaypipes: looking... | 17:28 |
jaypipes | dabo: as for the deleted thing... no, that code looks right to me. users should only see non-deleted records, and non-users should have to be checked to see whether they can see deleted records... | 17:28 |
dabo | jaypipes: re: euca2ools - I don't see a problem. Any chance you have multiple copies installed? | 17:29 |
jaypipes | dabo: but I do see what you mean.... it should really be: if can_read_deleted(context): thequery.filter(deleted=deleted)... | 17:29 |
dabo | jaypipes: can you do 'import euca2ools' from within python | 17:30 |
jaypipes | dabo: nope. isn't this lovely :) | 17:30 |
jaypipes | (.nova-venv)jpipes@serialcoder:~/repos/nova/bug699654$ euca-version | 17:30 |
jaypipes | Traceback (most recent call last): | 17:30 |
jaypipes | File "/usr/bin/euca-version", line 36, in <module> | 17:30 |
jaypipes | from euca2ools import Euca2ool, Util | 17:30 |
jaypipes | ImportError: No module named euca2ools | 17:30 |
*** rlucio has joined #openstack | 17:31 | |
dabo | jaypipes: looks like either a) you have conflicting versions installed or b) your python pathing is hosed. | 17:31 |
dabo | try: import sys; print sys.path | 17:31 |
dabo | jaypipes: re: query - the way it's written, if I'm an admin and I can see deleted records, it generates a filter of 'deleted = True' | 17:32 |
dabo | that's not right | 17:32 |
jaypipes | dabo: or it could be that euca2ools is crap. :) http://paste.openstack.org/show/456/ | 17:32 |
jaypipes | dabo: yes, you are right. that's incorrect, which is why I said it should be if can_read_deleted(context): blahj... | 17:32 |
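A sketch of the pattern jaypipes suggests, written in the spirit of the sqlalchemy api layer but not copied from the nova source (names follow the IRC discussion): only apply the deleted filter to callers who cannot read deleted rows, so admins see both live and deleted records.

    # illustrative only; helper and model names mirror the discussion, not the exact code
    query = session.query(models.Instance).filter_by(id=instance_id)

    # the buggy form being discussed, filter_by(deleted=can_read_deleted(context)),
    # turns into "deleted = True" for admins and hides live records entirely

    if not can_read_deleted(context):
        # ordinary callers only ever see live records
        query = query.filter_by(deleted=False)
    # callers allowed to read deleted records get live and deleted rows alike

    instance = query.first()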
dabo | jaypipes: here's what I get on maverick: http://paste.openstack.org/show/457/ | 17:34 |
dabo | jaypipes: I'll enter a bug for the query stuff | 17:34 |
jaypipes | dabo: you're not in a virtualenv. | 17:34 |
*** adiantum has joined #openstack | 17:34 | |
dabo | jaypipes: no, I'm not | 17:34 |
jaypipes | dabo: it's messed up when in virtualenv. | 17:34 |
* jaypipes hates software that assumes something about the installed environment... grr. | 17:35 | |
dabo | jaypipes: ah, didn't realize that | 17:35 |
dabo | yeah, looks like euca is crap | 17:35 |
jaypipes | dabo: FYI, '/usr/lib/python2.6/dist-packages' is in my python path, and that is exactly where euca2ools is located :) | 17:36 |
dabo | jaypipes: then it's something wonky with the way virtualenv is segmenting the installed packages | 17:37 |
jaypipes | dabo: it picks up other packages in that dir... | 17:38 |
dabo | jaypipes: virtualenv does some 'magic' so that when you're in one env, you don't see stuff installed into other envs. | 17:39 |
jaypipes | dabo: not technically. what it does is allow you to *install* stuff into the virtualenv without affecting other envs. Your locally-installed stuff is still accessible, though. And euca2ools is locally installed...since it can't be either easy_installed or pip installed into the virtualenv (because it's apparently not packaged properly...) | 17:41 |
dabo | jaypipes: that's correct. I wasn't sure if you had installed it into a venv or not | 17:41 |
jaypipes | dabo: no, I would have to do it manually, and I don't install anything into a virtualenv that cannot be pip or easy_installed. | 17:42 |
dabo | jaypipes: yeah, I didn't realize that euca couldn't be installed properly. | 17:42 |
*** reldan has joined #openstack | 17:46 | |
*** zul has quit IRC | 17:46 | |
*** zul has joined #openstack | 17:46 | |
*** jdurgin has joined #openstack | 17:47 | |
uvirtbot | New bug: #701121 in nova "Getting instances by ID when admin only returns deleted instances" [Undecided,New] https://launchpad.net/bugs/701121 | 17:51 |
*** openstackhudson has quit IRC | 17:56 | |
*** openstackhudson has joined #openstack | 17:57 | |
*** joearnold has joined #openstack | 18:01 | |
*** adiantum has quit IRC | 18:01 | |
*** reldan has quit IRC | 18:03 | |
*** reldan has joined #openstack | 18:04 | |
*** maplebed has joined #openstack | 18:04 | |
jaypipes | dabo: euca2ools' Makefile *hardcodes* PREFIX as /usr/local. :( | 18:07 |
jaypipes | dabo: had to download the tarball and edit the Makefile by hand to point PREFIX to my virtualenv... | 18:07 |
dabo | jaypipes: hardcoding is teh AWESOME!! | 18:08 |
*** adiantum has joined #openstack | 18:14 | |
*** reldan has quit IRC | 18:14 | |
*** Charlie__ has joined #openstack | 18:26 | |
*** dendrobates is now known as dendro-afk | 18:27 | |
*** deadestchicken_ has quit IRC | 18:28 | |
*** daleolds has joined #openstack | 18:38 | |
*** trin_cz has quit IRC | 18:41 | |
*** hggdh has quit IRC | 18:42 | |
*** arreyder has joined #openstack | 18:44 | |
*** deadestchicken_ has joined #openstack | 18:46 | |
*** opengeard has joined #openstack | 18:46 | |
*** deadestchicken_ has quit IRC | 18:46 | |
*** Charlie__ has quit IRC | 18:48 | |
*** dendro-afk is now known as dendrobates | 18:50 | |
sandywalsh | hey guys, question about nova/utils.py LoopingCall | 18:55 |
sandywalsh | I see that, if an exception occurs, it does a send_exception() via the eventlet Event | 18:56 |
*** Lcfseth has left #openstack | 18:56 | |
sandywalsh | but, in other places, like nova/virt/xenapi_conn.py _poll_task | 18:56 |
sandywalsh | the inner function (the function called by LoopingCall) handles the exception | 18:57 |
sandywalsh | and calls send_exception | 18:57 |
sandywalsh | but this is a problem | 18:57 |
*** adiantum has quit IRC | 18:57 | |
sandywalsh | you can't do two send_exceptions on an Event | 18:57 |
sandywalsh | I think we should remove the send_exception from utils | 18:57 |
sandywalsh | and assume the inner function will deal with problems | 18:57 |
sandywalsh | (raising the highest fidelity exception) | 18:58 |
sandywalsh | thoughts? | 18:58 |
vishy | sandywalsh: looking at the code, since i'm not quite following how two exceptions could be sent | 19:02 |
sandywalsh | vishy, I'm still investigating as well, but it appears that LoopingCall catches all Exceptions, so anything thrown lower will get caught and re-raised. | 19:03 |
sandywalsh | for example, the _poll_task method of xenapi_conn | 19:04 |
*** adiantum has joined #openstack | 19:04 | |
*** mdomsch has quit IRC | 19:04 | |
sandywalsh | which throws a XenAPI.Failure | 19:04 |
sandywalsh | hmm, | 19:04 |
sandywalsh | wait now, I could be wrong. send_exception doesn't raise | 19:05 |
sandywalsh | hang on ... I'll get back to you :) | 19:05 |
vishy | _poll task seems a bit strange | 19:05 |
vishy | termie wrote LoopingCall and can probably shed some light when he gets on | 19:06 |
vishy | sandywalsh: are you actually seeing an error? | 19:06 |
uvirtbot | New bug: #701164 in nova "Can't change project manager after creation of project." [Undecided,New] https://launchpad.net/bugs/701164 | 19:06 |
sandywalsh | vishy, yup, I'll get you a paste | 19:07 |
sandywalsh | vishy, http://paste.openstack.org/show/458/ | 19:07 |
sandywalsh | vishy, LoopingCall calls _poll_task periodically, but when an error occurs, we see the event.send() getting called twice (or more) | 19:09 |
*** adiantum has quit IRC | 19:10 | |
sandywalsh | vishy, I think the LoopingCall is not terminating as it should | 19:10 |
sandywalsh | vishy, and the loop is continuing | 19:11 |
vishy | yes | 19:11 |
sandywalsh | vishy, gonna try something (stand back, I'm going to try science) | 19:11 |
*** ewanmellor has joined #openstack | 19:12 | |
vishy | perhaps it should send self._running = False when an exception is hit? | 19:12 |
sandywalsh | vishy, yup ... call stop() | 19:12 |
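A rough sketch of the shape of the fix being discussed, assuming eventlet's Event and greenthread API; this is an illustration, not the merged nova code. The loop marks itself stopped on the first exception and signals the Event exactly once, so send_exception() cannot fire twice.

    from eventlet import event, greenthread

    class LoopingCall(object):
        def __init__(self, func):
            self.func = func
            self._running = False

        def start(self, interval):
            self._running = True
            done = event.Event()

            def _inner():
                try:
                    while self._running:
                        self.func()
                        greenthread.sleep(interval)
                except Exception as exc:
                    self._running = False        # stop looping on error...
                    done.send_exception(exc)     # ...and signal the Event only once
                    return
                done.send(True)

            greenthread.spawn(_inner)
            return done

        def stop(self):
            self._running = False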
ewanmellor | Is jaypipes here? | 19:12 |
ewanmellor | Or anyone who understands the venv that nova's run_tests.py uses? | 19:13 |
jaypipes | ewanmellor: you betcah. | 19:13 |
jaypipes | ewanmellor: betcha. | 19:13 |
ewanmellor | Schweet. | 19:13 |
jaypipes | ewanmellor: ./run_tests.sh -V -f | 19:13 |
vishy | jaypipes: ewanmellor is most definitely not a betcah | 19:13 |
jaypipes | ewanmellor: will clear the venv and run tests in it... | 19:13 |
jaypipes | vishy: :) | 19:14 |
jaypipes | ewanmellor: use -f when you change, say, the tools/pip-requires file. | 19:14 |
ewanmellor | I want to write a unit test for our Nova->Glance integration. The Nova code uses the Glance client code. | 19:14 |
ewanmellor | Yeah, I know the basics, more or less ;-) | 19:14 |
jaypipes | ewanmellor: k, and you want to install glance into the venv, right? | 19:14 |
ewanmellor | I need to cross-reference the Glance client code from the Nova run_tests. | 19:14 |
ewanmellor | Yeah, basically. | 19:14 |
*** rnirmal has joined #openstack | 19:14 | |
jaypipes | ewanmellor: well, tough luck. | 19:15 |
jaypipes | ewanmellor: hehe, just kidding :) | 19:15 |
ewanmellor | :-) | 19:15 |
jaypipes | ewanmellor: kinda funny, I've been struggling with the same today (for i18n and other stuff) | 19:15 |
ewanmellor | I wondered if you've done this already, as part of your glance-client-in-nova blueprint. | 19:15 |
jaypipes | ewanmellor: we need to get glance packaged in the same way nova is.. or verify if it is packaged at all... | 19:15 |
jaypipes | ewanmellor: I'm going to ask mtaylor and soren for some assistance there. Technically, the step should just be: | 19:16 |
jaypipes | source .nova-venv/bin/activate; easy_install glance | 19:16 |
jaypipes | ewanmellor: oh, and welcome back from "vacation", too ;) | 19:16 |
ewanmellor | So you're proposing using a prepackaged version of Glance when testing Nova? | 19:17 |
*** adiantum has joined #openstack | 19:17 | |
ewanmellor | I was thinking about having both source trees next to each other, and then referencing one from the other. | 19:17 |
jaypipes | ewanmellor: testing the nova-glance integration, yes. | 19:17 |
*** rnirmal_ has joined #openstack | 19:17 | |
jaypipes | ewanmellor: that would be icky I think... | 19:17 |
ewanmellor | Yeah, that's why I came on to IRC -- because I knew *someone* would say that. | 19:18 |
jaypipes | ewanmellor: and not something that could be easily automated through, say, Hudson. | 19:18 |
jaypipes | ewanmellor: the easiest solution is just to ask mtaylor to do it. that solution is a good one. | 19:18 |
ewanmellor | My way though, if you need to make parallel changes to the Glance client and Nova, you can test them together. | 19:18 |
ewanmellor | I'm not claiming to like the idea, BTW. | 19:19 |
ewanmellor | My branch is late for Bexar already. | 19:20 |
jaypipes | ewanmellor: I think there should be a way to basically, in your local glance branch, pack up a glance egg, then install that egg into the local nova venv via pip install /path/to/egg, but I haven't tested it yet... | 19:20 |
*** rnirmal has quit IRC | 19:21 | |
*** rnirmal_ is now known as rnirmal | 19:21 | |
jaypipes | ewanmellor: something like: cd glance-local; python setup.py build; cd ../nova-local; source .nova-venv/bin/activate; pip install ../glance-local/glance-0.1.egg | 19:21 |
jaypipes | ewanmellor: that's my thought, of course, completely untested... | 19:22 |
ewanmellor | So we would make it use easy_install by default, but then someone could install their own egg if they need to. | 19:22 |
jaypipes | ewanmellor: ya. | 19:22 |
jaypipes | ewanmellor: eventually, would be best to get to a point where we can have completely optional installations of things like Glance into a venv... | 19:22 |
ewanmellor | So all we need is for some kind soul to put a Glance package wherever it is that they go for easy_install to find them. | 19:23 |
jaypipes | ewanmellor: ya, PyPI. | 19:24 |
mtaylor | aroo? | 19:27 |
ewanmellor | mtaylor: Just the man! | 19:27 |
mtaylor | jaypipes: oh - you need some deb packaging? | 19:27 |
jaypipes | mtaylor: kindly. | 19:27 |
mtaylor | jaypipes: /me puts on list | 19:27 |
jaypipes | hehe, indeed :) | 19:27 |
ewanmellor | mtaylor: Can I help? | 19:28 |
ewanmellor | mtaylor: I really just need the Glance client code to end up on PyPI. | 19:28 |
mtaylor | ewanmellor: well, PyPI is an easier thing than .debs | 19:29 |
mtaylor | although we really should get both going | 19:29 |
mtaylor | looking | 19:30 |
mtaylor | ewanmellor: for pypi, you want to do: | 19:30 |
mtaylor | python setup.py register | 19:30 |
mtaylor | (you only need to do this once, it registers the project) | 19:30 |
mtaylor | and then when you want to make a release, you do: | 19:30 |
*** rcc has quit IRC | 19:31 | |
mtaylor | python setup.py sdist bdist bdist_egg upload | 19:31 |
mtaylor | ewanmellor: you have to do the sdist/bdist commands on the same invocation as the upload, as upload will only upload artifacts created in the current iteration | 19:31 |
rlucio | anyone seen this greenthreads error before? http://paste.openstack.org/show/459/ | 19:32 |
ewanmellor | mtaylor: Is there some "OpenStack LLC" username that I should be using, or can anyone just release anything? | 19:32 |
rlucio | related to eventlet i guess (austin release) | 19:32 |
dubsquared | Lots of folks about now, I'll post this again if anyone cares to take a look: http://paste.openstack.org/show/453/ :D | 19:32 |
creiht | JordanRinke: btw, a doc bug has already been entered for that issue | 19:33 |
mtaylor | ewanmellor: pretty much anyone can release anything | 19:33 |
mtaylor | ewanmellor: there should probably be more consolidation of that related to pypi at some point | 19:33 |
mtaylor | ttx: ^^^^ | 19:33 |
rlucio | dubsquared: did you try running iptables-restore -h ? | 19:33 |
ewanmellor | mtaylor: Is it normal just to release the whole thing as one package, or should I consider having a glance-client package separate from glance itself? | 19:34 |
rlucio | dubsquared: it looks like you have an old version of iptables-restore or something, that doesnt have the option nova needs (--icmp_type) | 19:34 |
dubsquared | Yeah, 'iptables -h' spits out a usage howto | 19:35 |
mtaylor | ewanmellor: it's pretty normal for pypi to just have one thing | 19:35 |
mtaylor | ewanmellor: when we make .debs, we'll split things into other packages | 19:35 |
ewanmellor | mtaylor: OK, consider it done. Thanks for your help. Very useful. | 19:35 |
dubsquared | rlucio: --icmp_type isn't an option that it lists | 19:35 |
ewanmellor | jaypipes: Thanks to you, too. | 19:35 |
mtaylor | ewanmellor: by glance-client, which bits are you wanting? | 19:35 |
dubsquared | rlucio: ii iptables 1.4.4-2ubuntu2 | 19:36 |
uvirtbot | New bug: #701176 in swift "Multi node doc missing install param" [Undecided,New] https://launchpad.net/bugs/701176 | 19:36 |
mtaylor | ewanmellor: glance/client.py ? | 19:36 |
ewanmellor | mtaylor: The client-facing SDK. | 19:36 |
mtaylor | ewanmellor: ok. good. just wanted to make sure we were both talking about the same thing :) | 19:36 |
ewanmellor | mtaylor: Yeah, client.py, and whatever its dependencies are. | 19:36 |
sandywalsh | vishy, that was it. | 19:36 |
mtaylor | ewanmellor: I realized there was a small chance that you were wanting the files in bin/ | 19:36 |
mtaylor | :) | 19:36 |
jaypipes | mtaylor: glance.client | 19:36 |
mtaylor | great. that should fix you up then | 19:36 |
ewanmellor | mtaylor: Yeah, glance-dev might have been a better way to put it. | 19:37 |
mtaylor | jaypipes: if you want to file a bug about making debs and assign it to me, there's less chance I'll forget about it | 19:37 |
jaypipes | mtaylor: will do. cheers, and thx for all your help. | 19:37 |
rlucio | dubsquared: yea, you on lucid then? | 19:37 |
mtaylor | jaypipes: my pleasure! | 19:37 |
dubsquared | rlucio: im going to try maverick... | 19:37 |
dubsquared | rlucio: haha, that is correct | 19:37 |
rlucio | dubsquared: i just saw the same version on my machine.. looks like a bug then, unless there is some backported version of iptables on the PPA we are supposed to use for lucid | 19:38 |
*** pandemicsyn has quit IRC | 19:39 | |
dubsquared | interesting…haven't filed a bug yet..have a few oddities ive run into…should look into that | 19:40 |
*** adiantum has quit IRC | 19:42 | |
uvirtbot | New bug: #701180 in glance ".debs need to be created for glance.client and glance" [Low,Confirmed] https://launchpad.net/bugs/701180 | 19:46 |
jt_zg | Are there any advantages to running Swift on a specific distro? I know Gluster requires 64bit and seems to play nicer on Ubuntu. | 19:47 |
*** adiantum has joined #openstack | 19:49 | |
*** dubs has left #openstack | 19:50 | |
jaypipes | creiht: see jt_zg ^^ | 19:51 |
*** dubs has joined #openstack | 19:51 | |
creiht | jt_zg: well we run 64bit ubuntu server for cloud files | 19:55 |
jt_zg | Is that due to ram considerations? | 19:56 |
jt_zg | My personal testing environment nodes have ~512Mb-1024, so I figured there was no sense in using the 64bit option | 19:57 |
jt_zg | Just wondering if its a glaring oversight | 19:57 |
jt_zg | *On my part :D | 19:57 |
creiht | Yeah it is mostly due to memory | 19:58 |
openstackhudson | Project nova build #377: SUCCESS in 1 min 23 sec: http://hudson.openstack.org/job/nova/377/ | 19:59 |
openstackhudson | Tarmac: Adds the requisite infrastructure for automating translation templates import/export to Launchpad. | 19:59 |
jt_zg | Gotcha. So Swift really doesn't care what its installed on...within reason. That's great to know | 19:59 |
jt_zg | I just don't want to start scripting for automatic deployment on Debian then realize I take a performance hit unless I use CentOS, or some other crazy scenario | 20:00 |
creiht | right | 20:00 |
creiht | heh | 20:00 |
jt_zg | I have nightmares! | 20:00 |
creiht | yeah it is pretty agnostic, as long as you can get the dependencies | 20:00 |
jt_zg | makes sense | 20:00 |
jt_zg | thanks again! | 20:01 |
ewanmellor | mtaylor: That glance packaging hasn't quite worked. The tarballs all look good, but when I easy_install glance inside my nova venv, it decides to download the .linux-i686.tar.gz, not the plain .tar.gz, and then is surprised when setup.py isn't in there. | 20:02 |
*** adiantum has quit IRC | 20:03 | |
mtaylor | ewanmellor: oh - hrm | 20:03 |
mtaylor | ewanmellor: perhaps we should only be doing setup.py sdist upload | 20:03 |
*** trin_cz has joined #openstack | 20:04 | |
ewanmellor | mtaylor: I can delete the binary if you think that's the right thing to do. | 20:04 |
mtaylor | ewanmellor: do that | 20:04 |
mtaylor | ewanmellor: and the egg | 20:04 |
*** pandemicsyn has joined #openstack | 20:04 | |
mtaylor | ewanmellor: and then next time leave the bdist references out of the command | 20:05 |
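Putting mtaylor's correction together with the earlier instructions, the working release recipe is register once, then source-only uploads (so easy_install doesn't grab a platform-specific tarball with no setup.py):

    # one-time project registration on PyPI
    python setup.py register

    # each release: build and upload only the source distribution
    python setup.py sdist upload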
*** adiantum has joined #openstack | 20:08 | |
*** ChanServ sets mode: +v pandemicsyn | 20:10 | |
*** adiantum has quit IRC | 20:15 | |
*** daleolds has quit IRC | 20:18 | |
*** adiantum has joined #openstack | 20:21 | |
creiht | jt_zg: everything said, we haven't done any testing on 32bit linux, so take all the above with a grain of salt :) | 20:23 |
jt_zg | Absolutely. I'm just testing in 32bit. We'll be the real dev/production on 64bit systems | 20:24 |
sandywalsh | Someone have 5 minutes for a simple review? https://code.launchpad.net/~sandy-walsh/nova/lp698123/+merge/45745 | 20:25 |
*** joearnold has quit IRC | 20:28 | |
*** nelson__ has joined #openstack | 20:28 | |
*** adiantum has quit IRC | 20:28 | |
nelson__ | annegentle: the help message for 'st' says "Cloud Files general documentation". Is that right? | 20:29 |
openstackhudson | Project nova build #378: SUCCESS in 1 min 21 sec: http://hudson.openstack.org/job/nova/378/ | 20:29 |
openstackhudson | Tarmac: Bugfix. | 20:29 |
annegentle | nelson__: nope, that looks like an oversight | 20:32 |
*** hggdh has joined #openstack | 20:33 | |
nelson__ | Okay, I'll fix it in my docfixes. | 20:33 |
annegentle | nelson__: awesome, thanks. It could either say Swift general doc or OpenStack Object Storage general doc (which is a bit much) | 20:33 |
dubsquared | rlucio: same issue in maverick, iptables 1.4.4-2ubuntu3 | 20:34 |
*** adiantum has joined #openstack | 20:34 | |
xtoddx | sandywalsh: can you put # TODO(sandywalsh): instead of just # TODO | 20:35 |
*** littleidea has quit IRC | 20:35 | |
xtoddx | sandywalsh: also a # NOTE(sandywalsh) might be good for the comment about breaking out the stop loop method as well | 20:35 |
nelson__ | annegentle: grep -r 'Cloud Files' . | wc -l # gives me a count of 14. | 20:35 |
xtoddx | sandywalsh: it looks good to me though | 20:35 |
nelson__ | annegentle: maybe replacing that name should be a separate project/patch? | 20:36 |
*** brd_from_italy has joined #openstack | 20:36 | |
annegentle | nelson__: ah yes perhaps so. Log a bug indicating 14 count | 20:37 |
sandywalsh | xtoddx, where's the todo? | 20:37 |
annegentle | nelson__: and if you want a lot of info on st, see http://jbplab.com/post/1697289751/a-utility-for-the-openstack-object-store-swift if you haven't seen that already | 20:37 |
sandywalsh | xtoddx, will fix NOTE, thx | 20:37 |
annegentle | nelson__: I have yet to fold that into the docs | 20:37 |
nelson__ | cool, thank, no, I hadn't. BTW, if Rackspace claims a trademark on Cloud Files, it should be a little more careful with it. | 20:38 |
xtoddx | sandywalsh: the TODO is on line 18 of the diff in launchpad | 20:38 |
* nelson__ takes off my trademark legal eagle hat. | 20:38 | |
annegentle | nelson__: you're quite right. | 20:38 |
xtoddx | sandywalsh: "create fake SR record" | 20:38 |
sandywalsh | xtoddx, oh, bzr screwed up. That was old code. My code in that block ends @14 | 20:38 |
*** pandemicsyn has quit IRC | 20:39 | |
*** fitzdsl has quit IRC | 20:39 | |
*** fitzdsl has joined #openstack | 20:39 | |
xtoddx | yea, it looked like a ninja-patch | 20:39 |
xtoddx | but i was willing to let it slide | 20:40 |
sandywalsh | :) ... thanks for the review! | 20:41 |
sandywalsh | change pushed | 20:41 |
*** littleidea has joined #openstack | 20:43 | |
*** miclorb_ has joined #openstack | 20:44 | |
*** littleidea has quit IRC | 20:47 | |
*** ctennis has quit IRC | 20:52 | |
*** littleidea has joined #openstack | 20:53 | |
*** adiantum has quit IRC | 20:54 | |
*** pothos_ has joined #openstack | 20:57 | |
*** adiantum has joined #openstack | 20:59 | |
*** pothos has quit IRC | 20:59 | |
*** pothos_ is now known as pothos | 20:59 | |
uvirtbot | New bug: #701216 in nova "When a floating IP is associated with an instance, describe_instances for ec2 fails" [Undecided,New] https://launchpad.net/bugs/701216 | 21:01 |
creiht | nelson__: yeah that must have been an area we missed when getting everything ready for open sourcing | 21:04 |
*** ctennis has joined #openstack | 21:05 | |
*** damon__ has joined #openstack | 21:06 | |
nelson__ | I figured. Since it's Rackspace's problem, I'll let them (you) fix it. | 21:10 |
creiht | hehe | 21:11 |
*** miclorb_ has quit IRC | 21:22 | |
*** fabiand_ has joined #openstack | 21:25 | |
*** miclorb has joined #openstack | 21:26 | |
openstackhudson | Project nova build #379: SUCCESS in 1 min 22 sec: http://hudson.openstack.org/job/nova/379/ | 21:29 |
openstackhudson | Tarmac: xenapi_conn was not terminating utils/LoopingCall when an exception was occurring. This was causing the eventlet Event to have send_exception() called more than once (a no-no). | 21:29 |
openstackhudson | This would have affected more than just pause/unpause, but any XenApi call that raised an exception. | 21:29 |
*** joearnold has joined #openstack | 21:30 | |
sandywalsh | \o/ thank you | 21:30 |
*** jaypipes is now known as jaypipes-afk | 21:38 | |
*** westmaas has quit IRC | 21:43 | |
*** adiantum has quit IRC | 21:43 | |
*** littleidea has quit IRC | 21:48 | |
*** littleidea has joined #openstack | 21:49 | |
*** adiantum has joined #openstack | 21:49 | |
*** critch has joined #openstack | 21:53 | |
*** littleidea has quit IRC | 21:53 | |
*** allsystemsarego has quit IRC | 22:00 | |
uvirtbot | New bug: #701248 in swift "Refactor unit tests to us a fake logging class" [Low,Confirmed] https://launchpad.net/bugs/701248 | 22:01 |
*** adiantum has quit IRC | 22:03 | |
*** skrusty has quit IRC | 22:03 | |
*** adiantum has joined #openstack | 22:04 | |
*** jarrod has joined #openstack | 22:09 | |
*** skrusty has joined #openstack | 22:09 | |
*** MarkAtwood has joined #openstack | 22:19 | |
dragondm | hey'all, is vishy about? | 22:22 |
* vishy is lurking | 22:22 | |
dragondm | ah. good. Just to let you know, I renamed that api class in the xs-console branch as you suggested. If you could take a look at the xs-console merge prop whence ya get a moment, that'd be good. | 22:24 |
vishy | ok cool | 22:24 |
dragondm | thanks | 22:24 |
*** adiantum has quit IRC | 22:25 | |
*** littleidea has joined #openstack | 22:27 | |
*** adiantum has joined #openstack | 22:30 | |
*** kainam has joined #openstack | 22:30 | |
*** arcane has quit IRC | 22:30 | |
*** adiantum has quit IRC | 22:37 | |
*** fabiand_ has quit IRC | 22:40 | |
tr3buchet | https://blueprints.launchpad.net/nova/+spec/instance-state-arbiter/ | 22:42 |
*** littleidea has quit IRC | 22:42 | |
tr3buchet | please give feedback | 22:42 |
tr3buchet | i could use it :D | 22:42 |
*** brd_from_italy has quit IRC | 22:47 | |
*** adiantum has joined #openstack | 22:50 | |
*** hggdh has quit IRC | 22:50 | |
*** rossij has quit IRC | 22:53 | |
*** littleidea has joined #openstack | 22:58 | |
*** adiantum has quit IRC | 23:00 | |
*** littleidea has left #openstack | 23:05 | |
*** adiantum has joined #openstack | 23:05 | |
*** spectorclan has joined #openstack | 23:07 | |
*** schisamo has joined #openstack | 23:10 | |
*** mray has joined #openstack | 23:11 | |
*** adiantum has quit IRC | 23:12 | |
uvirtbot | New bug: #701278 in nova "iptables is failing when lauching instances " [Undecided,New] https://launchpad.net/bugs/701278 | 23:12 |
*** Glaurung has quit IRC | 23:12 | |
*** ppetraki has quit IRC | 23:13 | |
*** adiantum has joined #openstack | 23:16 | |
*** adiantum has quit IRC | 23:24 | |
*** adiantum has joined #openstack | 23:29 | |
*** gondoi has quit IRC | 23:33 | |
*** rnirmal has quit IRC | 23:38 | |
*** mray has quit IRC | 23:38 | |
spectorclan | OpenStack Design Summit - Program Committee Announcement; thanks to all that volunteered - http://www.openstack.org/blog/2011/01/openstack-conferencedesign-summit-program-committee/ | 23:38 |
*** adiantum has quit IRC | 23:40 | |
*** adiantum has joined #openstack | 23:47 | |
*** spectorclan has quit IRC | 23:48 | |
*** mray has joined #openstack | 23:53 | |
*** adiantum has quit IRC | 23:56 |