Monday, 2011-01-10

*** joearnold has joined #openstack00:15
*** joearnold has quit IRC00:16
*** rlucio has quit IRC00:28
*** charlvn has quit IRC00:52
*** dubsquared has joined #openstack01:11
*** rlucio_ has joined #openstack01:14
*** jfluhmann has quit IRC01:23
dubsquaredrandom question, but does anyone have a location they are getting working images from?  it seems the UEC images are not liking the most recent updates to the packages...01:27
anotherjessedubsquared: there are images on images.openstack.org01:29
anotherjesseWe use http://images.ansolabs.com/tty.tgz http://images.ansolabs.com/maverick.tgz internally01:31
dubsquaredimages isn't loading for me?01:31
anotherjesseI know someone at rackspace is working on a directory of images01:31
dubsquaredthat is jordan and I :D01:31
anotherjesseimages.openstack.org is ftp01:31
dubsquaredthe few places we were grabbing images from are all done, so just looking for more now…(and the UEC ones aren't working now :/)01:32
*** jfluhmann has joined #openstack01:36
jt_zgCan anyone help me out with a 503 error when adding users, please?01:37
*** rlucio_ has quit IRC01:38
*** joearnold has joined #openstack01:43
*** anotherjesse has quit IRC02:03
*** joearnold has quit IRC02:05
uvirtbotNew bug: #700867 in nova "Metadata is broken in a default install" [Undecided,New] https://launchpad.net/bugs/700867 02:06
*** miclorb has quit IRC02:36
*** littleidea has joined #openstack02:55
*** miclorb has joined #openstack03:13
creihtjt_zg: I can take  a stab :)03:16
jt_zgsounds good :D03:16
creihtare you still around?03:16
jt_zgWhat a nightmare!03:16
creihthah03:16
jt_zgalways, and forever03:16
creihtsorry about that :/03:16
creihtI skimmed over what you had done before03:16
jt_zgNo worries. I knew it would be challenging. They don't pay me the big bucks for nothing...eventually03:16
jt_zgAwesome, that puts you ahead of where I am!03:17
creihtok03:17
creihtso you are on the auth server correct?03:17
jt_zgyup03:17
creihtis it on the same server as the proxy?03:17
jt_zgI have all 7 terminals open actually03:17
jt_zgno03:17
creihtahh03:17
creihtok03:17
jt_zg1 proxy, 1 auth, 5 storage03:17
creihtgive me a sec, I think I know what your problem is03:18
jt_zgAbsolutely03:18
creihtjt_zg: in your proxy server config03:18
creihtunder the [auth] section03:19
creihtadd a line that says03:19
creihtip = AUTH_IP03:19
creihtby default it assumes 127.0.0.1 unless you set it03:19
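For readers following along, the change creiht describes would look roughly like this in the proxy's configuration; the file path, section layout, and the 10.0.0.2 address are illustrative, based on the multi-server setup being discussed:

```ini
# /etc/swift/proxy-server.conf (illustrative excerpt)
[auth]
# The proxy assumes the auth server is at 127.0.0.1 unless told
# otherwise, so set this when auth runs on a separate machine:
ip = 10.0.0.2
```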
jt_zgOn it!03:19
creihtI need to add that to the multiserver docs as you are the second one to have this issue03:20
jt_zgwow03:20
jt_zgYou sir...03:20
jt_zgcomplete me03:20
jt_zgThanks!03:20
creihthaha03:20
jt_zgI restarted the proxy server, ran my adduser again and automagic03:20
creihtnp03:20
creihtawesome03:20
jt_zgNow my next question...what do I do with these 7 servers? :P Practically I mean03:21
creihtlol03:21
jt_zgI know what my company wants to do with them03:21
creihtWell there are a couple of things off the bat03:21
jt_zgeventually, but I want to do some real world testing before I demonstrate03:21
creihtthere is the st command line that you can do to experiment putting things in and out of the system03:21
jt_zgstupid question..03:22
creihtthere is the swift-bench command that can be used to do some benchmarking03:22
jt_zgcan I sftp into one of the servers and drop a random doc and monitor the replication?03:22
jt_zgThat would seal the deal with my boss, "Can I copy my spreadsheets to it and see it work?"03:23
creihthehe03:23
creihtjt_zg: what os does your boss use?03:23
jt_zgubuntu03:23
jt_zgbless his soul03:23
creihthehe03:23
creihthrm03:23
creihtbummer03:24
jt_zgDid that example make sense, relative to what Openstack does?03:24
creiht:)03:24
creihtdefinitely03:24
jt_zgI want to avoid working with api apps for awhile03:24
creihtthere is a nice gui called cyberduck03:24
creihtahh03:24
creihtI see03:24
creihtso the st command is standalone, you can copy that anywhere that has python installed03:24
creihtso you could upload a spreadsheet using that03:25
creihtlist the container to see it there03:25
creihtthen download it03:25
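A sketch of that round trip with the st client; the auth URL, account/user pair, and key below are placeholders for whatever your auth setup issues:

```shell
# Illustrative st session (swift's early command-line client);
# AUTH_IP, myacct:myuser, and mypass are placeholders.
ST="st -A http://AUTH_IP:11000/v1.0 -U myacct:myuser -K mypass"

$ST upload container1 spreadsheet.ods    # upload the file
$ST list container1                      # list the container to see it there
$ST download container1 spreadsheet.ods  # download it again
```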
jt_zgthat'll work03:25
creihtto show replication is a little more difficult03:25
jt_zgWere you going to suggest something, re: Windows?03:25
creihtfor windows/os x there is cyberduck03:25
creihtand it works with swift03:25
creihtso if you want to see replication work03:26
jt_zgI'll see if I can't get it to run in Wine/Ubuntu03:26
creihtactually the easiest is to upload the file03:26
creihtuse swift-get-nodes to find the nodes where it is located03:26
creiht(and the dir on that node where it is located)03:26
jt_zgvery cool03:26
creihtgo to one of the nodes, and delete it03:26
creihtthen wait a bit, and it should show back up03:27
creihtoh wait that wont exactly work like that :/03:27
creihtforgot we changed that a little03:27
creihtso new idea03:27
creihtbefore you upload a file, know what path you want to upload it to03:27
jt_zg*nod*03:28
creihtsay you are going to upload to MY_ACCT/container1/myfile.txt 03:28
creihtthen you could use get-nodes like:03:28
creihtswift-get-nodes /etc/swift/object.ring.gz MY_ACCT container1 myfile.txt 03:28
creihtthat will show what nodes it will be on when it uploads03:28
creihtgo to one of the nodes and stop services03:28
creihtthen upload the file03:29
creiht(this will also show that it can still upload the file even though the node is down)03:29
creihtafter the upload, start the services back up on that node03:29
creihtand wait a bit03:29
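The whole demo creiht outlines, as a sketch; the ring path, names, and swift-init invocations are illustrative:

```shell
# 1. Before uploading, see which object nodes the path will map to:
swift-get-nodes /etc/swift/object.ring.gz MY_ACCT container1 myfile.txt

# 2. On ONE of the listed storage nodes, stop the swift services:
swift-init object-server stop

# 3. From a client, upload MY_ACCT/container1/myfile.txt
#    (the write succeeds anyway, since the other replicas accept it).

# 4. Bring the node back up and wait; replication pushes the copy over:
swift-init object-server start
```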
jt_zgand automagic?03:29
creihtthe data should get replicated to that node03:29
creihtyup03:29
jt_zgvery cool, thanks!03:29
creihtreplication is pushed based03:29
jt_zgIs there a specific schedule?03:30
jt_zgor mechanism you can tweak03:30
creihtso the other two nodes that have the file will be trying to rsync to the node that was down03:30
creihtit runs continually03:30
creihtin the background03:30
creihtthere are some things that you can use to control how much incoming data a node will accept through the max_clients setting in the rsyncd.conf file03:31
jt_zggotcha, I'm going to go looking for that bugger. So, I'm curious...why 5 storage nodes in the article if only 3 are used? Is that to prove the durability of the group in a failure scenario?03:32
creihtIf replication is running too much, there is also a config to cause it to delay an amount of time between runs03:32
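The two knobs creiht mentions, sketched out with illustrative values. Note that in rsyncd.conf the option is spelled `max connections`, and the delay between replicator passes is (to the best of my knowledge of this era's swift) the `run_pause` setting in the object server config:

```ini
# /etc/rsyncd.conf on a storage node (illustrative)
[object]
path = /srv/node
read only = false
# caps concurrent incoming replication transfers to this node
max connections = 4

# /etc/swift/object-server.conf (illustrative)
[object-replicator]
# seconds to wait between replication passes
run_pause = 30
```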
jt_zgAh, that's what I was hoping to hear! :)03:32
creiht5 is our recommended minimum to best handle failure scenarios03:32
jt_zgWe're hoping to have 1-2 hosts per data-centre, but replicating across 3 DCs for a max of 3-6 hosts. Reason I ask03:33
creihthrm03:33
creihtwhat type of connectivity between datacenters?03:33
creihtand how much usage would it see?03:33
creihtwe haven't done a lot of tuning yet to handle multi-region setups03:33
jt_zg10Gb links to and between03:33
creihtbut that is one of the next items on the roadmaps03:33
creihtoh nice03:33
jt_zgnot sure of usage03:34
jt_zgWe're fielding inquiries that are putting us at ~350Tb/month per client, across ~100 servers03:34
creihtWell when you get to the point of planning it out, make sure we talk :)03:34
jt_zgAbsolutely!03:34
jt_zgThat's what I'm working on now. Finding a viable solution to replicate across DCs and span within and to different DCs03:35
creihtif you have 10g, it is feasible03:35
jt_zgWe're starting with 10g links03:35
jt_zgWe're probably looking at several once we ramp up03:35
creihtthere will be a little bit of latency03:36
jt_zgFair enough03:36
jeremybspeaking of replication and failure: i was wondering what happens when the node comes back up... does the eventual consistency window come into play here? how long are tombstones kept for?03:36
jt_zgIf we have a client that can't tolerate some form of latency when a DC gets nuked(or a host dies...the more likely scenario, I suppose) then we probably don't want them :P03:36
creihtjeremyb: when the node comes back up, any data that it hadn't gotten yet, will be pushed to it by the other servers who have that data03:37
creihtAnd yeah there can be a bit of an eventual consistency window there03:37
creihtUsually it isn't that long though03:37
creihtby default tombstones are kept for 7 days (but it is tunable as a config)03:38
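The tombstone lifetime creiht mentions is the `reclaim_age` setting (in seconds) in the replicator config; the value below is the 7-day default:

```ini
# /etc/swift/object-server.conf (illustrative excerpt)
[object-replicator]
# how long tombstones (deletion markers) are kept before being
# reclaimed: 604800 seconds = 7 days
reclaim_age = 604800
```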
jeremybcreiht: even if it's been a week? (granted i know at that point it should have been removed from the ring)03:38
jeremybhrmm03:38
creihtif it is going to be out that long, then yeah, it should be taken out of the ring03:38
creihta good rule of thumb is if a server is going to be down more than a day, take it out of the ring, unless you have a really good reason not to03:39
jeremybdoes a node get up to speed first before going back into service?03:39
creihtnot really03:39
jeremybis that even possible?03:39
jt_zgcreiht, are all these configs you mentioned documented in the swift documentation?03:39
creihtas soon as a device has the services started, it will answer all the requests03:40
jeremybreplication is rsync so you could block http?03:40
creihtjt_zg: http://swift.openstack.org/deployment_guide.html 03:40
creihtjeremyb: yeah you could leave the services down while rsync is going on03:40
jt_zgcreiht, my mistake. Sorry about that, and thanks!03:40
creihtwell I take that back03:40
creihtjt_zg: np03:40
creihtreplication now has to make an http request first to see if things need to be replicated03:41
creihtWe've been through several iterations of replication now, and sometimes I forget where we are at :)03:41
creihtActually the above goes for all the services03:42
creiht:)03:42
*** miclorb has quit IRC03:42
jeremybso if all your services and rsync nodes are on a subnet and all your clients are not on that subnet block by subnet?03:42
jeremybor middleware maybe?03:42
jeremybseems like a thing many would want and could easily be switched on/off03:43
creihtpossibly03:43
creihtyeah, we have talked about the possibility of something like that a bit03:43
*** mdomsch has joined #openstack03:43
creihtbut it hasn't come up yet that it has really been needed03:43
creihtWe've discussed several possibilities for how we could optimize the "bootstrapping"03:44
jeremybyou mean a brand new node?03:44
creihtif there is a sufficient amount of new data to be pushed, then it is about the same03:45
creihtor are you more worried about sending out stale data?03:45
jeremybi guess because 404's aren't authoritative it's not a big deal03:45
creihtoh03:45
creihtwell if the proxy gets a 404 from a storage node, it will try the other storage nodes as well03:45
jeremyb(i hadn't thought about the authoritativeness...)03:46
jeremybi thought it tries all 3 concurrently?03:46
creihtright now it tries them in succession03:46
jeremybhuh03:46
creihtwe've also talked about making that concurrent :)03:46
jeremybi thought there was something about fastest respondent03:46
creihtfor a GET/HEAD it will return the first reasonable response that it gets03:47
creihtand the timeouts are very aggressive03:47
creihtso if one machine is hanging, it will not wait around too long for it03:47
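A toy model of the behavior creiht describes: try replicas one after another with an aggressive per-node timeout, and return the first reasonable response. The function and names here are invented for illustration, not actual proxy code:

```python
def get_first_good(nodes, fetch, timeout=0.5):
    """Ask each replica in succession; return the first 200 response.

    fetch(node, timeout) returns (status, body) or raises on a
    timeout/connection error, in which case we move on to the next
    node instead of waiting around for a hung machine.
    """
    last_error = None
    for node in nodes:
        try:
            status, body = fetch(node, timeout)
        except Exception as exc:   # aggressive timeout fired; try next node
            last_error = exc
            continue
        if status == 200:          # first reasonable response wins
            return node, body
        last_error = RuntimeError('%s returned %s' % (node, status))
    raise last_error or RuntimeError('no nodes to ask')
```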
*** miclorb has joined #openstack03:49
uvirtbotNew bug: #700893 in nova "volume ids are messed up in ec2_api" [High,In progress] https://launchpad.net/bugs/700893 03:51
*** Glaurung has joined #openstack04:00
uvirtbotNew bug: #700894 in swift "Add note to multi server docs to set the ip in proxy config if on a different server" [Low,Confirmed] https://launchpad.net/bugs/700894 04:01
*** jimbaker has joined #openstack04:05
notmynamecreiht: thanks for helping with the auth problem. jt_zg, glad it got worked out04:06
jt_zgnotmyname, I appreciate your help! I'm pretty excited about this. Thanks again creiht04:07
* jeremyb wonders if any of these people are here: http://www.meetup.com/OpenStack-New-York-Meetup/calendar/15634525/ 04:09
jt_zgWhat I would do for an Ottawa, Canada group!04:10
jeremybalso, is it mostly a swift or nova or mixed group? sounds like both will be covered?04:11
*** littleidea has quit IRC04:11
*** littleidea has joined #openstack04:11
*** littleidea has quit IRC04:12
*** littleidea has joined #openstack04:14
*** jimbaker has quit IRC04:30
*** anotherjesse has joined #openstack04:30
*** joearnold has joined #openstack04:30
*** miclorb has quit IRC04:32
*** leted has joined #openstack04:34
*** kashyapc has joined #openstack04:35
*** mdomsch has quit IRC05:04
*** dubsquared has quit IRC05:24
*** f4m8_ is now known as f4m805:50
*** damon__ has quit IRC05:56
*** EdwinGrubbs has quit IRC06:11
vishysoren: are you awake yet?06:19
*** EdwinGrubbs has joined #openstack06:22
*** ramkrsna has joined #openstack06:40
*** leted has quit IRC06:42
*** trin_cz has quit IRC06:51
*** leted has joined #openstack06:53
*** guigui1 has joined #openstack06:55
sorenvishy: Yes.07:09
sorenvishy: What's up?07:09
sorenvishy: Other than me, of course.07:09
vishysoren: so packaging dependencies, nova-volume depends on iscsitarget, but the service isn't enabled by default.  What is the best way in the packaging to enable the service?07:09
anotherjessevishy: does nova-volume fail if iscsitarget isn't enabled?07:10
anotherjessein a nice way?07:10
vishysoren: also we need to add a udev rule07:10
vishyanotherjesse: nope it goes boom when you try to create a volume07:11
*** ibarrera has joined #openstack07:11
anotherjessevishy: it seems that then we should add the test that iscsitarget is enabled (if you are using that driver)07:11
sorenvishy: What's the udev rule?07:12
anotherjessethat way even if /etc/default/iscsitarget is enabled if it isn't running we catch that07:12
vishyanotherjesse: sure07:12
sorenvishy: The best way to fix it is to fix the iscsitarget package. Is there any particular reason to leave it disabled?07:13
sorenvishy: It doesn't expose anything by default anyway.07:13
vishysoren: i don't know that is just the default07:13
vishysoren: we also need to disable the default dnsmasq that runs07:14
vishysoren: udev rule is the one in tools/iscsidev.sh07:15
uvirtbotNew bug: #700920 in nova "ec2 describe_instances doesn't filter properly" [Medium,In progress] https://launchpad.net/bugs/700920 07:16
vishysoren: so we should stick a fixed iscsitarget package in our ppa?07:18
anotherjessesoren: should we ask the package manager of iscsitarget why the default is disabled?07:18
sorenanotherjesse: There isn't really a "package manager".07:18
sorenanotherjesse: In Ubuntu, that is.07:18
sorenanotherjesse: Everyone takes care of everything.07:18
sorenSo I can just fix it.07:18
anotherjessehmm, well, we aren't experts at iscsitarget  ... so while we always enable it I wonder why it was disabled07:19
sorenvishy: The fix seems to belong in iscsitarget, so that seems like a good start. Then we can get that change into Ubuntu proper afterwards.07:19
sorenanotherjesse: Some people just tend to do that for their packages. I hate that.07:19
anotherjessesoren: what about libvirt having a bridge by default?07:20
sorenwrt to dnsmasq, it should be a matter of installing dnsmasq-base rather than dnsmasq.07:20
anotherjessethat is enabled - which adds a dnsmasq07:20
vishysoren: ok, what about the udev rule07:20
sorenvishy: Not sure yet.07:20
sorenvishy: What exactly does it do?07:20
sorenvishy: It provides a friendlier name for iscsi shares?07:21
vishysoren: a consistent name07:21
sorenOk.07:22
vishysoren: without it if you rediscover the volumes (from a reboot or whatever) the names could be different07:22
sorenvishy: Ok, so it provides a consistent name based on target hostname and share name? (sorry if I get the terminology wrong. It's been a while since I've messed with iSCSI)07:23
sorenvishy: That very much sounds like something openiscsi should be doing on its own anyway, so I'd shove that fix into open-iscsi.07:24
vishyit actually is just based on targetname07:24
sorenHow likely are they to collide?07:25
sorenI forget what they look like.07:25
vishythey are unique07:25
vishybased on vol-id07:25
sorenWell...07:25
sorenYes, so the ones created by Nova are unique.07:25
vishyright07:25
sorenI'm thinking more generally.07:25
sorenSince, if I want to stick the fix in the open-iscsi package, I need to think more broadly.07:25
vishyquite a few things will break if you have non-nova iscsi07:26
sorenThings in Nova, I suspect?07:26
vishythis is the default path /sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/session*/targetname 03:27
vishyif we did /dev/iscsi/host/target we might still get conflicts from multiple sessions?07:28
vishythat statement wasn't totally correct, but you get the idea07:28
sorenWhat does a targetname look like?07:29
sorenAs an example?07:29
*** joearnold has quit IRC07:31
vishyin our case it is just the volume id07:31
vishyvol-xxxxxx07:31
sorenvishy: We could provide a /dev/iscsi/host/session/target symlink. Nova could make the assumption that there won't be collisions in different sessions on same host, and just do glob('/dev/iscsi/*/*/<vol_id>')07:32
sorenHow does that sound?07:33
sorenTo me, it looks like the same sort of risks, only contained in Nova.07:33
vishysoren: although if we are globbing, we may just be able to do it without the udev rule07:34
sorenvishy: True.07:34
* soren takes a quick break07:37
* ttx waves07:38
*** littleidea has quit IRC07:39
vishysoren: we should probably just change it to glob, it will be easier to install07:49
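The glob approach vishy suggests might look like this; the /dev/iscsi/&lt;host&gt;/&lt;session&gt;/&lt;target&gt; layout and the helper name are assumptions based on the discussion above, not nova code:

```python
import glob
import os

def find_volume_dev(vol_id, root='/dev/iscsi'):
    """Locate an attached volume device by globbing for its target name
    (e.g. vol-xxxxxx) instead of relying on a udev-assigned name.
    Assumes the target name is unique across hosts/sessions, as it is
    for nova-created volumes."""
    matches = glob.glob(os.path.join(root, '*', '*', vol_id))
    if not matches:
        raise LookupError('volume %s not found under %s' % (vol_id, root))
    # multiple sessions may expose the same target; any match will do
    return matches[0]
```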
*** brd_from_italy has joined #openstack07:54
ttxmtaylor: re: "bzr bd -S --builder='debuild -S -sa'", you can actually do "bzr bd -S -- -sa"07:57
mtaylorttx: oh yeah? that must be reasonably new...07:58
mtaylorttx: but that makes me happy07:58
* ttx uses bzr bd -S -- -k$DEBKEY all the time in sponsoring07:58
mtaylorsweet07:58
sorenvishy: Can you access that information as non-root?08:00
sorenvishy: (/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/session*/targetname that is)08:01
* ttx finished IRC backlog, now dives into the email pile08:05
vishysoren: not sure we need to test this, there may be one we can glob somewhere08:07
anotherjessesoren: about to write an email about moving to migrations for schema changes... changing the schema is causing issues with deployed nova(s).08:08
anotherjesseplus it gives a place to add some indexes08:09
anotherjesseany thoughts?08:09
ttxanotherjesse: Daviey wanted to work on that: https://blueprints.launchpad.net/nova/+spec/db-versioning-and-migration08:10
anotherjessettx: do we have an eta on that?08:10
ttxanotherjesse: last time I talked to him, cactus08:11
anotherjessettx: I think that is way too late08:11
ttxanotherjesse: I agree it's starting to become painful08:11
anotherjessettx: since if we are saying that people should deploy bexar, we are already running into issues with the changes to ids08:11
anotherjessettx: I know termie is interested in implementing it08:11
anotherjessettx: perhaps if they talk together it can get done soon?08:12
anotherjessettx: is Daviey working on other stuff?08:12
anotherjessettx: the name of BexarDBMigrations is a little misleading too then ;)08:12
ttxanotherjesse: sounds like a good idea. I wouldn't be opposed to the idea of landing enough of it in bexar so that we can have migrations working in early cactus08:12
ttx(if you see what I mean)08:13
ttxanotherjesse: I do not control daviey's assignments anymore. We'll have to ask him08:13
ttxhe certainly was interested in seeing that implemented08:13
*** calavera has joined #openstack08:13
ttxbut I don't think he would mind collaborating on impl08:13
anotherjessettx: cool - we can do migrate up for bexar08:13
anotherjesseand then add down for cactus08:14
ttxright, people deploying Bexar should be able to migrate up to early cactus08:14
ttxeven if that just means adding version number 0 somewhere08:14
anotherjesseyeah - schema table with a value of 008:15
ttxwe have to start somewhere anyway08:15
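The "version 0" bootstrap being discussed can be sketched with a one-row version table, which is roughly what sqlalchemy-migrate keeps for a repository; the table and function names here are illustrative:

```python
import sqlite3

def init_schema_version(conn):
    """Create the version table if needed and seed it at version 0."""
    conn.execute(
        'CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)')
    if conn.execute('SELECT COUNT(*) FROM schema_version').fetchone()[0] == 0:
        conn.execute('INSERT INTO schema_version (version) VALUES (0)')

def current_version(conn):
    return conn.execute('SELECT version FROM schema_version').fetchone()[0]

def migrate_up(conn, steps):
    """Apply each pending (version, sql) migration step, in order."""
    for version, sql in steps:
        if version > current_version(conn):
            conn.execute(sql)
            conn.execute('UPDATE schema_version SET version = ?', (version,))
```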
anotherjesseit might help with the issue of the fighting auto-creation of tables when you launch all the workers at the same time08:15
anotherjesseI think there has been some work on that but I didn't read that patch :(08:15
ttxDaviey will be on central time this week, that might help in getting higher bandwidth with termie08:16
ttxcool, newlog2 was merged08:19
*** rcc has joined #openstack08:23
*** adiantum has joined #openstack08:41
*** adiantum has quit IRC08:47
ttxsoren: Didn't we fix https://bugs.launchpad.net/bugs/700867 already ?08:53
uvirtbotLaunchpad bug 700867 in nova "Metadata is broken in a default install" [Medium,In progress]08:53
*** leted has quit IRC08:58
sorenttx: I thought you did.09:02
anotherjessettx: if only we had a cluster that we could test against :(09:02
anotherjessein our tests it didn't work09:02
* ttx searches09:03
ttxhttps://bugs.launchpad.net/nova/+bug/683541 09:03
uvirtbotLaunchpad bug 683541 in nova "Metadata service unreachable from instance" [Medium,Fix committed]09:03
ttxsoren: In fact, you fixed it: https://code.launchpad.net/~soren/nova/lp68354109:03
sorenOh, I thought this was the one about the metadata response being a traceback due to missing stuff.09:04
ttxno, it's the 127.0.0.1 unrouteable09:04
ttxwtf09:05
sorenYeah, /me should read things more carefully09:05
ttxhm, vish's branch looks a bit outdated09:06
ttxor...09:07
ttxaaaaah09:07
ttxthe newlog2 branch reintroduced the bug09:10
ttxhttp://bazaar.launchpad.net/~vishvananda/nova/lp700867/revision/515.4.1#nova/flags.py 09:11
* ttx checks if nothing else was accidentally undone in that branch09:13
*** fabiand_ has joined #openstack09:13
ttxno, the rest looks ok09:17
ttxnova-core: priority reviews:09:21
ttxhttps://code.launchpad.net/~nttdata/nova/live-migration/+merge/44940 09:21
ttxhttps://code.launchpad.net/~ntt-pf-lab/nova/ipv6-support/+merge/45228 09:22
*** zykes- has joined #openstack09:22
*** skrusty has quit IRC09:29
*** skrusty has joined #openstack09:41
*** reldan has joined #openstack09:48
vishyttx: it probably was done because of a circular import problem09:48
ttxvishy: oh!09:48
vishyttx: adding import utils to flags goes boom09:48
ttxvishy: well, your branch works around that well09:49
vishyttx: by the way trunk was completely hosed, we've been trying to fix all of the bugs09:49
*** anotherjesse has quit IRC09:50
ttxvishy: I saw that... I'll run a few tests today to see where we stand09:50
vishythere are a few we haven't proposed yet09:51
vishytrunk_safe is us trying to fix all of them09:51
ttxvishy: did you report them as bugs yet ?09:51
ttxor are busy fixing them all as you go ?09:52
vishyfixing as we go...there are a couple we haven't reported yet09:52
ttxvishy: ok, I'll keep that in mind in my testing09:52
ttxvishy/soren: we need to review the japanese branches early this week, since the TZ difference will trigger longer fixes09:53
ttxIt will be easy to rush the Ozone or Anso branches through at the end... but the Japanese ones will take longer to get fixes in, due to lack of -core devs in that TZ09:54
vishyttx: sure, we're trying to get a stable trunk so testing the branches actually means something09:55
ttxvishy: right :)09:55
ttxvishy: isn't it Sunday night where you live ?09:57
vishyttx: yes09:59
*** reldan has quit IRC09:59
*** anotherjesse has joined #openstack10:02
*** reldan has joined #openstack10:03
*** arthurc has joined #openstack10:06
*** reldan has quit IRC10:10
*** aimon has quit IRC10:12
*** aimon has joined #openstack10:12
*** reldan has joined #openstack10:13
*** trin_cz has joined #openstack10:15
*** aimon_ has joined #openstack10:15
*** aimon has quit IRC10:18
*** aimon_ is now known as aimon10:18
*** anotherjesse has quit IRC10:22
*** anotherjesse has joined #openstack10:24
*** reldan has quit IRC10:31
*** allsystemsarego has joined #openstack10:38
*** aimon has quit IRC10:38
sorenOdd. I feel somewhat reluctant to approve  https://code.launchpad.net/~soren/nova/iptables-security-groups/+merge/43767, because it fixes some bugs that I think some people may be depending on.10:40
sorenMeh. It's still early. We can fix whatever comes up.10:42
*** reldan has joined #openstack10:43
anotherjessesoren: ++10:44
*** reldan has quit IRC10:44
*** miclorb has joined #openstack10:46
*** anotherjesse has quit IRC10:49
openstackhudsonProject nova build #373: SUCCESS in 1 min 20 sec: http://hudson.openstack.org/job/nova/373/ 10:49
soren\o/10:49
*** MarkAtwood has quit IRC10:54
uvirtbotNew bug: #700974 in nova "_describe_availability_zones_verbose calls db.service_get_all and db.get_time. Neither exist." [Undecided,New] https://launchpad.net/bugs/700974 10:56
*** dizz has joined #openstack11:16
*** befreax has joined #openstack11:24
*** dizz has quit IRC11:24
*** aimon has joined #openstack11:27
*** aimon has quit IRC11:37
*** miclorb has quit IRC11:48
*** befreax has quit IRC11:54
*** dizz has joined #openstack12:07
*** aimon has joined #openstack12:12
sorenMan, it's taking a long time to review this stuff.12:14
openstackhudsonProject nova build #374: SUCCESS in 1 min 19 sec: http://hudson.openstack.org/job/nova/374/ 12:19
sorenOoh, what landed now?12:21
sorenAh. Bug fixes. Boring :)12:21
*** trin_cz has quit IRC12:24
*** aimon has quit IRC12:30
*** hggdh has quit IRC12:34
*** ctennis has quit IRC12:36
*** aimon has joined #openstack12:42
sorenPhew.12:42
*** littleidea has joined #openstack12:44
*** dizz is now known as dizz|away12:44
*** ctennis has joined #openstack12:55
*** ctennis has joined #openstack12:55
*** kashyapc has quit IRC13:06
*** ramkrsna has quit IRC13:15
*** trin_cz has joined #openstack13:15
*** gaveen has joined #openstack13:17
*** jfluhmann__ has joined #openstack13:20
*** jfluhmann has quit IRC13:24
*** hadrian has joined #openstack13:36
*** adiantum has joined #openstack13:42
*** jfluhmann__ has quit IRC13:46
*** cron0 has joined #openstack13:47
*** adiantum has quit IRC13:56
*** westmaas has joined #openstack13:57
*** cron0 has left #openstack13:58
ttxsoren: you might want to unfuck trunk by approving https://code.launchpad.net/~vishvananda/nova/lp699814/+merge/45627 13:59
* ttx kinda dislikes the new default logging format, makes our logs almost look like Eucalyptus ones.14:01
sorenttx: Approved.14:04
*** westmaas has quit IRC14:05
*** westmaas has joined #openstack14:08
ttxsoren: with current trunk I get an instance numbered "i-1"14:08
ttxnot sure that's by design14:09
*** nelson__ has quit IRC14:09
*** nelson__ has joined #openstack14:10
openstackhudsonProject nova build #375: SUCCESS in 1 min 22 sec: http://hudson.openstack.org/job/nova/375/ 14:14
*** ppetraki has joined #openstack14:14
*** befreax has joined #openstack14:18
*** befreax has quit IRC14:22
*** gondoi has joined #openstack14:26
*** westmaas has quit IRC14:30
*** deadestchicken_ has joined #openstack14:33
*** mdomsch has joined #openstack14:36
*** zul has joined #openstack14:39
ttxzul: o/14:44
zulhey ttx14:44
zulyou are missing the sprint/rally14:44
ttxzul: how is snowy Dallas ?14:44
zulttx: its armageddon according to the local tv news14:45
ttxzul: if only it could make Dallas fun14:45
zulttx: yeah people are apparently freaking out with the driving...which is fun to watch14:46
*** nelson__ has quit IRC14:46
*** f4m8 is now known as f4m8_14:47
*** tteikhua has joined #openstack14:47
openstackhudsonProject nova build #376: SUCCESS in 1 min 19 sec: http://hudson.openstack.org/job/nova/376/ 14:49
*** troytoman has joined #openstack14:51
*** dendro-afk is now known as dendrobates14:53
*** troytoman has quit IRC14:53
*** rnirmal has joined #openstack15:00
*** gaveen has quit IRC15:06
*** sparkycollier has joined #openstack15:09
jaypipeszul: there is a sprint in Dallas?15:13
zuljaypipes: yep15:13
*** kashyapc has joined #openstack15:14
jaypipeszul: /me feels left out :(15:15
annegentlezul: snowmageddon! send some down to Austin, it'll be cold enough tomorrow15:15
* soren calls it a day... at least until I get bored this evening and come back15:15
jaypipesheh15:15
zuljaypipes: heh come to work for canonical then ;)15:16
*** kashyapc has quit IRC15:16
*** kashyapc has joined #openstack15:16
jaypipeszul: well, we have enough snow up here in C-bus ;)15:16
zulannegentle: this is nothing compared to what i'm used to15:17
annegentlezul: oh yeah, where's your winter experience from? I moved to Texas from Ohio, but it was southern Ohio. Much more snow when I was a kid in northern Indiana. :)15:20
zulannegentle: im a canadian...2 seasons winter and construction :)15:20
annegentlezul: you win :)15:21
uvirtbotNew bug: #701055 in nova ""No instance for id X" error terminating instances, borks nova-compute" [Undecided,New] https://launchpad.net/bugs/701055 15:22
zulannegentle: i had to drop my son off at daycare, we walked there it was -30C with the windchill ;)15:22
*** hazmat has joined #openstack15:25
*** johnpur has joined #openstack15:26
*** ChanServ sets mode: +v johnpur15:26
*** zul has quit IRC15:36
*** tteikhua has quit IRC15:47
*** zul has joined #openstack15:49
*** hggdh has joined #openstack15:50
*** brd_from_italy has quit IRC15:50
ttxnova-core: please give some review love in priority to:15:57
ttxhttps://code.launchpad.net/~morita-kazutaka/nova/sheepdog/+merge/45093 15:57
ttxhttps://code.launchpad.net/~ntt-pf-lab/nova/ipv6-support/+merge/45228 15:58
*** fabiand_ has quit IRC16:03
*** jimbaker has joined #openstack16:04
*** guigui1 has quit IRC16:06
*** hggdh has quit IRC16:08
*** dragondm has joined #openstack16:12
*** jdarcy has joined #openstack16:13
*** rnirmal has quit IRC16:15
*** glenc_ has quit IRC16:21
*** dubsquared has joined #openstack16:21
xtoddxCan I get a review?  https://code.launchpad.net/~anso/nova/wsgirouter/+merge/45330 16:21
*** glenc has joined #openstack16:22
*** calavera has quit IRC16:32
*** kashyapc has quit IRC16:34
*** ibarrera has quit IRC16:43
*** westmaas has joined #openstack16:45
*** hggdh has joined #openstack16:46
*** kashyapc has joined #openstack16:51
*** Lcfseth has joined #openstack16:51
*** arreyder has quit IRC16:56
creihtmtaylor: around?17:00
mtaylorcreiht: you know it17:00
creihtletterj is reporting some issues trying to build packages with the debian branch for swift17:00
creihthe says he gets a conflict if he tries to merge with trunk17:01
mtaylorok. looking17:01
creihtand that he also sees some hudson errors?17:01
mtaylorhrm17:01
*** WonTu has joined #openstack17:04
*** WonTu has left #openstack17:04
*** dfg_ has joined #openstack17:06
mtaylorcreiht: ok. hudson job fixed17:06
mtaylorcreiht: me no see problems merging with trunk17:07
creihtk17:07
creihtthx17:07
* mtaylor upgrading hudson...17:09
dubsquaredmorning/afternoon everyone!  i have been keeping up to the second with the latest bug reports, but is this being addressed/anyone know the fix?  http://paste.openstack.org/show/453/ 17:10
*** elasticdog has quit IRC17:11
dubsquareds/have/havent17:11
*** openstackhudson has quit IRC17:13
*** openstackhudson has joined #openstack17:14
*** openstackhudson has quit IRC17:21
*** openstackhudson has joined #openstack17:22
*** elasticdog has joined #openstack17:23
dabocan someone who's familiar with the sqlalchemy layer explain line 748 of the instance_get_by_id() method of nova/db/sqlalchemy/api.py? http://paste.openstack.org/show/454/ 17:25
daboThat line appears to say that if you have access to deleted records, you will *only* get back deleted records. That certainly seems wrong to me; if you have access to deleted records, you should be querying all records, whether deleted or not.17:25
daboI have an instance in the db that's not deleted, but I can't get it using this method.17:25
jaypipesdabo: any idea on this? http://paste.openstack.org/show/455/ 17:27
dabojaypipes: looking...17:28
jaypipesdabo: as for the deleted thing... no, that code looks right to me. users should only see non-deleted records, and non-users should have to be checked to see whether they can see deleted records...17:28
dabojaypipes: re: euca2ools - I don't see a problem. Any chance you have multiple copies installed?17:29
jaypipesdabo: but I do see what you mean.... it should really be: if can_read_deleted(context): thequery.filter(deleted=deleted)...17:29
dabojaypipes: can you do 'import euca2ools' from within python17:30
jaypipesdabo: nope. isn't this lovely :)17:30
jaypipes(.nova-venv)jpipes@serialcoder:~/repos/nova/bug699654$ euca-version17:30
jaypipesTraceback (most recent call last):17:30
jaypipes  File "/usr/bin/euca-version", line 36, in <module>17:30
jaypipes    from euca2ools import Euca2ool, Util17:30
jaypipesImportError: No module named euca2ools17:30
*** rlucio has joined #openstack17:31
dabojaypipes: looks like either a) you have conflicting versions installed or b) your python pathing is hosed.17:31
dabotry: import sys; print sys.path17:31
dabojaypipes: re: query - the way it's written, if I'm an admin and I can see deleted records, it generates a filter of 'deleted = True'17:32
dabothat's not right17:32
jaypipesdabo: or it could be that euca2ools is crap. :) http://paste.openstack.org/show/456/17:32
jaypipesdabo: yes, you are right. that's incorrect, which is why I said it should be if can_read_deleted(context): blahj...17:32
dabojaypipes: here's what I get on maverick: http://paste.openstack.org/show/457/17:34
dabojaypipes: I'll enter a bug for the query stuff17:34
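The behaviour dabo describes, and the conditional-filter fix jaypipes suggests at 17:29 ("if can_read_deleted(context): ..."), can be sketched in plain Python. This is an illustrative stand-in for the SQLAlchemy query in nova/db/sqlalchemy/api.py, not the actual code: `can_read_deleted` mirrors the real helper, everything else here is hypothetical.

```python
# Sketch of the bug discussed above and its fix, using a plain list in
# place of the real SQLAlchemy query. Names are illustrative.

RECORDS = [
    {"id": 1, "deleted": False},   # a live instance
    {"id": 2, "deleted": True},    # a deleted instance
]

def can_read_deleted(context):
    # Stand-in for nova's helper: admins may see deleted records.
    return context.get("is_admin", False)

def get_by_id_buggy(context, instance_id):
    # Buggy behaviour: the filter becomes deleted == True whenever the
    # caller may read deleted records, so admins get *only* deleted rows.
    deleted = can_read_deleted(context)
    rows = [r for r in RECORDS
            if r["id"] == instance_id and r["deleted"] == deleted]
    return rows[0] if rows else None

def get_by_id_fixed(context, instance_id):
    # Fixed behaviour: only restrict to non-deleted rows when the caller
    # may NOT read deleted records; admins see everything.
    rows = [r for r in RECORDS if r["id"] == instance_id]
    if not can_read_deleted(context):
        rows = [r for r in rows if not r["deleted"]]
    return rows[0] if rows else None

admin = {"is_admin": True}
# The buggy version hides a live instance from an admin:
assert get_by_id_buggy(admin, 1) is None
# The fixed version returns both live and deleted rows to an admin:
assert get_by_id_fixed(admin, 1)["id"] == 1
assert get_by_id_fixed(admin, 2)["id"] == 2
```

This is the behaviour tracked as bug #701121 below ("Getting instances by ID when admin only returns deleted instances").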
jaypipesdabo: you're not in a virtualenv.17:34
*** adiantum has joined #openstack17:34
dabojaypipes: no, I'm not17:34
jaypipesdabo: it's messed up when in virtualenv.17:34
* jaypipes hates software that assumes something about the installed environment... grr.17:35
dabojaypipes: ah, didn't realize that17:35
daboyeah, looks like euca is crap17:35
jaypipesdabo: FYI, '/usr/lib/python2.6/dist-packages' is in my python path, and that is exactly where euc2ools is located :)17:36
dabojaypipes: then it's something wonky with the way virtualenv is segmenting the installed packages17:37
jaypipesdabo: it picks up other packages in that dir...17:38
dabojaypipes: virtualenv does some 'magic' so that when you're in one env, you don't see stuff installed into other envs.17:39
jaypipesdabo: not technically. what it does is allow you to *install* stuff into the virtualenv without affecting other envs. Your locally-installed stuff is still accessible, though. And euca2ools is locally installed...since it can't be either easy_installed or pip installed into the virtualenv (because it's apparently not packaged properly...)17:41
dabojaypipes: that's correct. I wasn't sure if you had installed it into a venv or not17:41
jaypipesdabo: no, I would have to do it manually, and I don't install anything into a virtualenv that cannot be pip or easy_installed.17:42
dabojaypipes: yeah, I didn't realize that euca couldn't be installed properly.17:42
*** reldan has joined #openstack17:46
*** zul has quit IRC17:46
*** zul has joined #openstack17:46
*** jdurgin has joined #openstack17:47
uvirtbotNew bug: #701121 in nova "Getting instances by ID when admin only returns deleted instances" [Undecided,New] https://launchpad.net/bugs/70112117:51
*** openstackhudson has quit IRC17:56
*** openstackhudson has joined #openstack17:57
*** joearnold has joined #openstack18:01
*** adiantum has quit IRC18:01
*** reldan has quit IRC18:03
*** reldan has joined #openstack18:04
*** maplebed has joined #openstack18:04
jaypipesdabo: euca2ools' Makefile *hardcodes* PREFIX as /usr/local. :(18:07
jaypipesdabo: had to download the tarball and edit the Makefile by hand to point PREFIX to my virtualenv...18:07
dabojaypipes: hardcoding is teh AWESOME!!18:08
*** adiantum has joined #openstack18:14
*** reldan has quit IRC18:14
*** Charlie__ has joined #openstack18:26
*** dendrobates is now known as dendro-afk18:27
*** deadestchicken_ has quit IRC18:28
*** daleolds has joined #openstack18:38
*** trin_cz has quit IRC18:41
*** hggdh has quit IRC18:42
*** arreyder has joined #openstack18:44
*** deadestchicken_ has joined #openstack18:46
*** opengeard has joined #openstack18:46
*** deadestchicken_ has quit IRC18:46
*** Charlie__ has quit IRC18:48
*** dendro-afk is now known as dendrobates18:50
sandywalshhey guys, question about nova/utils.py LoopingCall18:55
sandywalshI see that, if an exception occurs, it does a send_exception() via the eventlet Event18:56
*** Lcfseth has left #openstack18:56
sandywalshbut, in other places, like nova/virt/xenapi_conn.py _poll_task18:56
sandywalshthe inner function (the function called by LoopingCall) handles the exception18:57
sandywalshand calls send_exception18:57
sandywalshbut this is a problem18:57
*** adiantum has quit IRC18:57
sandywalshyou can't do two send_exceptions on an Event18:57
sandywalshI think we should remove the send_exception from utils18:57
sandywalshand assume the inner function will deal with problems18:57
sandywalsh(raising the highest fidelity exception)18:58
sandywalshthoughts?18:58
vishysandywalsh: looking at the code, since i'm not quite following how two exceptions could be sent19:02
sandywalshvishy, I'm still investigating as well, but it appears that LoopingCall catches all Exceptions, so anything thrown lower will get caught and re-raised.19:03
sandywalshfor example, the _poll_task method of xenapi_conn19:04
*** adiantum has joined #openstack19:04
*** mdomsch has quit IRC19:04
sandywalshwhich throws a XenAPI.Failure19:04
sandywalshhmm,19:04
sandywalshwait now, I could be wrong. send_exception doesn't raise19:05
sandywalshhang on ... I'll get back to you :)19:05
vishy_poll task seems a bit strange19:05
vishytermie wrote LoopingCall and can probably shed some light when he gets on19:06
vishysandywalsh: are you actually seeing an error?19:06
uvirtbotNew bug: #701164 in nova "Can't change project manager after creation of project." [Undecided,New] https://launchpad.net/bugs/70116419:06
sandywalshvishy, yup, I'll get you a paste19:07
sandywalshvishy, http://paste.openstack.org/show/458/19:07
sandywalshvishy, LoopingCall calls _poll_task periodically, but when an error occurs, we see the event.send() getting called twice (or more)19:09
*** adiantum has quit IRC19:10
sandywalshvishy, I think the LoopingCall is not terminating as it should19:10
sandywalshvishy, and the loop is continuing19:11
vishyyes19:11
sandywalshvishy, gonna try something (stand back, I'm going to try science)19:11
*** ewanmellor has joined #openstack19:12
vishyperhaps it should send self._running = False when an exception is hit?19:12
sandywalshvishy, yup ... call stop()19:12
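The fix sandywalsh and vishy agree on here, stopping the loop before reporting the failure so the eventlet Event fires at most once, can be sketched with a simplified LoopingCall. No eventlet is used; `FakeEvent` is a hypothetical stand-in that enforces the single-fire rule sandywalsh mentions ("you can't do two send_exceptions on an Event"), and the structure only paraphrases nova/utils.py.

```python
# Simplified stand-in for nova/utils.py LoopingCall, illustrating the
# fix discussed above: terminate the loop on the first exception so
# send_exception() fires exactly once.

class FakeEvent:
    """Single-fire event, mimicking the eventlet rule discussed above."""
    def __init__(self):
        self.sent = []
    def send(self, value):
        assert not self.sent, "Event already fired"   # the "no-no"
        self.sent.append(("ok", value))
    def send_exception(self, exc):
        assert not self.sent, "Event already fired"
        self.sent.append(("exc", exc))

class LoopingCall:
    def __init__(self, fn):
        self.fn = fn
        self.event = FakeEvent()
        self._running = False
    def start(self, max_iterations=5):
        self._running = True
        for _ in range(max_iterations):
            if not self._running:
                break                 # loop terminated; no second send
            try:
                self.fn()
            except Exception as exc:
                self.stop()           # the fix: stop before reporting
                self.event.send_exception(exc)
    def stop(self):
        self._running = False

calls = []
def flaky():
    calls.append(1)
    raise RuntimeError("boom")

lc = LoopingCall(flaky)
lc.start()
assert len(calls) == 1                 # loop stopped after the failure
assert lc.event.sent[0][0] == "exc"    # exception reported exactly once
```

Without the `self.stop()` call, the loop would re-run the failing function and hit `send_exception` a second time, which is the multiple-send error shown in the paste above; the merged fix appears in the build message at 21:29 below.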
ewanmellorIs jaypipes here?19:12
ewanmellorOr anyone who understands the venv that nova's run_tests.py uses?19:13
jaypipesewanmellor: you betcah.19:13
jaypipesewanmellor: betcha.19:13
ewanmellorSchweet.19:13
jaypipesewanmellor: ./run_tests.sh -V -f19:13
vishyjaypipes: ewanmellor is mot definitely not a betcah19:13
jaypipesewanmellor: will clear the venv and run tests in it...19:13
jaypipesvishy: :)19:14
jaypipesewanmellor: use -f when you change, say, the tools/pip-requires file.19:14
ewanmellorI want to write a unit test for our Nova->Glance integration.  The Nova code uses the Glance client code.19:14
ewanmellorYeah, I know the basics, more or less ;-)19:14
jaypipesewanmellor: k, and you want to install glance into the venv, right?19:14
ewanmellorI need to cross-reference the Glance client code from the Nova run_tests.19:14
ewanmellorYeah, basically.19:14
*** rnirmal has joined #openstack19:14
jaypipesewanmellor: well, tough luck.19:15
jaypipesewanmellor: hehe, just kidding :)19:15
ewanmellor:-)19:15
jaypipesewanmellor: kinda funny, I've been struggling with the same today (for i18n and other stuff)19:15
ewanmellorI wondered if you've done this already, as part of your glance-client-in-nova blueprint.19:15
jaypipesewanmellor: we need to get glance packaged in the same way nova is.. or verify if it is packaged at all...19:15
jaypipesewanmellor: I'm going to ask mtaylor and soren for some assistance there.  Technically, the step should just be:19:16
jaypipessource .nova-venv/bin/activate; easy_install glance19:16
jaypipesewanmellor: oh, and welcome back from "vacation", too ;)19:16
ewanmellorSo you're proposing using a prepackaged version of Glance when testing Nova?19:17
*** adiantum has joined #openstack19:17
ewanmellorI was thinking about having both source trees next to each other, and then referencing one from the other.19:17
jaypipesewanmellor: testing the nova-glance integration, yes.19:17
*** rnirmal_ has joined #openstack19:17
jaypipesewanmellor: that would be icky I think...19:17
ewanmellorYeah, that's why I came on to IRC -- because I knew *someone* would say that.19:18
jaypipesewanmellor: and not something that could be easily automated through, say, Hudson.19:18
jaypipesewanmellor: the easiest solution is just to ask mtaylor to do it. that solution is a good one.19:18
ewanmellorMy way though, if you need to make parallel changes to the Glance client and Nova, you can test them together.19:18
ewanmellorI'm not claiming to like the idea, BTW.19:19
ewanmellorMy branch is late for Bexar already.19:20
jaypipesewanmellor: I think there should be a way to basically, in your local glance branch, pack up a glance egg, then install that egg into the local nova venv via pip install /path/to/egg, but I haven't tested it yet...19:20
*** rnirmal has quit IRC19:21
*** rnirmal_ is now known as rnirmal19:21
jaypipesewanmellor: something like: cd glance-local; python setup.py build; cd ../nova-local; source .nova-venv/bin/activate; pip install ../glance-local/glance-0.1.egg19:21
jaypipesewanmellor: that's my thought, of course, completely untested...19:22
ewanmellorSo we would make it use easy_install by default, but then someone could install their own egg if they need to.19:22
jaypipesewanmellor: ya.19:22
jaypipesewanmellor: eventually, would be best to get to a point where we can have completely optional installations of things like Glance into a venv...19:22
ewanmellorSo all we need is for some kind soul to put a Glance package wherever it is that they go for easy_install to find them.19:23
jaypipesewanmellor: ya, PyPI.19:24
mtayloraroo?19:27
ewanmellormtaylor: Just the man!19:27
mtaylorjaypipes: oh - you need some deb packaging?19:27
jaypipesmtaylor: kindly.19:27
mtaylorjaypipes: /me puts on list19:27
jaypipeshehe, indeed :)19:27
ewanmellormtaylor: Can I help?19:28
ewanmellormtaylor: I really just need the Glance client code to end up on PyPI.19:28
mtaylorewanmellor: well, PyPI is an easier thing than .debs19:29
mtayloralthough we really should get both going19:29
mtaylorlooking19:30
mtaylorewanmellor: for pypi, you want to do:19:30
mtaylorpython setup.py register19:30
mtaylor(you only need to do this one, it registers the project)19:30
mtaylorand then when you want to make a release, you do:19:30
*** rcc has quit IRC19:31
mtaylorpython setup.py sdist bdist bdist_egg upload19:31
mtaylorewanmellor: you have to do the sdist/bdist commands on the same invocation as the upload, as upload will only upload artifacts created in the current iteration19:31
rlucioanyone seen this greenthreads error before?  http://paste.openstack.org/show/459/19:32
ewanmellormtaylor: Is there some "OpenStack LLC" username that I should be using, or can anyone just release anything?19:32
rluciorelated to eventlet i guess (austin release)19:32
dubsquaredLots of folks about now, I'll post this again if anyone cares to take a look:   http://paste.openstack.org/show/453/  :D19:32
creihtJordanRinke: btw, a doc bug has already been entered for that issue19:33
mtaylorewanmellor: pretty much anyone can release anything19:33
mtaylorewanmellor: there should probably be more consolidation of that related to pypi at some point19:33
mtaylorttx: ^^^^19:33
rluciodubsquared: did you try running iptables-restore -h ?19:33
ewanmellormtaylor: Is it normal just to release the whole thing as one package, or should I consider having a glance-client package separate from glance itself?19:34
rluciodubsquared: it looks like you have an old version of iptables-restore or something, that doesnt have the option nova needs (--icmp_type)19:34
dubsquaredYeah, 'iptables -h' spits out a usage howto19:35
mtaylorewanmellor: it's pretty normal for pypi to just have one thing19:35
mtaylorewanmellor: when we make .debs, we'll split things into other packages19:35
ewanmellormtaylor: OK, consider it done.  Thanks for your help.  Very useful.19:35
dubsquaredrlucio: —icmp_type isn't an option that it lists19:35
ewanmellorjaypipes: Thanks to you, too.19:35
mtaylorewanmellor: by glance-client, which bits are you wanting?19:35
dubsquaredrlucio: ii  iptables                        1.4.4-2ubuntu219:36
uvirtbotNew bug: #701176 in swift "Multi node doc missing install param" [Undecided,New] https://launchpad.net/bugs/70117619:36
mtaylorewanmellor: glance/client.py ?19:36
ewanmellormtaylor: The client-facing SDK.19:36
mtaylorewanmellor: ok. good. just wanted to make sure we were both talking about the same thing :)19:36
ewanmellormtaylor: Yeah, client.py, and whatever its dependencies are.19:36
sandywalshvishy, that was it.19:36
mtaylorewanmellor: I realized there was a small chance that you were wanting the files in bin/19:36
mtaylor:)19:36
jaypipesmtaylor: glance.client19:36
mtaylorgreat. that should fix you up then19:36
ewanmellormtaylor: Yeah, glance-dev might have been a better way to put it.19:37
mtaylorjaypipes: if you want to file a bug about making debs and assign it to me, there's less chance I'll forget about it19:37
jaypipesmtaylor: will do. cheers, and thx for all your help.19:37
rluciodubsquared: yea, you on lucid then?19:37
mtaylorjaypipes: my pleasure!19:37
dubsquaredrlucio: im going to try maverick...19:37
dubsquaredrlucio:  haha, that is correct19:37
rluciodubsquared: i just saw the same version on my machine.. looks  like a bug then, unless there is some backported version of iptables on the PPA we are supposed to use for lucid19:38
*** pandemicsyn has quit IRC19:39
dubsquaredinteresting…haven't filed a bug yet..have a few oddities ive run into…should look into that19:40
*** adiantum has quit IRC19:42
uvirtbotNew bug: #701180 in glance ".debs need to be created for glance.client and glance" [Low,Confirmed] https://launchpad.net/bugs/70118019:46
jt_zgAre there any advantages to running Swift on a specific distro? I know Gluster requires 64bit and seems to play nicer on Ubuntu.19:47
*** adiantum has joined #openstack19:49
*** dubs has left #openstack19:50
jaypipescreiht: see jt_zg ^^19:51
*** dubs has joined #openstack19:51
creihtjt_zg: well we run 64bit ubuntu server for cloud files19:55
jt_zgIs that due to ram considerations?19:56
jt_zgMy personal testing environment nodes have ~512Mb-1024, so I figured there was no sense in using the 64bit option19:57
jt_zgJust wondering if its a glaring oversight19:57
jt_zg*On my part :D19:57
creihtYeah it is mostly due to memory19:58
openstackhudsonProject nova build #377: SUCCESS in 1 min 23 sec: http://hudson.openstack.org/job/nova/377/19:59
openstackhudsonTarmac: Adds the requisite infrastructure for automating translation templates import/export to Launchpad.19:59
jt_zgGotcha. So Swift really doesn't care what its installed on...within reason. That's great to know19:59
jt_zgI just don't want to start scripting for automatic deployment on Debian then realize I take a performance hit unless I use CentOS, or some other crazy scenario20:00
creihtright20:00
creihtheh20:00
jt_zgI have nightmares!20:00
creihtyeah it is pretty agnostic, as long as you can get the dependencies20:00
jt_zgmakes sense20:00
jt_zgthanks again!20:01
ewanmellormtaylor: That glance packaging hasn't quite worked.  The tarballs all look good, but when I easy_install glance inside my nova venv, it decides to download the .linux-i686.tar.gz, not the plain .tar.gz, and then is surprised when setup.py isn't in there.20:02
*** adiantum has quit IRC20:03
mtaylorewanmellor: oh - hrm20:03
mtaylorewanmellor: perhaps we should only be doing setup.py sdist upload20:03
*** trin_cz has joined #openstack20:04
ewanmellormtaylor: I can delete the binary if you think that's the right thing to do.20:04
mtaylorewanmellor: do that20:04
mtaylorewanmellor: and the egg20:04
*** pandemicsyn has joined #openstack20:04
mtaylorewanmellor: and then next time leave the bdist references out of the command20:05
*** adiantum has joined #openstack20:08
*** ChanServ sets mode: +v pandemicsyn20:10
*** adiantum has quit IRC20:15
*** daleolds has quit IRC20:18
*** adiantum has joined #openstack20:21
creihtjt_zg: everything said, we haven't done any testing on 32bit linux, so take all the above with a grain of salt :)20:23
jt_zgAbsolutely. I'm just testing in 32bit. We'll be the real dev/production on 64bit systems20:24
sandywalshSomeone have 5 minutes for a simple review? https://code.launchpad.net/~sandy-walsh/nova/lp698123/+merge/4574520:25
*** joearnold has quit IRC20:28
*** nelson__ has joined #openstack20:28
*** adiantum has quit IRC20:28
nelson__annegentle: the help message for 'st' says "Cloud Files general documentation". Is that right?20:29
openstackhudsonProject nova build #378: SUCCESS in 1 min 21 sec: http://hudson.openstack.org/job/nova/378/20:29
openstackhudsonTarmac: Bugfix.20:29
annegentlenelson__: nope, that looks like an oversight20:32
*** hggdh has joined #openstack20:33
nelson__Okay, I'll fix it in my docfixes.20:33
annegentlenelson__: awesome, thanks. It could either say Swift general doc or OpenStack Object Storage general doc (which is a bit much)20:33
dubsquaredrlucio:  same issue in maverick, iptables 1.4.4-2ubuntu320:34
*** adiantum has joined #openstack20:34
xtoddxsandywalsh: can you put # TODO(sandywalsh): instead of just # TODO20:35
*** littleidea has quit IRC20:35
xtoddxsandywalsh: also a # NOTE(sandywalsh) might be good for the comment about breaking out the stop loop method as well20:35
nelson__annegentle: grep -r 'Cloud Files' . | wc -l       # gives me a count of 14.20:35
xtoddxsandywalsh: it looks good to me though20:35
nelson__annegentle: maybe replacing that name should be a separate project/patch?20:36
*** brd_from_italy has joined #openstack20:36
annegentlenelson__: ah yes perhaps so. Log a bug indicating 14 count20:37
sandywalshxtoddx, where's the todo?20:37
annegentlenelson__: and if you want a lot of info on st, see http://jbplab.com/post/1697289751/a-utility-for-the-openstack-object-store-swift if you haven't seen that already20:37
sandywalshxtoddx, will fix NOTE, thx20:37
annegentlenelson__: I have yet to fold that into the docs20:37
nelson__cool, thank, no, I hadn't.  BTW, if Rackspace claims a trademark on Cloud Files, it should be a little more careful with it.20:38
xtoddxsandywalsh: the TODO is on line 18 of the diff in launchpad20:38
* nelson__ takes off my trademark legal eagle hat.20:38
annegentlenelson__: you're quite right.20:38
xtoddxsandywalsh: "create fake SR record"20:38
sandywalshxtoddx, oh, bzr screwed up. That was old code. My code in that block ends @1420:38
*** pandemicsyn has quit IRC20:39
*** fitzdsl has quit IRC20:39
*** fitzdsl has joined #openstack20:39
xtoddxyea, it looked like a ninja-patch20:39
xtoddxbut i was willing to let it slide20:40
sandywalsh:) ... thanks for the review!20:41
sandywalshchange pushed20:41
*** littleidea has joined #openstack20:43
*** miclorb_ has joined #openstack20:44
*** littleidea has quit IRC20:47
*** ctennis has quit IRC20:52
*** littleidea has joined #openstack20:53
*** adiantum has quit IRC20:54
*** pothos_ has joined #openstack20:57
*** adiantum has joined #openstack20:59
*** pothos has quit IRC20:59
*** pothos_ is now known as pothos20:59
uvirtbotNew bug: #701216 in nova "When a floating IP is associated with an instance, describe_instances for ec2 fails" [Undecided,New] https://launchpad.net/bugs/70121621:01
creihtnelson__: yeah that must have been an area we missed when getting everything ready for open sourcing21:04
*** ctennis has joined #openstack21:05
*** damon__ has joined #openstack21:06
nelson__I figured. Since it's Rackspace's problem, I'll let them (you) fix it.21:10
creihthehe21:11
*** miclorb_ has quit IRC21:22
*** fabiand_ has joined #openstack21:25
*** miclorb has joined #openstack21:26
openstackhudsonProject nova build #379: SUCCESS in 1 min 22 sec: http://hudson.openstack.org/job/nova/379/21:29
openstackhudsonTarmac: xenapi_conn was not terminating utils/LoopingCall when an exception was occurring. This was causing the eventlet Event to have send_exception() called more than once (a no-no).21:29
openstackhudsonThis would have affected more than just pause/unpause, but any XenApi call that raised an exception.21:29
*** joearnold has joined #openstack21:30
sandywalsh\o/ thank you21:30
*** jaypipes is now known as jaypipes-afk21:38
*** westmaas has quit IRC21:43
*** adiantum has quit IRC21:43
*** littleidea has quit IRC21:48
*** littleidea has joined #openstack21:49
*** adiantum has joined #openstack21:49
*** critch has joined #openstack21:53
*** littleidea has quit IRC21:53
*** allsystemsarego has quit IRC22:00
uvirtbotNew bug: #701248 in swift "Refactor unit tests to us a fake logging class" [Low,Confirmed] https://launchpad.net/bugs/70124822:01
*** adiantum has quit IRC22:03
*** skrusty has quit IRC22:03
*** adiantum has joined #openstack22:04
*** jarrod has joined #openstack22:09
*** skrusty has joined #openstack22:09
*** MarkAtwood has joined #openstack22:19
dragondmhey'all, is  vishy about?22:22
* vishy is lurking22:22
dragondmah. good.   Just to let you know, I renamed that api class in the xs-console branch as you suggested.  If you could take a look at the xs-console merge prop whence ya get a moment, that'd be good.22:24
vishyok cool22:24
dragondmthanks22:24
*** adiantum has quit IRC22:25
*** littleidea has joined #openstack22:27
*** adiantum has joined #openstack22:30
*** kainam has joined #openstack22:30
*** arcane has quit IRC22:30
*** adiantum has quit IRC22:37
*** fabiand_ has quit IRC22:40
tr3buchethttps://blueprints.launchpad.net/nova/+spec/instance-state-arbiter/22:42
*** littleidea has quit IRC22:42
tr3buchetplease give feedback22:42
tr3bucheti could use it :D22:42
*** brd_from_italy has quit IRC22:47
*** adiantum has joined #openstack22:50
*** hggdh has quit IRC22:50
*** rossij has quit IRC22:53
*** littleidea has joined #openstack22:58
*** adiantum has quit IRC23:00
*** littleidea has left #openstack23:05
*** adiantum has joined #openstack23:05
*** spectorclan has joined #openstack23:07
*** schisamo has joined #openstack23:10
*** mray has joined #openstack23:11
*** adiantum has quit IRC23:12
uvirtbotNew bug: #701278 in nova "iptables is failing when lauching instances " [Undecided,New] https://launchpad.net/bugs/70127823:12
*** Glaurung has quit IRC23:12
*** ppetraki has quit IRC23:13
*** adiantum has joined #openstack23:16
*** adiantum has quit IRC23:24
*** adiantum has joined #openstack23:29
*** gondoi has quit IRC23:33
*** rnirmal has quit IRC23:38
*** mray has quit IRC23:38
spectorclanOpenStack Design Summit - Program Committee Announcement; thanks to all that volunteered - http://www.openstack.org/blog/2011/01/openstack-conferencedesign-summit-program-committee/23:38
*** adiantum has quit IRC23:40
*** adiantum has joined #openstack23:47
*** spectorclan has quit IRC23:48
*** mray has joined #openstack23:53
*** adiantum has quit IRC23:56