Wednesday, 2011-02-02

*** adiantum has joined #openstack00:00
*** Ryan_Lane is now known as Ryan_Lane|away00:02
*** j05h has joined #openstack00:03
<desai> (or for anyone else around) when i boot an instance, it only gets local addressing (a 10.0.X.X address). how do we get it to hand out public ips as well for nodes? 00:05
<desai> i've associated a floating block of ips with the system, but they don't seem to get used 00:05
<desai> I'm also getting these in the nova-network log:
*** grapex has joined #openstack00:10
*** drico has quit IRC00:11
*** zul has quit IRC00:11
*** westmaas has joined #openstack00:15
*** zul has joined #openstack00:18
*** KnuckleSangwich has joined #openstack00:21
*** KnuckleSangwich has quit IRC00:23
*** KnuckleSangwich has joined #openstack00:23
*** fysa has quit IRC00:25
*** devcamcar has left #openstack00:26
*** devcamcar has joined #openstack00:26
<masumotok> devcamcar: Hi, could you please tell me how to handle this error from openstack-dashboard? "ImproperlyConfigured: Error importing middleware django.middleware.csrf: No module named csrf" 00:26
<devcamcar> masumotok: what version of django are you using? 00:27
<devcamcar> masumotok: openstack-dashboard requires django 1.2.3 00:27
<devcamcar> or higher 00:27
<devcamcar> masumotok: if you look in the tools/ folder there is a pip-requires file, so you can easily install all the dependencies with pip 00:27
<masumotok> devcamcar: that's why... I'm using 1.1.1-2, ubuntu 10.04 00:27
<masumotok> devcamcar: thanks! I'm gonna try it right now. 00:28
<devcamcar> masumotok: recommend using pip install -r tools/pip-requires 00:29
<devcamcar> to install the dependencies 00:29
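devcamcar's version requirement can be sanity-checked before installing anything; a minimal sketch, where the 1.1.1 value is an assumption standing in for what Ubuntu 10.04's python-django package shipped:

```shell
# Hedged sketch: check an installed Django version against the 1.2.3 minimum
# devcamcar mentions. sort -V orders version strings, so if the minimum sorts
# first against what we have, the installed copy is new enough.
need="1.2.3"
have="1.1.1"   # assumption: roughly what Ubuntu 10.04's python-django shipped
if [ "$(printf '%s\n%s\n' "$need" "$have" | sort -V | head -n1)" = "$need" ]; then
  msg="django $have satisfies >= $need"
else
  msg="django $have is too old; run: pip install -r tools/pip-requires"
fi
echo "$msg"
```

With the assumed 1.1.1 this prints the "too old" branch, matching masumotok's situation.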
*** dirakx has quit IRC00:33
<desai> another dumb question: does all of the relevant network state (which nova-compute is the RoutingNode for a given security group) get tracked in the networks table of the database? 00:33
*** fysa has joined #openstack00:36
*** adiantum has quit IRC00:39
*** zul has quit IRC00:42
*** adiantum has joined #openstack00:46
*** londo has quit IRC00:48
*** ranger57 has joined #openstack00:48
*** ranger57 has quit IRC00:49
*** masumotok has quit IRC00:49
<kpepple> is anyone here a sqlalchemy-migrate expert? i'm running into some strangeness trying to write a migration ... 00:50
*** dragondm has quit IRC00:50
*** grapex has quit IRC00:51
*** joearnold has quit IRC00:57
*** littleidea has quit IRC00:59
*** MarkAtwood has quit IRC01:03
*** hadrian has quit IRC01:06
*** adiantum has quit IRC01:08
*** zul has joined #openstack01:09
*** Ryan_Lane has joined #openstack01:12
*** adiantum has joined #openstack01:13
<kpepple> never mind about the sqlalchemy-migrate request :) 01:13
*** grapex has joined #openstack01:17
*** kpepple has left #openstack01:19
*** zenmatt has quit IRC01:20
*** Ryan_Lane has quit IRC01:23
*** reldan has quit IRC01:34
*** adiantum has quit IRC01:37
*** adiantum has joined #openstack01:42
*** blakeyeager has quit IRC01:43
*** hazmat has quit IRC01:48
*** fcarsten has joined #openstack01:50
*** baldben has quit IRC01:50
*** dirakx has joined #openstack01:51
<fcarsten> swift + RAID 5/6: according to the docs, swift on RAID 5/6 is a bad idea. I'm currently facing some IT guys here saying "We only do RAID 5/6. It's better. The Swift documentation is wrong". Is there some material I can use to back up the swift documentation? 01:52
*** littleidea has joined #openstack01:52
*** pvo is now known as pvo_away01:54
*** kpepple has joined #openstack01:57
*** daleolds has joined #openstack02:00
*** hazmat has joined #openstack02:02
<kpepple> fcarsten: that will be tough ... when the storage guys get their hearts set on something (RAID S, anyone?), they can be quite insistent. You're going to need to walk them through the swift architecture. I used to explain it as "we're doing RAID, but at the server level, not at the disk level" 02:05
<notmyname> fcarsten: tell them that they can do RAID 5/6 if they want, but expect orders of magnitude worse performance :-) 02:07
<notmyname> fcarsten: swift does lots of small random reads and writes, which is the pathological worst case for RAID 5/6 02:08
<desai> raid5 is also a terrible idea if you are using large disks 02:09
<notmyname> and if you loose a single node, rebuild times get horrible (on the order of weeks, with the size of volumes we were/are using) 02:09
<notmyname> even if you lose it ;-) 02:09
*** computr has quit IRC02:10
<fcarsten> notmyname: Problem is that I want it to be a success; they don't care. 02:11
<notmyname> then that is a difficult position to be in. 02:12
<kpepple> fcarsten: how about that it is 25% more expensive? 02:12
<notmyname> creiht: do we have any empirical data on the badness of RAID + swift? didn't we have a graph somewhere? 02:12
<desai> notmyname: you need to be really careful about making sweeping statements like that when you want to win over cynical IT staff 02:16
*** mray has joined #openstack02:16
<fcarsten> kpepple: since the RAIDed hardware already exists in abundance here, but they'd probably have to buy new hardware to go non-RAID, the 25% doesn't matter :-( 02:16
<desai> they are different design tradeoffs, that is all 02:16
<kpepple> fcarsten: true, true 02:17
*** zenmatt has joined #openstack02:17
<notmyname> desai: true. I wouldn't suggest using that ("well, fine, just do it and watch it be really bad") as an opening argument ;-) 02:17
<notmyname> RAID 5/6 is great in certain cases. it just happens to be bad for swift and doesn't really gain you much 02:18
<notmyname> RAID controllers in JBOD mode with a battery-backed cache are a great thing for swift, so already having the hardware may be a plus 02:20
<creiht> fcarsten: I think redbo has a graph somewhere 02:30
<fcarsten> creiht: that would be great. 02:30
<creiht> But it does turn out that data usage with swift is the pathological worst case for RAID 5 02:31
<creiht> namely a large number of random writes (usually in small chunks) 02:31
<creiht> If they absolutely have to have raid, then RAID 10 should be ok 02:32
<creiht> One of the test systems we had was a 24-disk raid 6 02:33
<creiht> on one drive failure, it was going to take 2-3 weeks to rebuild 02:33
<creiht> and in the degraded state, it really affected the performance of swift negatively 02:34
<creiht> If you imagine that in a largish cluster you can count on at least one failed drive every day, those 2-3 week rebuilds stack up pretty quickly 02:34
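creiht's 2-3 week figure is easy to sanity-check with back-of-envelope arithmetic; both inputs below are assumptions for illustration, not numbers from the log:

```shell
# Back-of-envelope version of creiht's rebuild estimate. Assumed inputs:
# a 2 TB drive, rebuilding at an effective 1 MB/s because the degraded
# array is still serving swift traffic while it rebuilds.
disk_mb=$(( 2000 * 1024 ))
rate_mb_s=1
days=$(( disk_mb / rate_mb_s / 86400 ))
echo "~${days} days to rebuild one drive"
```

Which lands right in the 2-3 week range he quotes for the 24-disk RAID 6 test system.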
<creiht> Even if the raids are in good health, with the average swift use case the raid performance was no better, and often worse, than the machine just writing to a single drive 02:35
<creiht> We really wanted to make RAID work, because that would have made the software much easier to write 02:36
<fcarsten> creiht: to simulate swift, what would be a suitable io size per read/write, and what read:write ratio? 02:37
<creiht> RAID 5/6 that is 02:37
*** kashyapc has joined #openstack02:37
<creiht> In our use case we are pretty write-heavy (I don't remember the exact ratio, but it is the largest chunk of requests) 02:37
<fcarsten> (apparently they have a tool that can simulate IO access behavior for benchmarking) 02:38
<creiht> We try to optimize writes as much as possible 02:38
<creiht> or I should say, given a choice, we try to optimize write over read performance 02:38
<creiht> fcarsten: The best thing to do would be to set up a small cluster of, say, 5 machines, try both ways, and do some performance testing 02:39
*** ctennis has joined #openstack02:39
*** ctennis has joined #openstack02:39
<creiht> Since we do not have RAID to fall back on, swift does the following to work around failure 02:40
<creiht> 1. we store 3 replicas of all data (objects, containers, accounts) in swift 02:40
<creiht> 2. replication is always running across the cluster 02:41
<fcarsten> creiht: Is it possible to make a rough estimate of what numbers in their simulator would give representative results for swift usage? They want to know 3 things: 02:42
<fcarsten> 1) read/write ratio, e.g. 70:30 or 50:50, ... 02:42
<creiht> 3. When drive failure is detected, replication will push the data that was supposed to be on that drive to handoff nodes 02:42
<fcarsten> 2) size of io - e.g. 4k, 8k, ... 02:42
<fcarsten> 3) number of parallel streams / users 02:43
<creiht> those handoffs are spread evenly across the cluster, so it happens very quickly, and is more efficient the larger the cluster is 02:43
<creiht> 4. When an object PUT comes in, at least 2 replicas have to be written successfully, and if one of the main replica nodes is down, it will write one copy to a handoff node (which gets replicated back as soon as the device is back up) 02:44
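creiht's point 4 describes a 2-of-3 write quorum; a minimal sketch of that rule, with the per-replica outcomes invented for illustration:

```shell
# Sketch of the write rule in creiht's point 4: a PUT succeeds only if at
# least 2 of the 3 replica writes succeed (a downed primary is covered by
# writing to a handoff node instead). Outcomes below are made up.
ok=0
for outcome in success success handoff-success; do
  case "$outcome" in *success) ok=$(( ok + 1 )) ;; esac
done
if [ "$ok" -ge 2 ]; then
  echo "PUT succeeds ($ok/3 writes)"
else
  echo "PUT fails ($ok/3 writes)"
fi
```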
<creiht> fcarsten: the answer to those is going to depend on your use cases 02:45
*** littleidea has quit IRC02:45
<notmyname> fcarsten: those numbers would be very dependent on your users. would you use a swift cluster as a backup target or as a CDN origin (or something in between)? 02:45
<creiht> fcarsten: and all that said, simulation is one thing, but real-world tests are completely different 02:46
<fcarsten> creiht: mostly probably around things like: sensors acquire data (from a small temperature reading up to an MRI scan in size) and store it in swift. Backend processes download/stream data from swift and do some processing on it. 02:46
<creiht> fcarsten: so you would want to figure out the average size of those data points, and how many / how often are collected 02:47
<fcarsten> creiht: Yes, simulation goes only so far, but if it shows bad performance in their simulator it makes it easier for me to convince them not to use RAID 5/6 :-) 02:47
<creiht> and then how often they are processed 02:47
<creiht> once you have that data, though, you should be able to create a pretty good estimate of the variables they are asking for 02:48
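The estimate creiht suggests — deriving fcarsten's three simulator inputs from guesses about the sensor workload — can be sketched like this; every number below is an invented placeholder, not a figure from the discussion:

```shell
# Turn workload guesses into the numbers fcarsten's IT folks asked for.
sensors=200                # assumed sensor count (one writer stream each)
writes_per_sensor_min=6    # assumed: one reading every 10 seconds
reads_per_min=400          # assumed backend download/processing rate
writes_per_min=$(( sensors * writes_per_sensor_min ))
total=$(( writes_per_min + reads_per_min ))
write_pct=$(( 100 * writes_per_min / total ))
echo "read:write ratio ~ $(( 100 - write_pct )):${write_pct}"
echo "parallel streams ~ ${sensors} writers plus the processing readers"
```

The io-size question is the one creiht can't answer generically, which is why he suggests running one simulation with small files and one with large.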
<fcarsten> creiht: avg size is very hard to guess since the potential range in our application is large, from a few bytes to gigabytes; and I can't predict which will make more use of the service. 02:49
*** vvuksan has quit IRC02:49
<creiht> fcarsten: then it might make sense to run two simulations, one with smaller files and one with a larger average file size 02:50
<fcarsten> creiht: I was under the impression that swift does small reads/writes independent of the size of the stored files? 02:51
<creiht> fcarsten: it writes the whole file to disk in one piece 02:52
<notmyname> yes, but as it gets the data it fsyncs, so there are some smaller writes 02:52
<creiht> Our use cases happen to give us smallish files (sorry I can't give you a real answer there :/) 02:52
<fcarsten> creiht: so if the distribution of files and access is similar to files normally found on a hard disk, swift would not perform worse under RAID 5 than plain storage on RAID 5 disks would? 02:53
<notmyname> fcarsten: but remember that another big factor is the size of the volumes and the rebuild times 02:53
<creiht> And the very poor performance when the raid array is in a degraded state 02:54
*** dendrobates is now known as dendro-afk02:54
<fcarsten> So, if they currently use RAID 5 to store data and we are considering swift as an alternative, running swift on raid 5 will not incur additional performance penalties due to raid 5? 02:54
<fcarsten> ... sorry, have to go ... thanks a lot! 02:55
<notmyname> if everything is working correctly (no dead drives, etc), probably 02:55
<fcarsten> I'll read any replies when I get back. thanks. 02:55
<creiht> fcarsten: That is hard to say, but I guess possibly 02:57
<creiht> Though I would argue that you would not be making very good use of the hardware :) 02:57
<creiht> fcarsten: that was a fairly simple test of blasting writes to the partition as fast as could be done, in a manner similar to how swift does it: a 24-drive raid 6 volume vs. 1 hard drive 02:59
<creiht> heh, and redbo just updated it to show a raid 0 test as well :) 03:00
<creiht> But like I said, there is nothing like testing the real thing. We went back and forth between raid and non-raid several times 03:01
*** mray has quit IRC03:05
<redbo> RAIDs 3 through 6 are always going to be slow, no matter what SAN vendors say. Now, they may be fast enough, but the rebuild times when you do lose a drive are terrible. You'll probably lose another drive before it completes. 03:06
*** miclorb has quit IRC03:08
<redbo> So... not generally a fan of parity, other than that the algorithms are pretty cool. 03:08
*** zx225 has joined #openstack03:08
*** miclorb_ has joined #openstack03:08
*** zx225 has quit IRC03:09
*** miclorb__ has joined #openstack03:11
*** miclorb_ has quit IRC03:14
*** mray has joined #openstack03:19
*** iammartian has joined #openstack03:22
*** BK_man has quit IRC03:22
*** mray has quit IRC03:26
*** littleidea has joined #openstack03:28
<desai> so, another probably dumb networking question: i've set up a series of networks using VlanManager, which are used for VMs; when i start an instance, both addresses show up as being in one of the 10.0.X.0/24 networks in the network pool. I've tried adding floating addresses to the system as well, but they don't seem to be used. I'm likely doing something wrong; any pointers? 03:28
*** dendro-afk is now known as dendrobates03:36
*** adiantum has quit IRC03:39
*** littleidea has quit IRC03:44
*** adiantum has joined #openstack03:44
*** kpepple has left #openstack03:49
*** mdomsch has joined #openstack03:54
*** littleidea has joined #openstack03:56
*** jsgotangco has joined #openstack04:01
*** jsgotangco has quit IRC04:08
*** jsgotangco has joined #openstack04:08
*** RJD22 is now known as RJD22|away04:08
*** jsgotangco has quit IRC04:08
*** jsgotangco has joined #openstack04:09
*** baldben has joined #openstack04:09
*** bwalker7125 has quit IRC04:10
*** fcarsten has quit IRC04:14
*** RJD22|away is now known as RJD2204:14
*** hadrian has joined #openstack04:20
*** adiantum has quit IRC04:23
*** fcarsten has joined #openstack04:27
*** adiantum has joined #openstack04:35
*** maplebed has quit IRC04:37
*** kashyapc has quit IRC04:41
*** ctennis has quit IRC04:42
*** sateesh has joined #openstack04:45
*** desai has quit IRC04:51
*** kpepple_ has joined #openstack04:55
*** kpepple has joined #openstack04:56
*** kpepple_ has left #openstack04:56
*** pvo_away is now known as pvo05:02
*** kpepple has quit IRC05:06
*** kpepple has joined #openstack05:07
*** kpepple has quit IRC05:14
*** kpepple has joined #openstack05:15
*** kpepple has quit IRC05:19
*** kpepple has joined #openstack05:19
*** pvo is now known as pvo_away05:21
*** kpepple has left #openstack05:23
*** kpepple_ has joined #openstack05:23
*** kpepple_ has quit IRC05:23
*** kashyapc has joined #openstack05:25
*** kpepple has joined #openstack05:28
*** dirakx has quit IRC05:28
*** kpepple has left #openstack05:30
*** iammartian has quit IRC05:43
*** BK_man has joined #openstack05:48
*** f4m8_ is now known as f4m805:50
*** kpepple has joined #openstack05:52
*** fcarsten has quit IRC05:53
*** kpepple has left #openstack05:57
*** kpepple1 has joined #openstack05:59
*** kpepple1 has left #openstack06:00
*** kpepple has joined #openstack06:04
<openstackhudson> Project nova-tarmac build #57,733: FAILURE in 37 sec:
*** thimble has joined #openstack06:13
*** zul has quit IRC06:13
<openstackhudson> Yippie, build fixed! 06:15
<openstackhudson> Project nova-tarmac build #57,734: FIXED in 2.8 sec:
*** notmyname has quit IRC06:22
*** notmyname has joined #openstack06:23
*** ChanServ sets mode: +v notmyname06:23
*** omidhdl has joined #openstack06:24
*** mdomsch has quit IRC06:37
*** adiantum has quit IRC06:39
*** adiantum has joined #openstack06:39
*** hadrian has quit IRC06:40
*** daleolds has quit IRC06:45
*** jfluhmann__ has joined #openstack06:52
*** jfluhmann_ has quit IRC06:55
*** guigui has joined #openstack07:13
*** grapex has quit IRC07:22
*** adiantum has quit IRC07:27
*** miclorb__ has quit IRC07:31
*** adiantum has joined #openstack07:33
*** guynaor has joined #openstack07:34
*** guynaor has left #openstack07:34
*** miclorb_ has joined #openstack07:48
*** londo has joined #openstack08:01
*** jsgotangco has quit IRC08:02
*** rcc has joined #openstack08:02
<ttx> creiht: ack 08:04
*** miclorb_ has quit IRC08:07
*** miclorb_ has joined #openstack08:07
*** ccustine has quit IRC08:07
*** miclorb has joined #openstack08:08
*** miclorb_ has quit IRC08:13
*** littleidea has quit IRC08:18
*** adiantum has quit IRC08:25
*** adiantum has joined #openstack08:30
*** befreax has joined #openstack08:35
*** omidhdl has quit IRC08:37
*** calavera has joined #openstack08:40
*** Nacx has joined #openstack08:41
*** adiantum_ has joined #openstack08:50
*** adiantum has quit IRC08:51
*** ramkrsna has joined #openstack08:56
*** ramkrsna has joined #openstack08:56
*** omidhdl has joined #openstack09:16
*** arthurc has joined #openstack09:17
*** londo has quit IRC09:29
*** reldan has joined #openstack09:30
*** jakedahn has quit IRC09:31
*** londo has joined #openstack09:35
*** metoikos has joined #openstack09:36
*** tomo_bot has quit IRC09:39
*** akaii has joined #openstack09:39
*** adiantum_ has quit IRC09:40
*** omidhdl has quit IRC09:41
<akaii> is anyone here still awake? or is everyone asleep? 09:42
<alekibango> akaii: most people will come here in some 5-7 hours... but there are some... try asking a real question... 09:43
<akaii> i was just wondering where the end-user API is documented 09:43
<alekibango> which api do you mean? eucatools? 09:45
<kpepple> akaii: which API? nova (compute) supports openstack and EC2. 09:45
<akaii> the openstack api 09:46
*** adiantum_ has joined #openstack09:46
<akaii> the one modeled after rackspace? 09:46
<alekibango> akaii: i didn't use that one much :) but i will try this week 09:47
<akaii> i see... but is the api documented anywhere? 09:47
<kpepple> i think here -
*** omidhdl has joined #openstack09:49
<kpepple> but the openstack interpretation/implementation docs are
<alekibango> kpepple: ty 09:49
<akaii> wondering, since swift and nova function differently, don't they have different interfaces? 09:51
<akaii> and openstack hasn't made any changes to rackspace's interface as of now? it would be alright to just follow rackspace's documentation? 09:52
<akaii> ah, i missed the second link, i'll check that first 09:53
<RJD22> isn't the storage modelled after Rackspace and the Compute modelled after the NASA cloud source? 10:01
<kpepple> RJD22: yes, but compute offers both the ec2 api (which i believe nasa's cloud uses) and the openstack api 10:02
* RJD22 is waiting for tomorrow 10:03
<RJD22> I need to make a cloud for scalr >.> so they can implement the openstack API 10:04
<akaii> the wiki ( says that the openstack API is intended to be a superset of rackspace's API... 10:08
<akaii> the rest of this superset is still in development? it doesn't exist yet, just in the planning stages? 10:08
<kpepple> akaii: yes - planning documents (design proposals) are held here: (this is compute) 10:10
*** reldan has quit IRC10:11
<akaii> so currently, both the rackspace and amazon ec2 APIs are functional... i suppose there's a configuration switch to choose one of the two? 10:12
<akaii> and will swift have a completely different/separate api from nova? or will swift's api be a subset/superset of nova's? 10:13
<kpepple> akaii: yes, there are flags in /etc/nova/nova.conf that let you set the API end-points for both EC2 and Openstack. You then talk to the correct end point depending on which you've configured. 10:13
<kpepple> akaii: haven't done much work on swift ... but it appears to use the S3 API .. docs are at
*** arthurc has quit IRC10:15
*** omidhdl has quit IRC10:19
*** arthurc has joined #openstack10:21
*** kpepple has left #openstack10:24
*** miclorb has quit IRC10:26
*** adiantum_ has quit IRC10:28
*** adiantum_ has joined #openstack10:34
*** omidhdl has joined #openstack10:38
*** fabiand_ has joined #openstack10:40
*** colinnich has joined #openstack10:44
*** justinc has joined #openstack11:02
*** adiantum_ is now known as adiantum11:02
*** reldan has joined #openstack11:06
<justinc> I installed nova on two hosts, as described on the wiki. Instances can be run from either the controller or the compute node. But ssh and ping to VMs work from the controller node only. 11:09
<justinc> I'm not sure if I correctly understand (which is default). So I would like to ask if this behaviour is normal, e.g. part of the design? 11:11
<openstackhudson> Project dashboard-tarmac build #117: FAILURE in 4 hr 38 min:
<openstackhudson> Yippie, build fixed! 11:14
<openstackhudson> Project dashboard-tarmac build #118: FIXED in 2.5 sec:
*** zul has joined #openstack11:18
*** omidhdl has quit IRC11:19
<openstackhudson> Project nova build #485: SUCCESS in 1 min 30 sec:
<openstackhudson> Tarmac: Set FINAL = True in
*** adiantum has quit IRC11:31
*** adiantum has joined #openstack11:31
*** jaypipes has quit IRC11:37
*** adiantum has quit IRC11:38
*** fabiand_ has quit IRC11:42
*** adiantum has joined #openstack11:43
*** berendt has joined #openstack11:44
<berendt> I'm trying to run DevAuth, but I can't add a new user (only getting the error "Update failed: 503 Service Unavailable"). auth-server and proxy-server are up and running. does someone have an idea what could be wrong? 11:45
*** alekibango has quit IRC11:45
*** akaii has quit IRC11:46
*** alekibango has joined #openstack11:46
<berendt> looks like my account-server has a problem 11:47
*** Sebastien-Lo has left #openstack11:47
*** guigui has quit IRC11:53
*** akaii has joined #openstack11:54
*** akaii has joined #openstack11:54
*** justinc has quit IRC11:56
*** dendrobates is now known as dendro-afk12:23
*** justinc has joined #openstack12:40
*** dirakx has joined #openstack12:43
*** omidhdl has joined #openstack12:46
<uvirtbot> New bug: #711822 in openstack-devel "Only trunk docs are published" [Undecided,New]
*** adiantum has quit IRC12:48
*** adiantum has joined #openstack12:53
*** jaypipes has joined #openstack13:00
*** iRTermite has quit IRC13:11
*** grapex has joined #openstack13:14
<ttx> jaypipes: ping me when you arrive 13:15
*** desai has joined #openstack13:17
<zul> soren: do you want me to do an upload of glance when it's released tomorrow? 13:20
*** ctennis has joined #openstack13:22
*** ctennis has joined #openstack13:22
*** Winston has joined #openstack13:23
*** Winston is now known as Guest770613:23
*** hggdh has quit IRC13:26
*** Guest7706 has left #openstack13:26
*** omidhdl has quit IRC13:28
*** BK_man has quit IRC13:31
*** dprince has joined #openstack13:32
*** ramkrsna has quit IRC13:33
*** adiantum has quit IRC13:38
*** adiantum has joined #openstack13:43
*** KnuckleSangwich has quit IRC13:46
<uvirtbot> New bug: #711853 in swift "account-replicator fails while reading SQLite database file" [Undecided,New]
*** sandywalsh has quit IRC13:51
*** nelson has quit IRC13:52
*** sandywalsh has joined #openstack13:53
*** adiantum has quit IRC13:56
*** grapex has quit IRC13:56
*** sandywalsh has quit IRC13:58
*** sandywalsh has joined #openstack14:01
*** hggdh has joined #openstack14:02
*** adiantum has joined #openstack14:02
*** hggdh has quit IRC14:04
*** hadrian has joined #openstack14:06
*** littleidea has joined #openstack14:06
*** dirakx has joined #openstack14:10
*** mdomsch has joined #openstack14:13
*** pvo_away is now known as pvo14:18
*** nelson has joined #openstack14:20
*** gondoi has joined #openstack14:22
*** adiantum has quit IRC14:23
*** hggdh has joined #openstack14:23
*** lvaughn_ has joined #openstack14:24
*** dendro-afk is now known as dendrobates14:24
*** adiantum has joined #openstack14:25
*** lvaughn has quit IRC14:26
*** hggdh has quit IRC14:28
*** hggdh has joined #openstack14:28
*** dendrobates is now known as dendro-afk14:28
*** dendro-afk is now known as dendrobates14:30
*** hggdh has quit IRC14:32
*** sandywalsh has quit IRC14:33
*** hggdh has joined #openstack14:35
*** adiantum has quit IRC14:37
*** vvuksan has joined #openstack14:41
*** omidhdl has joined #openstack14:42
*** f4m8 is now known as f4m8_14:45
*** sandywalsh has joined #openstack14:46
*** iammartian has joined #openstack14:47
*** that__guy has joined #openstack14:47
*** adiantum has joined #openstack14:50
<desai> vvuksan: thanks for the ec2_dmz tip yesterday, it helped a bunch. (it didn't fix everything, but that got me unstuck) 14:50
*** westmaas_ has joined #openstack14:51
*** mdomsch has quit IRC14:53
*** pvo is now known as pvo_away14:58
*** pvo_away is now known as pvo14:58
*** iRTermite has joined #openstack15:02
<vvuksan> desai: excellent 15:03
*** littleidea has quit IRC15:04
*** imsplitbit has joined #openstack15:04
<vvuksan> desai: you may want to update the wiki documentation 15:04
<desai> yeah, i was planning to do that once everything is actually working ;) 15:04
<vvuksan> i'd do it as you discover things 15:05
<vvuksan> as you will inadvertently forget things 15:05
<desai> yeah, i know what you mean 15:05
<desai> does anyone know if it is safe to remove stale entries from the services table in the db? 15:07
<desai> i think things are getting slowed down by old references to nova-compute instances that aren't up any longer 15:07
*** imsplitbit has quit IRC15:07
*** imsplitbit has joined #openstack15:08
*** justinc has quit IRC15:08
*** hggdh has quit IRC15:11
*** hggdh has joined #openstack15:13
*** rnirmal has joined #openstack15:15
*** grapex has joined #openstack15:20
*** dendrobates is now known as dendro-afk15:21
*** mray has joined #openstack15:21
*** dendro-afk is now known as dendrobates15:22
*** adiantum has quit IRC15:23
*** nelson has quit IRC15:24
*** aliguori has quit IRC15:26
<desai> nova-network seems to be spinning and kicking out this traceback:
<desai> does it ring any bells for anyone? 15:30
*** Ryan_Lane has joined #openstack15:30
<ttx> creiht: any known swift bug you want to mention in ? 15:30
<ttx> jaypipes: ping 15:31
<creiht> ttx: Not that I know of 15:31
<ttx> creiht: ok 15:31
<ttx> nova-core: please review for correctness 15:32
*** mray1 has joined #openstack15:35
*** nelson has joined #openstack15:35
*** adiantum has joined #openstack15:35
*** blakeyeager has joined #openstack15:36
*** mray has quit IRC15:37
*** troytoman has joined #openstack15:38
*** mray1 is now known as mray15:38
*** abecc has joined #openstack15:39
*** tomo_bot has joined #openstack15:42
*** grapex has quit IRC15:43
*** dendrobates is now known as dendro-afk15:44
*** ivan has quit IRC15:45
*** Isvara has joined #openstack15:45
*** ivan has joined #openstack15:50
<vishy> desai: that is a bug 15:52
<vishy> desai: you can fix it by manually adding an SNATTING chain 15:53
*** adiantum has quit IRC15:53
<vishy> sudo iptables -t nat -N SNATTING 15:53
<vishy> sudo iptables -t nat -A POSTROUTING -j SNATTING 15:53
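vishy's two commands can be wrapped into one reusable snippet; this sketch dry-runs by default, since creating chains needs root and a real nova network host:

```shell
# vishy's workaround: recreate the SNATTING chain nova-network expects in
# the nat table after iptables has been flushed. IPT="echo iptables" makes
# this a dry run that only prints the commands; set IPT="sudo iptables" to
# apply it for real on the network host.
IPT="echo iptables"
out=$(
  $IPT -t nat -N SNATTING
  $IPT -t nat -A POSTROUTING -j SNATTING
)
echo "$out"
```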
*** aliguori has joined #openstack15:54
<ttx> vishy: I put a note in the release notes about single interfaces and FlatDHCPManager, please check it makes sense 15:58
*** adiantum has joined #openstack15:58
*** j05h has quit IRC15:58
<vishy> ttx: the bug desai is hitting we need to document or fix 15:58
<ttx> desai: could you file a bug at ? 16:01
<vishy> ttx: where can I find the release notes? 16:01
<vishy> ttx, desai: i'll file the bug, i just made a fix 16:01
*** thimble has quit IRC16:01
<ttx> vishy: ok 16:01
<vishy> ttx: the note in the release notes looks fine 16:02
<ttx> vishy: Please detail in which case that would trigger, since nobody else has hit it yet 16:02
<vishy> ttx, yes 16:02
<vishy> (it is on restart of nova-network, if iptables have been flushed and a floating ip is assigned) 16:03
<vishy> I'll put a workaround in as well 16:03
<ttx> ok, then I propose to document it 16:03
*** dragondm has joined #openstack16:03
<ttx> mtaylor: up yet? 16:05
*** ctennis has quit IRC16:06
*** sateesh has quit IRC16:06
*** adiantum has quit IRC16:08
*** dragondm has quit IRC16:08
*** dragondm has joined #openstack16:09
*** littleidea has joined #openstack16:09
*** adiantum has joined #openstack16:13
*** j05h has joined #openstack16:15
*** dendro-afk is now known as dendrobates16:16
<desai> vishy: can i get a pointer to the fix? 16:16
<desai> oh wait 16:16
<desai> nm, read the scrollback now 16:16
<vishy> bug 711948 16:17
<uvirtbot> Launchpad bug 711948 in nova "nova-network crashes on restart with floating ips assigned" [Medium,In progress]
<desai> vishy: thx 16:17
*** kirkland has joined #openstack16:17
<vishy> desai: np 16:19
*** Ryan_Lane has quit IRC16:20
<that__guy> vishy, ttx: applied the fix you gave to desai and got this:
<that__guy> vishy, ttx: should this be filed as a bug, or does our network table need to be reinit'd? 16:21
<uvirtbot> New bug: #711948 in nova "nova-network crashes on restart with floating ips assigned" [Medium,In progress]
*** Ryan_Lane has joined #openstack16:23
<Ryan_Lane> is there any documentation for multi-zone configurations? 16:23
<Ryan_Lane> and if I don't use multi-zone when I start, will it be difficult to transition later? 16:23
*** kpepple has joined #openstack16:24
*** westmaas_ has quit IRC16:29
*** RJD22 is now known as RJD22|away16:30
*** blakeyeager_ has joined #openstack16:32
<vishy> that__guy: how did that happen? 16:34
*** baldben has quit IRC16:34
*** blakeyeager has quit IRC16:35
<mtaylor> ttx: yup 16:35
<that__guy> vishy: stopped nova-network, added the snatting chain, restarted, and ran euca-run-instance 16:35
<vishy> that__guy: looks like you had an instance already running and you blew away the networking tables? 16:35
<vishy> that__guy: or you have multiple databases 16:36
<desai> vishy: we've got a database that has a day's worth of weird info in it from debugging 16:36
<desai> should we drop the network table and re-create it? 16:36
*** berendt has quit IRC16:37
*** blakeyeager_ has quit IRC16:37
<vishy> desai: sure, but if so, you have to destroy all the running instances 16:37
<desai> we can do that 16:37
<desai> we're still trying to get things working reliably 16:37
<vishy> that might be best 16:37
<desai> so we kill all of the instances that we can 16:37
<vishy> you're definitely in some weird middle state at the moment 16:37
<desai> there are a few zombies kicking around still 16:37
<desai> yeah, it definitely looks like that 16:37
<vishy> virsh list 16:38
<desai> on which node? 16:38
<vishy> virsh destroy X 16:38
<vishy> on all of the compute nodes 16:38
<vishy> to kill instances that are running but somehow disappeared from the db 16:38
<desai> we have the opposite 16:38
<desai> we have things in the db that aren't running 16:38
<vishy> oh, in that case 16:38
<desai> at one point, we had scaled up the number of nova-compute nodes 16:39
<desai> and it was making debugging much harder 16:39
<vishy> ah i see, and euca-terminate won't work because the compute host doesn't exist anymore 16:39
<vishy> you can manually just set those to deleted=1 16:39
<desai> in the db instances table? 16:40
<colinnich> creiht, notmyname: Hi. Got 2 problems with RC (revision 206). While account authentication seems fine and actual requests are ok, none of the swauth- commands work. All return 400/Bad Request. 16:42
<colinnich> Other problem is all the services log everything twice into syslog, and proxy-server twice into /var/log/swift/proxy.log 16:42
*** ciswrk has joined #openstack16:42
<colinnich> I know logging changed, so hopefully that's a config thing 16:43
*** omidhdl has quit IRC16:43
<notmyname> colinnich: the second issue sounds like a syslog-ng config issue 16:43
<colinnich> notmyname: storage nodes are doing it too, and they don't have syslog-ng installed 16:44
*** adiantum has quit IRC16:44
<colinnich> notmyname: It doesn't seem to be *everything*. I'll paste something 16:45
*** grapex has joined #openstack16:45
*** blakeyeager has joined #openstack16:46
<vishy> desai: yes 16:46
<colinnich> notmyname: ok, it is - but it's weird. The last line in syslog is always a single, with the duplicate appearing whenever the next thing is output, which may be many seconds later 16:46
<desai> vishy: ok, got them 16:46
<desai> vishy: so which tables should i empty out? networks, and what else? 16:47
*** MarkAtwood has joined #openstack16:47
<vishy> desai: fixed_ips and floating_ips 16:48
<colinnich> notmyname: doesn't look like a problem in the code, more of a system thing 16:48
<vishy> and recreate networks and floating ips 16:48
<vishy> via nova-manage 16:48
<colinnich> notmyname: I also did an apt-get upgrade today 16:48
<desai> vishy: and finish it off with a nova-network restart? 16:50
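The cleanup vishy describes (mark zombie rows deleted, empty the ip tables, recreate via nova-manage) can be illustrated on a throwaway sqlite3 database; a real nova database is typically MySQL and its instances table has far more columns than this sketch:

```shell
# Illustration only: mark instances from a vanished compute host deleted=1,
# the way vishy suggests, then count what is still visible. The schema and
# host names are invented for the example.
db=$(mktemp)
sqlite3 "$db" <<'SQL'
CREATE TABLE instances (id INTEGER PRIMARY KEY, host TEXT, deleted INTEGER DEFAULT 0);
INSERT INTO instances (host) VALUES ('gone-node'), ('gone-node'), ('live-node');
UPDATE instances SET deleted = 1 WHERE host = 'gone-node';
SQL
live=$(sqlite3 "$db" "SELECT COUNT(*) FROM instances WHERE deleted = 0;")
echo "live instances: $live"
rm -f "$db"
```

After that, per the conversation, the fixed_ips and floating_ips tables get emptied and the networks recreated with nova-manage.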
*** adiantum has joined #openstack16:51
*** iRTermite has quit IRC16:52
vishydesai: we had a nova-manage command at some point to clean up zombie instances like that...perhaps it never made it into trunk16:52
desaii suspect we might be writing a few things like that in the next few weeks16:53
*** nelson has quit IRC16:54
ttxmtaylor: ignore me, got jaypipes16:54
desaihm, looks like i can't hit the metadata service from the nodes again, but the nat rule looks ok16:54
vishyis it getting hit? (iptables -t nat -L -n -v)16:55
vishycat /proc/sys/net/ipv4/ip_forward (on network host)16:56
desaithe latter command returns 016:56
vishyecho 1 into it16:56
vishyprobably should change sysctl to set it to 1 so it happens on reboot16:57
vishyedit /etc/sysctl.conf16:57
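vishy's reboot-safe fix above amounts to a one-line sysctl fragment (plain Linux, nothing OpenStack-specific):

```
# /etc/sysctl.conf on the network host -- persist IP forwarding across reboots
net.ipv4.ip_forward = 1
```

Apply it immediately with `sysctl -p`, or with the `echo 1 > /proc/sys/net/ipv4/ip_forward` already suggested above.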
*** ccustine has joined #openstack16:57
vishydid that fix it?16:58
desaithis is weird, since hitting the metadata service was working beforehand16:58
desaidoesn't look like16:58
desaii'm giving it a minute to retrans16:58
desaii don't think that it did16:58
vishypastie iptables -t nat -L -n -v16:58
desaii'm starting a fresh instance16:58
desaiwill paste in a sec16:58
desainew instance still failing16:59
vishyyou are getting leased messages in nova-network log?16:59
desaiyeah, leasing messages showing up as expected17:00
vishybut the prerouting rule isn't working...17:01
vishydid it associate a public ip on boot?17:02
colinnichcreiht, notmyname: scratch the swauth problem, that was me - dodgy auth url17:02
vishyor is that from a different instance?17:02
colinnichnotmyname: log problem goes away when I go back the code I was running this morning17:02
notmynamethe swift code? what versions before and after the problem? I'll do a diff17:02
desaivishy: euca-describe-instances only shows 10.X addresses17:03
colinnichnotmyname: yes, swift code. Good question.... I'll try and find out17:03
desaiso i don't think that it is associating a public address17:03
desaii suspect that address maybe from before17:03
*** befreax has quit IRC17:03
vishyah ok that must be leftover17:04
vishydesai: what instance are you running?17:04
desai10.0.0.4 and are both mine17:05
vishyi mean which image17:05
vishyis it an ami-tty?17:05
colinnichnotmyname: any way of finding out from the code? Failing that, I'll have to send you it17:05
vishyand you are getting leased messages for both of them? but metadata is failing?17:06
desaiit is timing out17:06
creihtcolinnich: huh... I just repeated what you are seeing17:06
vishyand cat /proc/sys/net/ipv4/ip_forward shows 1 now?17:07
vishyif you ifconfig on the network and compute host17:07
vishydo they both show traffic on vlan10017:07
notmynamecolinnich: swift/ (or import swift; swift.__version__)17:08
notmynamecreiht: what changed?17:08
creihtnotmyname: not sure, gholt fixed some logging problems that we had17:08
notmynamecolinnich: or bzr log, I guess17:08
creihtI'll have to look17:08
*** adiantum has quit IRC17:08
desaifishy: that is fine (it is vlan3010, due to local requirements); i can ping the instance from the node running nova-network, so the vlan is being brought up properly17:09
colinnichnotmyname: it doesn't have bzr history (I keep my own local repo of the code I am using) and __version__ = '1.1.0' :-(17:09
desaishouldn't any outbound traffic to the metadata service be hitting the PREROUTING DNAT rule for
desaivishy: sorry, somehow autocorrect got turned on in my im client, need to fix that17:10
colinnichnotmyname: would have been from around Jan 20th17:10
creihtcolinnich: yeah don't worry about that, we have duplicated the issue... looking into it now17:10
notmynamewe're looking in to it17:10
*** guigui1 has joined #openstack17:11
creihtand thanks for catching that17:11
vishydesai yes17:11
vishydesai: if you can ping you might wait for the metadata to fail and then ssh in17:12
vishyand see what is happening17:12
creihtcolinnich: would you mind posting a bug?17:12
colinnichcreiht: ok17:12
vishydesai: you could also try killall dnsmasq and restart nova-network17:12
vishydesai: possibly some settings changed and the old dnsmasq instances don't have the right settings?17:13
desaicould me17:13
desaicould be, i'll try that17:13
*** adiantum has joined #openstack17:13
desaivishy: still failing17:16
vishyok just as a sanity check17:16
vishythis was working previously, yes?17:17
desaithe curl works17:17
colinnichcreiht: Bug #71199517:17
uvirtbotLaunchpad bug 711995 in swift "all components output duplicate logging" [Undecided,New]
creihtcolinnich: thanks17:18
vishyok so for some reason the 169 req isn't making it17:18
vishydesai: if you wait for metadata to time out you should be able to ssh in to the instance17:18
desaiso it looks like it is hitting the PREROUTING rule, but not making it through to the subsequent DNAT rule17:18
desaiok, i'll ssh in when it become available17:18
vishywhen you do, try pinging then try to wget
vishyyou might have to do some tcpdumps on various interfaces to figure out where it is getting dropped17:21
vishyit looks like the packet is not making it to the network host for some reason17:21
desai10.0.0.1 will ping; i tested that out on the last instance before the restart (and is needed for the ssh to work in the first place)17:21
vishyotherwise it would be hitting17:22
vishyis it possible you have another host claiming
*** adiantum has quit IRC17:22
desaipossible, let me run a quick check17:22
vishylike an old network host that is running dnsmasq?17:22
desaii went through and killed them all17:22
desaii'll check the arp table when the node is up17:22
*** lvaughn_ has quit IRC17:23
vishycuz killing nova-network doesn't remove the bridge with the ip or kill dnsmasq17:23
desaii see a few old interfaces kicking around on former network hosts17:24
*** lvaughn has joined #openstack17:24
desaibut not
*** dendrobates is now known as dendro-afk17:25
*** adiantum has joined #openstack17:25
uvirtbotNew bug: #711995 in swift "all components output duplicate logging" [Critical,Confirmed]
vishyodd.  Gotta head in to the office.  BB in 30 min.  I'll keep thinking about what else could be going wrong.  Some instances (desktop ubuntu for example) try to arp for 169.254... This fails, but can be fixed by adding 169.254 to the network host.  I don't think i've ever seen ami-tty arp for the address though17:28
*** blueadept has joined #openstack17:28
*** blueadept has joined #openstack17:28
vishyyou could add the 169 address to the network host and see if that helps but i think it is probably something else17:28
*** guigui1 has quit IRC17:28
vishyip addr add scope link dev <eth where vlans are added>17:30
desaicool, thanks, i'll keep digging17:30
*** hggdh has quit IRC17:31
*** hggdh has joined #openstack17:32
*** colinnich has quit IRC17:32
*** adiantum has quit IRC17:33
*** pvo is now known as pvo_away17:33
*** dendro-afk is now known as dendrobates17:33
desaioh, i think that i figured it out17:33
*** MarcMorata has joined #openstack17:34
desaiit looks like the traffic is being natted (though left on 3010) because nova-network used to run on the host running nova-compute17:34
*** adiantum has joined #openstack17:34
jaypipessirp-: option groups merged into lp:glance/cactus. (with merge comment: "Merge Rick's Super-Cool Option Group Extravanganza.")17:38
*** MarcMorata is now known as Seoman17:38
*** Seoman is now known as MarcMorata17:38
sirp-jaypipes: woot17:39
jaypipessirp-: and re: the migrations patch... patch looks great, but wondering if we can verify it with some sort of test?17:40
*** joearnold has joined #openstack17:42
sirp-jaypipes: hmm that could be tricky since it's really just a cli wrapper on sqlalchemy-migrate, no real logic to speak of17:43
desaivishy: that seems to have done the trick17:44
jaypipessirp-: yeah, I know, which is why I wasn't *demanding* a test case, just asking what your thoughts were on it...17:46
*** adiantum has quit IRC17:47
*** piken has quit IRC17:48
*** adiantum has joined #openstack17:49
*** pvo_away is now known as pvo17:52
*** pvo is now known as pvo_away17:52
*** Isvara has quit IRC17:52
*** blakeyeager has quit IRC17:58
*** adiantum has quit IRC18:00
*** daleolds has joined #openstack18:02
*** pvo_away is now known as pvo18:02
*** maplebed has joined #openstack18:02
*** maplebed has joined #openstack18:03
*** adiantum has joined #openstack18:05
*** tmarble has left #openstack18:08
*** tmarble has joined #openstack18:09
*** dendrobates is now known as dendro-afk18:09
*** RJD22|away is now known as RJD2218:11
vishydesai: cool18:13
desaii think that there is one question left, and we might be good18:13
vvuksani'm tempted to go out and jump in the snow cause I nearly got my own custom RHEL5 image going on openstack18:14
blueadeptanyone know the names of any companies currently using openstack compute as the baselayer for their vm offering?18:14
desaiwhen we'd run instances with euca, it would auto-associate an external ip if one was available18:14
desaithat doesn't seem to happen automagically with nova18:14
desaishould it?18:14
desaior is there something that we need to tell our users to do differently?18:14
vishyvvuksan: it is a bit tricky, i got centos working ok so it can be done18:14
vishydesai: no there is no auto-associate18:15
vvuksanvishy: last issue I'm trying to resolve is that virtio doesn't work18:15
mrayvishy: is there a list of images to feed into nova's /var/lib/nova/images somewhere? I'm digging through your setup recipe in your Chef repo18:15
*** calavera has quit IRC18:15
mraylike some public ones for testing?18:15
vishyvvuksan: did you mkinitrd?18:16
vvuksanvishy: inside the image ?18:16
vishymkinitrd --with virtio_pci --with virtio_blk --with virtio -f /boot/initrd-$(uname -r) $(uname -r)18:16
vishyi had to do that (and if you are using separate kernel and ramdisk, copy it out of the image18:17
vishyi think you also need to make sure that ahci is modprobed on boot if you want volumes to work18:18
vishymray: i was just grabbing ami-tty18:18
desaivishy: i think that we're nearly there18:20
vishymray: there isn't really a default list but i use :nova => { :images => [""] }18:20
desaiwhen i try adding an external address, i get the following trace from nova-network18:20
*** nelson has joined #openstack18:20
desaii'm not sure why it thinks that it needs to add the address to that vlan18:21
vishydesai: you need to specify your public interface18:21
vishyin flags18:21
desaiin the floating create?18:21
vishy--flags.DEFINE_string('public_interface', 'vlan1',18:22
vishyin nova.conf18:22
vishyotherwise it doesn't know where to put the ips18:22
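As a sketch, the flag vishy quotes would land in nova.conf like this — the gflags-style syntax matches the Bexar-era packages, but the interface name is site-specific (desai's setup uses vlan3010 rather than the vlan1 default):

```
# /etc/nova/nova.conf -- tell nova-network which interface carries floating IPs
--public_interface=vlan1
```

Set it to whatever interface actually faces your public network before restarting nova-network.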
mrayvishy: just making minor changes as I get a feel for what all the recipes and roles do, replacing shell calls with Chef resources18:23
vishymray there is a maverick one also there:
*** adiantum has quit IRC18:24
vishymray: good deal, my most up-to-date stuff is in the devpackages branch btw18:24
mrayoh, crap18:24
mraywell, I actually want to make a "Bexar" release first rather than target the build tools18:25
mrays/build tools/dev builds/18:25
vishythe differences are mostly to support bexar18:25
vishypackages is off of an older release about 3 weeks ago18:25
mrayok, thanks for the heads-up18:25
mraya little visualization I made to keep things straight in my head18:26
vishythose roles were made by adam a long time ago18:27
desaivishy: once i add that to the config, then things seem to work more correctly, though i'm not sure the nat is completely working properly18:27
vishyI have been using the recipes directly18:27
desaiif i try connecting to port 22 on the external ip, i seem to get the management node18:27
vishy(that's why nova network and nova scheduler aren't included in the roles, they didn't exist then18:27
mrayyeah, I'm doc'ing them as I figure out what's going on18:27
vishyand the nova::mysql and nova::rabbit are new as well18:28
vishydesai: from where?18:28
desaifrom the management node, but also from the outside18:29
vishyfrom the mgmnt node that is expected18:29
vishybecause local traffic doesn't use natting tables18:29
vishyoutside should work though18:29
*** adiantum has joined #openstack18:29
desaioh, that makes sense18:29
vishyalthough you have to authorize 2218:29
desaiyeah, i already did that18:29
vishyor it will be blocked on the host18:29
mrayok, I'll update my fork to go off the devpackages branch. I'm thinking I should just call mine a 'bexar' branch since all I want to do is maintain a stable release for now18:30
vishymray: the only differences between devpackages and bexar is that devpackages expect an /etc/default/nova-common file to be created18:30
vishyso the devpackages branch creates it18:30
vishybut there shouldn't be any problem creating it anyway18:31
*** MarkAtwood has quit IRC18:31
vishy(+ there is a bugfix in devpackages that actually makes flatdhcp work properly)18:31
vishyit is a little broken if you don't have a spare interface with no ip in bexar18:32
* mray away18:32
desaihm, it looks like euca-describe-instances is consistently failing with this trace:
*** ctennis has joined #openstack18:34
*** ctennis has joined #openstack18:34
vvuksanvishy: you da bomb18:36
vvuksanvishy: mkinitrd works :-)18:36
desaisorry, not describe-instances, describe-addresses18:36
vvuksani must run out and shovel my driveway now :-)18:36
vvuksanthat's the only way to celebrate ;-)18:37
vishydesai: are you using packages?18:37
desaiyeah, from the anso apt source18:37
vvuksanvishy: thanks a lot18:37
vishythat was fixed a while ago, but after packages18:37
desaii don't mind hacking up the code on the head node until we get newer packages on everything18:38
vishydesai: ok it is a pretty easy fix18:39
vishydesai, just need to change 'ec2_id' to 'id'18:39
desaiok, one sec18:39
*** adiantum has quit IRC18:41
desaigreat, that fixed it18:41
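The one-token fix vishy describes can be applied with sed. The snippet below only demonstrates the substitution on a throwaway stand-in file — the real target file and its surrounding code in the Bexar packages are not shown in this log, so the path and line content here are made up:

```shell
# Demonstrate the 'ec2_id' -> 'id' rename on a stand-in file; the actual
# file lives somewhere under nova/ in the installed package.
printf "instance['ec2_id']\n" > /tmp/describe_addresses_snippet.py
sed -i "s/'ec2_id'/'id'/" /tmp/describe_addresses_snippet.py
cat /tmp/describe_addresses_snippet.py   # -> instance['id']
```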
*** hub_cap has joined #openstack18:41
desaiso i think that we have some routing issues to overcome with our net folk, but it looks like everything is working now18:41
vishydesai: we should be updating packages soon18:41
desaivishy: thank you so much for all of the help18:41
desaivishy: cool, we'll upgrade when that is ready18:42
vishydesai: yw18:42
*** ctennis has quit IRC18:42
*** colinnich has joined #openstack18:44
vvuksannow quick desai document everything :-)18:44
* desai grins18:45
desaii think that a set of details for a multi-node system network would be helpful18:45
desaii can write that up18:45
*** Isvara has joined #openstack18:46
*** adiantum has joined #openstack18:46
*** jfluhmann_ has joined #openstack18:52
*** rabbityard has joined #openstack18:53
*** kpepple has left #openstack18:53
rabbityardkpepple: concall ?18:53
*** rabbityard has quit IRC18:54
*** pvo is now known as pvo_away18:55
*** jfluhmann__ has quit IRC18:56
*** littleidea has quit IRC18:57
*** adiantum has quit IRC18:57
*** arthurc has quit IRC18:58
*** kashyapc has quit IRC18:59
*** rnirmal_ has joined #openstack19:00
*** pvo_away is now known as pvo19:02
*** adiantum has joined #openstack19:02
*** rnirmal has quit IRC19:03
*** rnirmal_ is now known as rnirmal19:03
*** joe123 has joined #openstack19:06
that__guyvishy: what has to be done to get m1.small and m1.large instances launched19:07
that__guyvishy: they're currently just hanging in 'pending' status19:07
that__guyvishy: m1.tiny instances launch just fine19:07
*** adiantum has quit IRC19:08
*** grapex has quit IRC19:08
*** omidhdl has joined #openstack19:08
vishythat__guy: hmm it should just work19:08
vishythat__guy: do you get an error in nova-compute?19:09
*** colinnich has left #openstack19:09
*** hub_cap_ has joined #openstack19:09
vishythat__guy perhaps there is an unmet dependency?19:09
*** hub_cap has quit IRC19:10
*** hub_cap_ is now known as hub_cap19:10
Ryan_Lanethat__guy: does your compute node have enough free memory to launch an instance of that size?19:10
*** adiantum has joined #openstack19:10
that__guyryan: yes, it has plenty19:10
Ryan_Laneand enough free storage space?19:11
vvuksanvishy: here is the doc :-)
that__guyah, it's the partitioning on the nodes...19:12
*** ctennis has joined #openstack19:12
*** ctennis has joined #openstack19:12
vvuksanvishy: do you recommend including any of the other virtio drivers like balloon ?19:12
*** Ryan_Lane is now known as Ryan_Lane|food19:13
*** colinnich has joined #openstack19:15
*** grapex has joined #openstack19:15
*** baldben has joined #openstack19:16
*** miclorb has joined #openstack19:16
*** reldan has quit IRC19:18
*** ctennis has quit IRC19:19
*** laurensell has left #openstack19:21
*** aimon has joined #openstack19:21
*** RJD22 is now known as RJD22|away19:23
*** fabiand_ has joined #openstack19:23
*** RJD22|away is now known as RJD2219:24
vishyvirtio: yes i think balloon is needed too19:24
vishyvvuksan: ^^19:24
*** baldben has quit IRC19:25
*** BK_man has joined #openstack19:25
vvuksanvishy: how about ring ?19:25
vvuksani suppose I should add net as well19:25
vishydon't know about ring...19:25
vishyi do net because virtio net is much faster19:25
vishythere is no logic to set it properly in libvirt.xml yet, but that will come soon19:26
*** baldben has joined #openstack19:26
vvuksanadded it to the Wiki19:26
*** adiantum has quit IRC19:27
*** adiantum has joined #openstack19:33
*** nelson has quit IRC19:34
*** mtaylor has quit IRC19:37
*** kbringard has joined #openstack19:38
*** burris has quit IRC19:38
*** littleidea has joined #openstack19:38
*** MarcMorata has quit IRC19:40
*** mtaylor has joined #openstack19:42
*** ChanServ sets mode: +v mtaylor19:42
*** adiantum has quit IRC19:43
jaypipesvishy: hey. vagrant looks interesting. however, if I read the instructions correctly, you are using Virtualbox to deploy VMs with Nova on them, which then deploys VMs somewhere else?  Can you help me wrap my head around how you are using vagrant? Thanks!19:44
vishyjaypipes that is it exactly19:44
*** bcherian has joined #openstack19:44
jaypipesvishy: ah, ok :)19:44
jaypipesvishy: cheers19:44
jaypipesvishy: I'm not going to comment any more on that particular merge req cuz I'm no expert in networking, and any input I had would be superficial :)19:45
jaypipesvishy: very MCEscher, btw ;)19:45
kbringardjaypipes: it's just turtles, all the way down :-)19:46
kbringardhey vishy, I have (what I hope is) a quick question19:46
*** rkstager has joined #openstack19:46
termieheya, what are the valid characters in a bzr tag/branch name?19:46
jaypipeskbringard: :)19:46
kbringardI create a project and assign it an admin19:46
kbringardthen I create a new user, who isn't an admin, and add them to said project19:47
kbringardbut when I try to launch an instance in that project as the non-admin user, it's giving me a 40119:47
jaypipestermie: hmm, good question... I think alphanum, hyphen, dot...19:47
jaypipestermie: whatcha trying? :)19:47
*** baldben has quit IRC19:47
termiejaypipes: i just want a full list, bzr allows tilde and git does not, for example19:47
jaypipestermie: ah, I see... I'll see what I can dig up.19:48
termiegit has a page like this:
termiebut i couldn't figure out what to search for with bzr19:48
vishykbringard: nova-manage r a <user> sysadmin19:48
jaypipestermie: I'll see what I can find.19:48
termiejaypipes: thanks :)19:48
*** colinnich_ has joined #openstack19:48
vishykbringard: then nova-manage r a <user> sysadmin <proj>19:48
termie(the recent creation of a tag named '2011.1~rc1' caused me to have to filter something)21:49
kbringardok, so they have to be a sysadmin to launch instances19:49
*** adiantum has joined #openstack19:49
vishyaye, permissions are listed in nova/api/ec2/__init__.py19:49
* jaypipes heads over to #bzr...19:49
vvuksankbringard: now you have to document what vishy said19:49
kbringardvvuksan: hehe19:50
vvuksankbringard: no weaseling out19:50
devcamcarcreiht: are you around?19:50
creihtdevcamcar: what's up?19:50
devcamcarcreiht: howdy!19:51
devcamcarso we're rolling out swift for nebula this week in an alpha state for some users to start playing with, after going through a few experimental installs19:51
devcamcarit's been a few months since I did a roll out, just wanted to sync up with you and find out what i've missed :)19:51
creihtwhat version are you installing?19:52
devcamcarwe're rolling out 5 zones with 3 machines each19:52
devcamcarwhichever one you tell me to19:52
creihtI would highly recommend 1.219:52
devcamcarour goal is to create a stable small cluster and then add to it19:52
*** colinnich has quit IRC19:52
*** colinnich has joined #openstack19:53
devcamcari got a bit of a strange mix of capacity, what kind of issues will we have if the zones aren't the same size?19:53
creihtdevcamcar: how much of a delta?19:53
creihtWe haven't specifically tested zones that have a large variance in capacity19:54
*** colinnich_ has quit IRC19:54
*** colinnich has joined #openstack19:54
devcamcarcreiht: my problem is i got a mix of 12T and 24T storage blades dropped on me19:55
devcamcarso i have 7 x 24T and 8 x 12T19:55
devcamcarwhich isn't ideal19:55
*** kpepple has joined #openstack19:56
creihtso 2 zones will have 12T more than the others?19:56
devcamcarcreiht: yea, thats what it looks like19:56
*** kpepple has left #openstack19:56
*** kpepple has joined #openstack19:56
*** adiantum has quit IRC19:56
devcamcarcreiht: we won't be filling those to capacity before we expand though, so i'm not worried about filling it19:56
creihtk, I think you should be alright19:56
creihtThose 2 zones will just have excess capacity that can't really be used until you expand19:57
devcamcarcreiht: figured, but can't hurt to ask19:57
*** hggdh has quit IRC19:58
creihtdevcamcar: I need to go fix some power issues real quick, leave you questions here, and I will answer when I get back19:58
*** hggdh has joined #openstack19:58
*** reldan has joined #openstack19:59
devcamcarcreiht: that's all i have for now, thanks19:59
*** pvo is now known as pvo_away19:59
kbringardwhere can I get a list of the role types... I am looking in the nova admin manual, but it doesn't look like those are current20:01
*** adiantum has joined #openstack20:02
*** pvo_away is now known as pvo20:04
jaypipestermie: answer from Robert Collins: "tags are unicode pretty much anything except newlines and 0x00".20:04
*** Ryan_Lane|food is now known as Ryan_Lane20:05
kbringardin the tests I only see "netadmin", "sysadmin", and "cloudadmin"20:06
*** littleidea has quit IRC20:07
kpepplekbringard: i think the allowed_roles flag controls this … it's currently set to ['cloudadmin', 'itsec', 'sysadmin', 'netadmin', 'developer'] in trunk20:08
kbringardkpepple: ah, cool, thanks20:08
kpepplekbringard: i assume you can override this in /etc/nova/nova.conf20:09
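kpepple's guess above would look something like the fragment below. Hedged heavily: the flag name and default list are from trunk at the time, and whether a comma-separated override in nova.conf works exactly this way is an assumption, not something confirmed in this conversation:

```
# /etc/nova/nova.conf -- hypothetical override of the role whitelist
--allowed_roles=cloudadmin,itsec,sysadmin,netadmin,developer
```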
kbringardI was looking for the projectadmin role...20:09
kbringardI want a role that can fire up and terminate machines in the project, but doesn't have full access to everything20:09
kbringardkpepple: thanks20:11
kpepplekbringard: hmmm … when you create the project, you name the admin … but not sure what role that actually is20:11
kpepplekbringard: hold on, i'm hacking nova-manage right now20:11
*** littleidea has joined #openstack20:12
kbringardyea... I want to have a project admin, but then I want to create users who can act within that project20:12
kbringardshouldn't the developer role be able to fully manage its own instances and nothing more?20:13
colinnichcreiht: updated proxy to 207 and it looks fixed20:14
*** adiantum has quit IRC20:15
openstackhudsonProject swift build #188: SUCCESS in 31 sec:
openstackhudsonTarmac: Fix duplicate logging20:16
*** rnirmal has quit IRC20:18
*** rnirmal has joined #openstack20:19
*** adiantum has joined #openstack20:21
vvuksankpepple: BTW I got everything up and running on RHEL5 including repurposing existing guests20:21
*** fabiand_ has quit IRC20:22
creihtcolinnich: awesome, thanks!20:22
creihtcolinnich: let me know if you run into any further issues20:22
creihtdevcamcar: cool20:22
ttxcreiht: got a post-RC fix in ?20:23
creihtyes, it fixes the double logging issue20:23
ttxok, typically annoying20:23
ttxany reason why we didn't catch this before ? Recent regression ?20:24
ttxcreiht: ^20:24
*** rcc has quit IRC20:24
creihtIt was a bug from a pre-gamma fix that wasn't noticed20:25
ttxcreiht: ok, thanks for the details20:25
ttxcreiht: don't forget to commit the release version number before the end of your day.20:25
creihtWe might be able to have a functional test that checks for duplicate logging20:25
creihtoh yeah20:26
creihtforgot we have a 1 day RC20:26
ttxshould release tomorrow before you get up.20:26
creihtwill do20:26
ttxcreiht: the release notes are OK from your perspective ?20:27
creihtttx: yeah sent them to the team, and we are good20:27
ttxjaypipes: same question to you for the Glance part20:28
jaypipesttx: haven't gotten to that yet... I'll edit the wiki page shortly.20:31
*** lucasnodine has joined #openstack20:33
*** glenc_ has joined #openstack20:33
*** ciswrk has left #openstack20:34
*** miclorb has quit IRC20:35
*** glenc has quit IRC20:35
*** dubsquared has joined #openstack20:36
*** adiantum has quit IRC20:36
jaypipesttx: done:
*** grapex has quit IRC20:42
*** adiantum has joined #openstack20:42
*** hggdh has quit IRC20:46
*** pvo is now known as pvo_away20:48
*** dprince has quit IRC20:50
*** rnirmal has quit IRC20:51
*** reldan has quit IRC20:51
*** dprince has joined #openstack20:52
*** adiantum has quit IRC20:53
*** adiantum has joined #openstack20:54
*** ctennis has joined #openstack20:55
*** ctennis has joined #openstack20:55
*** hub_cap has quit IRC20:57
*** rnirmal has joined #openstack20:57
*** hub_cap has joined #openstack20:57
*** hggdh has joined #openstack21:01
*** dprince has quit IRC21:02
ttxjaypipes: ok, thanks21:06
*** miclorb has joined #openstack21:06
*** pvo_away is now known as pvo21:10
*** omidhdl has left #openstack21:12
*** adiantum has quit IRC21:17
devcamcarcreiht: i'm building our swift deployment using chef this go around.  opscode put up openstack recipes awhile ago, but i think they are out of date. if you have a minute can you look at these templates and let me know how out of date they are
creihtdevcamcar: unfortunately I'm not that familiar with chef, but given that they were committed in July, they are likely quite out of date21:21
creihtyeah, opening one of the config files has the old style config (which was swift 1.0)21:21
devcamcarcreiht: yea that was my assumption21:21
*** dirakx has quit IRC21:22
*** adiantum has joined #openstack21:22
*** rnirmal_ has joined #openstack21:29
zykes-anyone here familiar with openvswitch and kvm ?21:31
*** ctennis has quit IRC21:32
*** rnirmal has quit IRC21:32
*** rnirmal_ is now known as rnirmal21:32
*** bwalker7125 has joined #openstack21:34
termiejaypipes: hrm, disappointingish :/ sort of generates an unreconcilable incompatibility between git and bzr21:41
*** adiantum has quit IRC21:43
*** rkstager has quit IRC21:43
termiettx: is it possible to reset that tag to something without a tilde in it?21:47
*** littleidea has quit IRC21:47
termiettx: while i can hack the tool to replace tilde with something, it would be easiest if the repo simply used git-compatible names (tilde is the only one that looks like it would ever turn up)21:48
jaypipesttx: yeah, agreed with termie... tildes in branch names isn't a good idea for compat.21:48
*** adiantum has joined #openstack21:49
*** kpepple has left #openstack21:49
jaypipestermie: though... what tool would you have to hack?  just curious...21:49
termiejaypipes: i have git-bzr-ng working well again21:50
termiejaypipes: so i can manage things in git and import/export with bzr21:50
termiejaypipes: greatly improves my workflow21:50
termiejaypipes: but the addition of a tag with tilde in it today made the latest update invalid for git so i had to patch the bzr-fastimport library again21:50
termieand since i just finally yesterday got my bzr-fastimport patches accepted into trunk it is a little disappointing to be foiled by a name choice21:51
termieanything done to automatically escape invalid characters and de-escape them when pulling them back into bzr is going to be a hack at best21:51
*** joe123 has quit IRC21:52
jaypipestermie: gotcha, understood.21:53
termiettx: so, in summary, is it possible to roll back and just use an underscore? 2011.1_rc121:54
termieit will force anybody who has pulled today to grab a new branch, but there aren't any code changes there21:56
jaypipestermie: how about a hyphen instead of underscore?21:56
termiejaypipes: i don't care what it is as long as it is git compatible21:57
* jaypipes votes for hyphen21:57
termiejaypipes: usually if somebody uses a tilde it means they are trying to imply something different from a hyphen21:57
termiejaypipes: but i don't see a problem with a hypen21:57
jaypipestermie: agreed. it means "home directory" to me...21:57
jaypipesor "user-related"21:57
termieit means version number outside of actual app versioning to me21:58
termiewhich is appropriate for this21:58
termiebut underscore can be used first21:58
termieand tilde is the secondary outside of app number21:58
termieas in a package version on apt21:58
jaypipesah, yes21:59
termieis ttx the appropriate person to be asking about this, anyway?21:59
*** kpepple has joined #openstack22:00
termiewho has access to the repo22:00
annegentlejaypipes: do you have access to the /openstack-devel/ Launchpad page, and can I submit my documentation code there? Right now it won't let me - says it isn't set up for that.22:01
*** blakeyeager has joined #openstack22:02
termiettx: actually, the old releases used a hyphen anyway22:02
*** adiantum has quit IRC22:03
jaypipesannegentle: lemme check22:06
jaypipestermie: soren and ttx, yes22:06
jaypipestermie: ttx may be asleep. (he's in Paris, AFAIK)22:06
annegentlejaypipes: thank you for the help22:06
jaypipestermie: he's also been pretty ill.22:06
*** blakeyeager has quit IRC22:07
jarrodi can't figure out what to do, xen or kvm22:07
sorenjaypipes:  hm?22:07
sorenOh, the tilde?22:08
jaypipesannegentle: no branch is linked to openstack-devel's trunk...  I would ask mtaylor what the plan was for that... I'm not sure myself.22:08
termiesoren: heya, trying to get the tag 2011.1~rc1 renamed to 2011.1-rc1 in a history rewriting way22:08
jaypipessoren: ya, read back. termie's asking to replace the tilde with a hyphen. I think it's a reasonable request.22:08
sorenUh... Why?22:08
termiesoren: the existing names have used hyphens instead of tildes22:08
sorenIt's totally intentional.22:08
sorentermie: Yeah. That was a mistake.22:08
annegentleah ok. I'll ask him... can I link my branch to its trunk? Or is that a permissions issue that only mtaylor has the ability to do?22:08
jaypipessoren: git branch names cannot contain a tilde22:08
*** adiantum has joined #openstack22:09
sorenI'm not sure limitations in git should mandate our versioning scheme.22:10
termiesoren: i am asking that they do, it is just a simple name22:10
sorenWhy do you want a branch off an rc anyway?22:10
termiesoren: i don't, but i have to import the history22:10
termiesoren: and when it attempts to create an invalid name it errors22:10
sorenTe tilde was chosen because dpkg sorts foo~ before foo.22:11
termiesoren: this is the first invalid name in the history of the project22:11
termiesoren: who does it sort _22:11
termiesoren: s/who/how22:11
zykes-how would one look into implementing openvswitch in openstack?22:11
sorentermie: Sorry, I don't understand the question.22:11
sorentermie: Well,I think I do, but I also think I just answered it, so I must have misunderstood the question.22:11
termiesoren: does dpkg sort foo_ before foo. ?22:11
jaypipesannegentle: that's something you'll have to ask mtaylor. Typically, it is a team that owns the trunk branch (or Hudson owns it). I don't know what the intentions were in the case of openstack-devel22:12
sorentermie: underscores have very special meaning in dpkg.22:12
termiesoren: or foo- before foo.22:12
annegentlejaypipes: ok, got it. Thanks!22:12
sorentermie: No.22:12
mtaylorannegentle: I wasn't sure what was going to go in there22:12
sorentermie: Or... wait.22:12
sorentermie: no.22:13
jaypipeszykes-: Step one: Read :)22:13
sorentermie: The goal is to make the rc version sort as older than the final version.22:13
sorentermie: Only the tilde has that property.22:13
notmynamefor swift we decided on 1.2-rc and 1.2.0 (for final)22:13
mtaylorannegentle: feel free to link your branch to it22:13
bwalker7125is there a simple command in swift to see how much storage space is remaining (free) in the cluster, or would that have to be calculated by a custom program?22:13
notmyname- < .22:13
zykes-jaypipes: more thinking of the fact that libvirt doesn't support it22:13
termiesoren: that can be managed in packaging, too22:13
termiesoren: rather than in the branch tag name22:14
termiesoren: since it is a packaging specific issue22:14
sorentermie: vs. a git specific issue.22:14
notmynamebwalker7125: we use dsh and df22:14
termiesoren: vs a core developer specific issue22:14
bwalker7125ok, thanks22:14
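notmyname's dsh-and-df approach can be sketched: run `df` on every storage node (e.g. via dsh) and sum the Available column. The parsing below is illustrative only; the device names, mount points, and figures are made up, not from a real cluster:

```python
# Sum the "Available" column (in KB) of df-style output, as one would
# do across the output that dsh collects from each storage node.
def free_kb(df_output: str) -> int:
    total = 0
    for line in df_output.splitlines()[1:]:   # skip the header row
        parts = line.split()
        if len(parts) >= 4:
            total += int(parts[3])            # 4th column: Available
    return total

sample = """Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 976762584 100000000 876762584 11% /srv/node/sdb1
/dev/sdc1 976762584 200000000 776762584 21% /srv/node/sdc1"""
print(free_kb(sample))  # 1653525168
```

As the exchange implies, swift itself does not expose a cluster-wide free-space figure; this per-node aggregation is the workaround.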
sorentermie: Sorry, why can't you work around this?22:15
sorenWell, one of us has to.22:15
termiesoren: i told you what my issue is, you are choosing a simple name22:15
jaypipeszykes-: I'm no networking expert, I just know that OpenVSwitch has been discussed in relation to that spec/blueprint. vishy may have some insight on it, or you could email the mailing list. should get some quick feedback.22:15
termiesoren: the name can be easily changed22:15
sorentermie: Ok, so one of us has to do some rewriting somewhere.22:15
sorenThat I understand.22:16
uvirtbotNew bug: #712147 in swift "logging doesn't support differing log_levels for the same log_name" [Low,In progress]
sorenI'm not sure why it necessarily can't be you.22:16
*** burris has joined #openstack22:16
termiei am not asking you to patch dpkg22:16
creiht~ seems like an odd character to use in a release name22:17
sorenI'm not asking you to patch git.. i think.22:17
sorenAm I?22:17
* creiht shrugs22:17
*** adiantum has quit IRC22:17
termieyou are asking me to patch a variety of libraries that deal with bzr and git22:17
sorenI'm cool patching dpkg, though. I used to maintain the bloody thing in Ubuntu :)22:17
creihtwhy can't you guys find a reasonable name that works with both, and nobody has to patch anything?22:18
sorenI can explain that.22:18
*** adiantum has joined #openstack22:18
termiecreiht: that is all i am asking for22:18
jaypipescreiht: I think that's what termie's asking for ;)22:18
jaypipesah, jinx.22:18
creihtI agree, just stating the obvious to show how silly soren's argument is :)22:18
sorenWe want the final version to sort as newer than the rc version. Assuming that the rc version is made up of the final version+some suffix, the first character of suffix must sort as newer than the empty string. The only character in the whole wide world that has that property is the tilde.22:19
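soren's sorting claim can be illustrated with a small reimplementation. This is a simplified, illustrative sketch of dpkg's upstream-version comparison (it ignores epochs and the Debian-revision split at the last hyphen), written for this log, not dpkg itself:

```python
# Simplified sketch of dpkg's version comparison: tilde sorts before
# everything, including the end of the string, which is why foo~rc
# sorts older than foo.

def _order(c: str) -> int:
    if c == "~":
        return -1            # tilde: before everything, even ""
    if c == "":
        return 0             # end of string
    if c.isalpha():
        return ord(c)        # letters before non-letters
    return ord(c) + 256      # all other characters after letters

def compare_versions(a: str, b: str) -> int:
    """Negative if a < b, zero if equal, positive if a > b."""
    i = j = 0
    while i < len(a) or j < len(b):
        # compare the non-digit runs character by character
        while (i < len(a) and not a[i].isdigit()) or \
              (j < len(b) and not b[j].isdigit()):
            ca = a[i] if i < len(a) and not a[i].isdigit() else ""
            cb = b[j] if j < len(b) and not b[j].isdigit() else ""
            if _order(ca) != _order(cb):
                return _order(ca) - _order(cb)
            if ca:
                i += 1
            if cb:
                j += 1
        # then compare the digit runs numerically
        na = nb = 0
        while i < len(a) and a[i].isdigit():
            na = na * 10 + int(a[i]); i += 1
        while j < len(b) and b[j].isdigit():
            nb = nb * 10 + int(b[j]); j += 1
        if na != nb:
            return na - nb
    return 0

print(compare_versions("1.2~rc", "1.2") < 0)    # True: tilde sorts older
print(compare_versions("1.2-rc", "1.2.0") < 0)  # True: mtaylor's scheme works too
```

Both schemes make the rc sort older than the final; the tilde is just the only *single-character suffix* with that property, which is the crux of the disagreement.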
notmynamebut that's a packaging issue, right? how does that affect the branch names?22:20
gholt} | {22:20
sorengholt: hm?22:20
sorennotmyname: I'm not sure how branch names fit into this at all.22:21
mtaylorsoren, notmyname: it's not branch names in bzr - it's tags22:21
mtaylorwe tag releases with their version22:21
gholtI guess it depends on your sorting algorithm. Maybe collate = soren?22:21
creihtso how about 2011.1-rc and 2011.1.0? or something like that22:21
mtaylorthat ^^^ does sort properly22:21
sorengholt: Sorry, the assumption was dpkg's sorting algorithm.22:21
sorenmtaylor: What does?22:22
mtaylorsoren: .0 sorts newer than -rc22:22
mtaylordpkg --compare-versions 1.2-rc lt 1.2.022:22
gholtIsn't ~ normally used for local patches?22:22
sorengholt: No.22:22
sorengholt: Well, yes.22:22
gholtAh, I'm a packaging newb, so, heh.22:22
sorengholt: Because you usually want those to have this property.22:23
jaypipesthis conversation brought to you by the letter ~ and the number #.22:23
sorengholt: So it's not made for that specific purpose, but for that specific purpose, it's usually what you want to use.22:23
sorenmtaylor: Uh. No.22:23
kbringarddid bundling an entire image, kernel, ramdisk, and all end up making it into bexar?22:23
mtaylorsoren, termie: perhaps we should see what git-buildpackage does to deal with debian packages with ~ in it22:23
mtaylorsoren: no?22:23
sorenmtaylor: uh, sorry.22:23
sirp-*shakes fist at sqlalchemy-migrate*22:24
sorenmtaylor: You're right.22:24
termiemtaylor: i don't think git-buildpackage would really apply, the tilde is a packaging issue, not a branch name issue, i would doubt that git-buildpackage would attempt to make a new tag named after the released package22:25
sorenI would totally expect it to.22:25
sorenThe other blah-buildpackage do, I believe.22:25
termiesoren: they change your source repo?22:26
*** adiantum has quit IRC22:26
termieregardless it seems like we have a working suggestion from mtaylor22:26
jaypipeskbringard: for EC2 API or OpenStack API? In Nova, or just in Glance?22:26
sorenI suggested the .0 extension long ago. At the release meeting where we decided on the versioning scheme, I believe.22:26
jaypipessirp-: oh, did my "useless test" uncover something? ;P22:26
sorenI forget why it was rejected.22:27
sorenMan, I type like crap on this laggy connection.22:27
*** baldben has joined #openstack22:27
* soren looks through meeting logs22:28
kbringardjaypipes: ec222:28
*** hub_cap has quit IRC22:29
*** bcherian has quit IRC22:30
sirp-jaypipes: oh it uncovered something, the pit-of-despair that is migrate :P22:30
jaypipeskbringard: no. still have to do the euca2ools method of creating the kernel, ramdisk, then bundling the machine image and using euca-bundle-image, which talks to nova-objectstore under the covers.22:31
jaypipessirp-: ha! :)22:31
termiesirp-: agreed on despair, but what is the problem?22:31
kbringardjaypipes: cool, thanks :-)22:31
sirp-termie: migrate doesn't seem to be playing nicely with db-indexes, it wants to recreate them erroneously which causes an OperationalError22:31
*** kpepple_ has joined #openstack22:31
termiesirp-: when migrating austin->bexar?22:32
termiesirp-: or from scratch22:32
sirp-termie: this is migrations for glance (just added)22:32
*** adiantum has joined #openstack22:32
termiesirp-: ah i see22:32
*** kpepple_ has quit IRC22:32
sirp-termie: i'm running a test that jaypipes suggested where i downgrade and them immediately re-upgrade22:32
sirp-it should work, but just happens not to22:33
* jaypipes couldn't think of a better/smarter test :(22:33
sirp-if i remove the index from the column, all is well :|22:33
pvojaypipes: do you know when the bps are getting cut off? is it 12:00 on the 4th?22:33
termiesirp-: i don't necessarily expect downgrade to work22:33
termiesirp-, jaypipes: is the migrate stuff modelled after nova's migrate?22:33
sirp-termie: it's similar, but i tweaked some small things22:34
termiesoren: any luck on the meeting logs? would it be an irc meeting22:34
jaypipestermie: yes, almost identical to nova's22:35
*** kbringard has quit IRC22:36
jaypipespvo: ? no idea. I would assume midnight on the 4th, UTC?22:36
pvok, that's what I thought too22:36
sorentermie: I thought it was, but perhaps not.22:36
sirp-*might be spoiled by rails migrations, which Just Work*22:36
sorentermie: Well, it was on IRC for sure, but perhaps not in #openstack-meeting.22:36
*** kpepple_ has joined #openstack22:37
sorentermie: Can you help me understand what exactly breaks because of this?22:37
termiesoren: when i import a bzr repository, some of the history can't be played because one command attempts to create an invalid tag name22:37
*** grapex has joined #openstack22:38
kpepplesirp- : i hear you on the rails migrations … struggled with sqlalchemy-migration last night22:38
sorentermie: What do you use to "import a bzr repository"? How does that work?22:38
termiebzr-fast-export into git-fast-import22:38
daboheading out a little early tonight. Besides the lab being down, I have to go to a wake (my wife's cousin-in-law's mother or something like that - never met the woman)22:39
sirp-kpepple:  yeah, wish there were some better tutorials out there, hard to figure out what the best-practices are22:39
*** kpepple_ has quit IRC22:39
*** kpepple_ has joined #openstack22:40
*** adiantum has quit IRC22:40
*** kpepple_ has quit IRC22:41
sorentermie: Would it be terribly complicated to change the tag name in flight in that pipeline?22:41
sorentermie: I don't know the fast-export format.22:41
*** hazmat has quit IRC22:41
*** hazmat has joined #openstack22:41
* soren is still digging through logs..22:41
termiesoren: if i change the name my history is different22:42
termiesoren: how much that affects things down the road is somewhat unknown22:43
termiehowever, changing the name to a compatible name22:43
termiewould solve all of this, and would not require patching third-party libraries22:43
*** vvuksan has quit IRC22:43
termiei have to go to a meeting22:44
sorenWell, I wasn't thinking you'd patch bzr fast-export, just add a bit of sed or something to the pipeline.22:44
sorenbut meh.22:44
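soren's "bit of sed" idea, sketched in Python instead of sed (hedged: the exact fast-import commands touched here, `tag` and `reset refs/tags/`, are assumptions about where tag names appear in the stream, and the demo tag name is invented):

```python
# Filter a git fast-import stream and rewrite tag names containing a
# tilde (which git rejects) into a git-legal spelling.
def rewrite_tags(stream_lines, bad="~", good="-"):
    for line in stream_lines:
        if line.startswith("tag ") or line.startswith("reset refs/tags/"):
            line = line.replace(bad, good)
        yield line

demo = ["commit refs/heads/master", "tag 2011.1~rc1", "tagger x <x> 0 +0000"]
print(list(rewrite_tags(demo)))  # the tag line becomes "tag 2011.1-rc1"
```

As termie objects above, this produces a *different* history than the bzr source, which is exactly his complaint; the sketch only illustrates the mechanics of the workaround.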
termieso, please consider mtaylor's suggestion22:44
termieand i'll be back in a bit22:44
sorenHey, I'm not the one who needs convincing. I like it.22:44
sorenI can't find anything in my logs.22:45
*** adiantum has joined #openstack22:46
sorenWeird. I specifically remember a stack of ideas for versioning being thrown around and then finally settling on YYYY.X (specifically rejecting YYYY.X.X).22:46
sorenMy grep-fu is failing me.22:46
sorenWell, or I don't have logs of it, but that sounds even stranger to me.22:47
*** kpepple has left #openstack22:47
* soren has a *lot* of IRC logs.22:47
pvomy logs would be full too if irc was my "office" : )22:48
* soren has probably a couple of GB of irc logs22:48
*** konetzed_ has joined #openstack22:48
sorenpvo: Yeah. It's pretty neat that you can keep a log of everything that goes on in your office :)22:49
*** konetzed_ is now known as koko22:49
pvomethinks that could be good and bad.22:49
*** koko is now known as Guest6496222:49
sirp-jaypipes: i think i have migrate figured out now; moral: do NOT use the autogenerated scripts, full of bad practices (module-wide meta, import *, etc)22:49
sirp-the module-wide meta burned me (still a little unclear on how exactly)22:50
*** imsplitbit has quit IRC22:51
*** Guest64962 has left #openstack22:51
jaypipessirp-: good to know :)22:51
*** kpepple has joined #openstack22:51
*** dirakx has joined #openstack22:58
creihtttx: version update is in trunk22:59
*** ctennis has joined #openstack23:00
*** kpepple has left #openstack23:00
*** colinnich has quit IRC23:01
*** adiantum has quit IRC23:01
*** colinnich has joined #openstack23:01
openstackhudsonProject swift build #189: SUCCESS in 32 sec:
openstackhudsonTarmac: Bumping version to 1.2.0 in preparation for release23:01
*** kpepple has joined #openstack23:03
kpepplesirp-: have you found a way around this "AttributeError: 'module' object has no attribute 'MigrateDeprecationWarning'" error in sqlalchemy-migrate ? it seems to just be a bug in the version we are using … but can't figure out how to get away from it (i'm prepopulating the db with some custom SQL in my migration).23:03
sirp-kpepple:  you're getting this error in nova?23:04
kpepplesirp-: in my branch, i've written a new migration (to add instance_types table) and i get it ...23:05
sirp-kpepple:  i ran into a similar issue once or twice; the way i got around it was to figure out what was causing the deprecation warning and then fix that23:05
*** dendro-afk is now known as dendrobates23:06
*** adiantum has joined #openstack23:06
kpepplesirp-: i know what's causing it (custom sql query) … but not sure how to get "undeprecated" … the docs are a bit unclear on some of this23:07
sirp-kpepple: agreed, the docs have been very little help23:09
kpepplesirp-: i'll keep plugging away … i'm pretty close … just need to prepopulate a few ec2 types and it'll be done23:09
*** kpepple has left #openstack23:10
*** bcherian has joined #openstack23:11
*** adiantum has quit IRC23:12
sirp-ha, can't use normal import on migrations since they begin with a number (effin' thing sucks)23:12
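sirp-'s gripe is a Python language rule: `import 001_add_table` is a SyntaxError because identifiers cannot start with a digit, which is why migration tools load numbered scripts by file path instead. A minimal stdlib sketch (the filename and module contents are invented for illustration):

```python
# Load a module whose filename starts with a digit, which the normal
# "import" statement cannot name, by loading it from its path.
import importlib.util
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "001_example_migration.py")
    with open(path, "w") as f:
        f.write("def upgrade():\n    return 'upgraded'\n")

    # give the module a digit-free name; the file keeps its numeric prefix
    spec = importlib.util.spec_from_file_location("migration_001", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    result = mod.upgrade()

print(result)  # upgraded
```

sqlalchemy-migrate does essentially this internally, which is why its migration files can be named 001_..., 002_..., and so on.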
*** adiantum has joined #openstack23:13
*** baldben has quit IRC23:14
*** abecc has quit IRC23:16
*** abecc has joined #openstack23:17
*** kpepple has joined #openstack23:21
*** baldben has joined #openstack23:30
*** bcherian has quit IRC23:33
*** Isvara has left #openstack23:37
*** reldan has joined #openstack23:37
*** gondoi_ has joined #openstack23:45
*** dubsquared has left #openstack23:46
*** blakeyeager has joined #openstack23:46
*** reldan has quit IRC23:48
*** gondoi has quit IRC23:48
*** gondoi_ is now known as gondoi23:48
*** reldan has joined #openstack23:51
*** opengeard has quit IRC23:57
*** troytoman has quit IRC23:57
*** konetzed has joined #openstack23:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at!