*** matiu has quit IRC | 00:21 | |
*** matiu has joined #openstack | 00:28 | |
notmyname | ttx: swift 1.3.0 approved | 00:35 |
*** winston-d has joined #openstack | 00:38 | |
*** throughnothing has joined #openstack | 00:46 | |
openstackjenkins | Project swift build #239: SUCCESS in 27 sec: http://jenkins.openstack.org/job/swift/239/ | 00:47 |
openstackjenkins | Tarmac: Final Cactus versioning. Please accept when you sign off the release. | 00:47 |
*** adjohn has joined #openstack | 00:52 | |
*** grape has joined #openstack | 00:58 | |
*** JuanPerez has joined #openstack | 01:05 | |
*** bcwaldon has quit IRC | 01:08 | |
*** zigo-_- has quit IRC | 01:13 | |
*** matiu has quit IRC | 01:14 | |
*** dendro-afk is now known as dendrobates | 01:14 | |
*** Ryan_Lane has quit IRC | 01:16 | |
*** matiu has joined #openstack | 01:21 | |
*** shentonfreude has joined #openstack | 01:29 | |
*** hagarth has quit IRC | 01:35 | |
*** mray has joined #openstack | 01:37 | |
*** bcwaldon has joined #openstack | 01:44 | |
*** mray has quit IRC | 01:57 | |
*** rchavik has quit IRC | 01:59 | |
*** zul has quit IRC | 02:05 | |
*** zul has joined #openstack | 02:05 | |
uvirtbot | New bug: #761324 in nova "I start a instance,but for a moment ,it shutdown" [Undecided,New] https://launchpad.net/bugs/761324 | 02:11 |
*** eskp has quit IRC | 02:13 | |
*** eskp has joined #openstack | 02:23 | |
*** j05h has quit IRC | 02:32 | |
*** grapex has joined #openstack | 02:38 | |
*** j05h has joined #openstack | 02:49 | |
openstackjenkins | Project nova build #818: SUCCESS in 2 min 28 sec: http://jenkins.openstack.org/job/nova/818/ | 02:54 |
openstackjenkins | Tarmac: Final versioning for Cactus | 02:54 |
*** dendrobates is now known as dendro-afk | 02:57 | |
*** dendro-afk is now known as dendrobates | 03:08 | |
*** koolhead17 has quit IRC | 03:14 | |
*** hggdh has quit IRC | 03:32 | |
*** hadrian has quit IRC | 03:35 | |
*** dendrobates is now known as dendro-afk | 03:40 | |
*** joearnold has joined #openstack | 03:43 | |
*** eskp has quit IRC | 03:44 | |
*** smeier00 has joined #openstack | 03:45 | |
*** bcwaldon has quit IRC | 03:56 | |
*** gaveen has joined #openstack | 04:01 | |
*** joearnold has quit IRC | 04:01 | |
*** eskp has joined #openstack | 04:05 | |
*** joearnold has joined #openstack | 04:05 | |
*** adiantum has quit IRC | 04:13 | |
*** bcwaldon has joined #openstack | 04:16 | |
*** adiantum has joined #openstack | 04:27 | |
*** joearnold has quit IRC | 04:27 | |
*** matiu has quit IRC | 04:33 | |
*** eskp has quit IRC | 04:36 | |
*** bcwaldon has quit IRC | 04:41 | |
*** f4m8_ is now known as f4m8 | 04:43 | |
*** throughnothing has quit IRC | 04:45 | |
*** throughnothing has joined #openstack | 04:46 | |
*** eskp has joined #openstack | 04:46 | |
*** throughnothing has quit IRC | 04:47 | |
*** guynaor has joined #openstack | 04:51 | |
*** guynaor has left #openstack | 04:51 | |
*** throughnothing has joined #openstack | 04:53 | |
*** santhosh has joined #openstack | 04:53 | |
*** mihgen has joined #openstack | 04:56 | |
*** elambert_ has joined #openstack | 04:58 | |
*** elambert has quit IRC | 04:58 | |
*** elambert_ is now known as elambert | 04:58 | |
*** rchavik has joined #openstack | 05:18 | |
*** ccustine has quit IRC | 05:23 | |
*** CloudChris has joined #openstack | 05:25 | |
*** mihgen has quit IRC | 05:26 | |
*** santhosh has quit IRC | 05:29 | |
*** santhosh has joined #openstack | 05:34 | |
*** elambert has quit IRC | 05:37 | |
*** smeier00 has quit IRC | 05:49 | |
*** jeffjapan has quit IRC | 05:49 | |
*** kashyap has joined #openstack | 05:57 | |
*** rsaidan has joined #openstack | 06:01 | |
*** mihgen has joined #openstack | 06:02 | |
*** mgoldmann has joined #openstack | 06:04 | |
zykes- | anyone here from midokura ? | 06:08 |
adjohn | zykes-: Ype | 06:12 |
adjohn | yep | 06:12 |
*** jeffjapan has joined #openstack | 06:12 | |
zykes- | adjohn: does midostack use openvswitch ? | 06:15 |
adjohn | zykes-: we have a virtual networking platform (midonet) which uses some of openvswitch | 06:15 |
zykes- | ok, do you support san ? | 06:16 |
*** zaitcev has quit IRC | 06:16 | |
*** herki has quit IRC | 06:18 | |
adjohn | We support L2 switches, so in theory you could run FC or similar protocols on a virtual L2 switch | 06:19 |
adjohn | Other than that, we don't have support for those protocols, but they could be added.. | 06:20 |
*** santhosh has quit IRC | 06:22 | |
zykes- | adjohn: how does mido do storage then ? we have shared storage in our company so. | 06:24 |
adjohn | Our distro also provides a block storage service which can run on commodity hardware (similar to ebs).. Though we are open to adding san support for specific customers needs. | 06:25 |
*** winston-d has quit IRC | 06:26 | |
*** winston-d has joined #openstack | 06:28 | |
winston-d | BK_man : hi, do you know which libvirt.xml.template Nova compute is using? there's one under /usr/share/nova and another one in /usr/lib/python/site-packages/nova/virt | 06:29 |
*** adjohn_ has joined #openstack | 06:34 | |
*** daveiw has joined #openstack | 06:35 | |
*** adjohn has quit IRC | 06:36 | |
*** adjohn_ is now known as adjohn | 06:36 | |
*** notmyname has quit IRC | 06:40 | |
*** notmyname has joined #openstack | 06:41 | |
*** ChanServ sets mode: +v notmyname | 06:41 | |
*** shentonfreude has quit IRC | 06:43 | |
*** notmyname has quit IRC | 06:46 | |
*** notmyname has joined #openstack | 06:46 | |
*** ChanServ sets mode: +v notmyname | 06:46 | |
*** openstack has joined #openstack | 06:54 | |
*** ttx has joined #openstack | 06:55 | |
*** zenmatt has quit IRC | 06:57 | |
*** antenagora has joined #openstack | 06:58 | |
*** rcc has joined #openstack | 07:06 | |
*** jstinson has quit IRC | 07:20 | |
*** Gaelfr has joined #openstack | 07:21 | |
*** lurkaboo is now known as purpaboo | 07:22 | |
*** bkkrw has joined #openstack | 07:24 | |
*** lionel has quit IRC | 07:27 | |
*** lionel has joined #openstack | 07:28 | |
*** gaveen has quit IRC | 07:36 | |
ttx | soren: ping me when around | 07:39 |
*** nacx has joined #openstack | 07:42 | |
winston-d | HugoKuo_ : around? | 07:51 |
soren | ttx: o/ | 07:53 |
ttx | soren: hey, I'm about to tag release, you agree this needs to be done before branching cactus out ? | 07:54 |
soren | ttx: If you're talking about bzr tags it's not very important, really. We can move those around at will. We shouldn't! But we could. | 07:55 |
ttx | ok | 07:55 |
*** allsystemsarego has joined #openstack | 07:57 | |
*** CloudChris has left #openstack | 07:57 | |
* ttx grabs a coffee and will upload release after that. | 08:02 | |
*** fayce has joined #openstack | 08:02 | |
*** fayce_ has joined #openstack | 08:03 | |
*** fayce has quit IRC | 08:03 | |
*** fayce_ is now known as fayce | 08:03 | |
*** Gaelfr has quit IRC | 08:05 | |
*** uksysadmin has joined #openstack | 08:17 | |
zykes- | ttx: do you know if there's been any interest for OS from Scandinavia ? | 08:18 |
zykes- | like Bahnhof or some of the others. | 08:18 |
*** alekibango has quit IRC | 08:22 | |
soren | zykes-: What's Bahnhof? | 08:23 |
soren | (Other than a train station in German) | 08:23 |
zykes- | soren: .se isp | 08:23 |
soren | Ok. | 08:24 |
zykes- | i guess some norwegian providers that could use OpenStack would be something like Basefarm / active24 | 08:25 |
*** Gaelfr has joined #openstack | 08:26 | |
ttx | soren: you can branch out lp:nova/cactus and push Final=False to lp:nova. | 08:27 |
ttx | soren: if you can document the process for doing so at http://wiki.openstack.org/HowToRelease, that would be great | 08:28 |
*** koolhead11 has joined #openstack | 08:29 | |
zykes- | i thought maybe you would know ttx ;) | 08:31 |
ttx | zykes-: sorry, no. | 08:31 |
zykes- | ok | 08:31 |
ttx | I'm not the Scandinavian type. | 08:31 |
zykes- | heh, soren? | 08:32 |
ttx | let's do some bugmail noise | 08:34 |
soren | zykes-: I don't know of anyone working with OpenStack in Scandinavia, no. | 08:37 |
zykes- | would be fun to change that | 08:37 |
soren | Very mush so. | 08:38 |
soren | Much, even. | 08:38 |
soren | So much to do, so little time. | 08:38 |
ttx | mush mush. | 08:38 |
*** mihgen has quit IRC | 08:38 | |
*** ramkrsna has joined #openstack | 08:40 | |
*** 77CAAF90J has joined #openstack | 08:52 | |
soren | ttx: Did you create a diablo series yet? | 08:57 |
ttx | soren: let me check | 08:57 |
ttx | yes, the series is created | 08:57 |
soren | Ok. lp:~hudson-openstack/nova/cactus created based on current trunk. | 08:58 |
soren | Do you have the LP magic covered? | 08:58 |
ttx | define LP magic | 08:58 |
winston-d | BK_man : is "injection of network settings" implemented in your Nova-2011.1 RHEL package? | 08:59 |
ttx | soren: I did the uploads, if that's what you mean | 08:59 |
winston-d | BK_man : or do I have to apply additional patches? I didn't find any related information on http://openstackgd.wordpress.com/ | 08:59 |
soren | ttx: I was more referring to the shuffling around of development series, pointing branches for the various series to the right places, etc. | 09:01 |
soren | Anyway. Coffee. Must. Have. Coffee. | 09:01 |
*** guigui has joined #openstack | 09:01 | |
ttx | soren: no I didn't | 09:01 |
*** flopflip has joined #openstack | 09:02 | |
*** flopflip_ has quit IRC | 09:03 | |
*** dodeeric has joined #openstack | 09:06 | |
zykes- | you on a bug fix race ttx ? | 09:09 |
ttx | on a bug releasing race, yes | 09:09 |
zykes- | oh, why ? | 09:10 |
ttx | (cactus is out, but don't tell anyone) | 09:10 |
zykes- | hmm, why all the mails then about new bugs? | 09:10 |
ttx | Not new bugs | 09:10 |
ttx | FixCommitted -> FixReleased changes | 09:10 |
zykes- | ah | 09:10 |
*** adjohn has quit IRC | 09:12 | |
*** jeffjapan has quit IRC | 09:13 | |
soren | ttx: Ok. | 09:20 |
RichiH | what's the status wrt release? iirc, this might/should happen today? | 09:21 |
ttx | RichiH: don't tell anyone, but it's released already. | 09:21 |
RichiH | nice | 09:21 |
RichiH | can i get images, already? | 09:21 |
soren | ttx: I've set diablo to point to trunk and cactus to point to cactus. | 09:22 |
ttx | soren: ok, did you push the Final=False versioning in ? | 09:22 |
soren | ttx: No, not yet. | 09:22 |
ttx | RichiH: https://launchpad.net/nova/cactus/2011.2 | 09:22 |
soren | ttx: Shall I go and do the same for glance and swift as well? | 09:22 |
ttx | soren: I think you can do Glance. | 09:22 |
*** perestrelka has quit IRC | 09:23 | |
soren | ?!? Glance is confusing. | 09:24 |
zykes- | why ? :p | 09:25 |
RichiH | in somewhat related news, two of us (not me :/) will be in santa clara in two(?) weeks. it seems we might be betting on openstack in a, for us, big-ish way. as such, i am wondering if there are any plans for a community council or similar where people with a strong interest can get involved. all of this is subject to openstack being a good fit for us, of course. but at least from what i saw up to now, i suspect this to be the case | 09:25 |
ttx | soren: yes, there was some strange cactus vs. trunk issues | 09:25 |
soren | WEll, for one, the glance trunk is owned by glance-core. | 09:25 |
RichiH | ttx: thanks | 09:25 |
ttx | soren: the reason is that jaypipes opened cactus early | 09:26 |
ttx | (under glance-core) | 09:26 |
RichiH | (also, i am wearing my openstack t-shirt today in anticipation of the release(thanks soren)) | 09:26 |
soren | :) | 09:26 |
ttx | then I think when lp:glance was pointed to it, it retained its glance-core ownership | 09:26 |
zykes- | RichiH: us who? | 09:27 |
RichiH | globalways.net | 09:27 |
RichiH | german ISP | 09:27 |
ttx | soren: so nothing prevents glance-core peeps from actually bypassing the merge control | 09:27 |
soren | ttx: Nothing at all. | 09:27 |
soren | I'm fixing this up.. | 09:27 |
soren | Somehow. | 09:27 |
*** alekibango has joined #openstack | 09:29 | |
soren | Erk, this is confusing. | 09:29 |
ttx | ok, it's officially out | 09:30 |
RichiH | what is the current minimum hardware recommended to get my own openstack cloud going? | 09:35 |
RichiH | http://nova.openstack.org/getting.started.html should note this as well, imo | 09:35 |
*** adiantum has quit IRC | 09:35 | |
ttx | soren: lp:nova still points to cactus | 09:35 |
alekibango | RichiH: that really depends on how you want to use it | 09:35 |
RichiH | alekibango: for now, i just want it to run in test mode | 09:36 |
soren | ttx: I know. Haven't fixed that part yet. | 09:36 |
soren | ttx: Will do now. | 09:36 |
alekibango | 4-64 GB ram should do | 09:36 |
ttx | soren: preparing Final=False versioning | 09:36 |
RichiH | alekibango: i would scavenge a few machines and get going | 09:36 |
alekibango | even less if your virtual hosts will be small | 09:36 |
alekibango | or if you will want to use swift | 09:36 |
soren | ttx: I set Bexar to "obsolete". | 09:38 |
ttx | eh | 09:38 |
alekibango | so cactus is out? | 09:38 |
alekibango | congrats :) | 09:38 |
alekibango | please put appropriate urls in topic :) | 09:39 |
soren | ttx: You disagree? | 09:39 |
ttx | soren: no, just feels strange :) | 09:39 |
* soren would argue that Bexar has been obsole for months :) | 09:40 | |
soren | I think my keyboard is lossy. | 09:40 |
alekibango | soren: +1 | 09:40 |
alekibango | i hope cactus will actually work :) | 09:40 |
soren | alekibango: I know 100% certin that come things to work. | 09:41 |
soren | certain. | 09:41 |
soren | Lossy keyboard again. | 09:41 |
alekibango | or fingres | 09:42 |
soren | Nope, they're all there. | 09:42 |
alekibango | :) | 09:42 |
soren | I just counted. | 09:42 |
ttx | soren: https://code.launchpad.net/~ttx/nova/open-diablo/+merge/57834 | 09:43 |
ttx | needs to be accepted before we open the floodgates | 09:43 |
*** rsaidan has quit IRC | 09:45 | |
soren | I wonder if there's anything I need to do on Jenkins.. | 09:47 |
ttx | we'll soon see. | 09:50 |
ttx | soren: you can mark lp:nova/cactus "mature", maybe | 09:56 |
soren | ttx: Good point. | 09:59 |
*** NovSlave has joined #openstack | 10:01 | |
alekibango | annegentle: it would be nice to include some real world config examples | 10:01 |
NovSlave | Hi again guys | 10:02 |
alekibango | into docs, so people will be able to have something to start with | 10:02 |
NovSlave | vishy ? here? | 10:02 |
soren | NovSlave: It's 3 AM in Vishy-land. | 10:03 |
NovSlave | Lol | 10:04 |
soren | ttx: Oh, dear. | 10:04 |
soren | ttx: Seems our docs are not being updated. | 10:04 |
NovSlave | ok no problem may you ll be able to help! | 10:04 |
alekibango | hehe... soren its in your best interest to make installation of openstack easier :) | 10:05 |
soren | alekibango: That's all I do. All day :) | 10:05 |
dsockwell | :P | 10:05 |
alekibango | soren: i know... :) | 10:05 |
alekibango | its pretty hard job | 10:05 |
NovSlave | ok thx just 2sec so i can set up myself to provide you with my logs | 10:05 |
soren | alekibango: I've had worse. | 10:06 |
alekibango | soren: some config examples would be helpful.. sometimes docs and wiki are totally wrong | 10:06 |
alekibango | so having real world examples - with all touched config files... would help | 10:06 |
soren | alekibango: I used to have to make it easy to install something that I knew didn't work. And there was no way for me to fix it. If Openstack has problems, I can go and fix them. | 10:06 |
*** rsaidan has joined #openstack | 10:07 | |
alekibango | soren: good point | 10:07 |
alekibango | reminds me of Sun Tzu... ordering your general to do a task which the army is not able to do is dangerous even for emperors | 10:07 |
ttx | soren: could you explain the impact of "docs not being updated" ? | 10:08 |
koolhead11 | hello all | 10:08 |
alekibango | soren: which docs? | 10:08 |
soren | ttx: Ok, so apparently, it's been quite a while since the docs on http://nova.openstack.org were updated. That's usually done from a Jenkins job, but something changed (in Nova), so the job didn't really work. | 10:08 |
soren | brb | 10:08 |
ttx | oh, ok | 10:09 |
ttx | anything that doesn't mean I need to issue an emergency 2011.2.1 is fine by me. | 10:09 |
* ttx wants a sane weekend. | 10:09 | |
ttx | before the 3-week traveling frenzy. | 10:10 |
soren | You're traveling next week, too? | 10:12 |
ttx | I'll be leaving home early next weekend, so that's my last sane weekend for a while. | 10:12 |
*** guigui has quit IRC | 10:14 | |
BK_man | winston-d: hi | 10:15 |
soren | ttx: The job that updates the docs is the nova job. It triggers a bunch of downstream jobs that I really don't want to run. | 10:16 |
BK_man | winston-d: network injection is implemented for RHEL guests (RHEL-like) | 10:16 |
soren | ttx: Hm.. | 10:16 |
soren | ttx: I guess I could disable the downstream jobs for a little bit. | 10:16 |
BK_man | winston-d: any of those who have /etc/sysconfig/network-scripts/ifcfg-ethX | 10:16 |
BK_man | winston-d: libvirt.xml is specified in /etc/nova/nova.conf: --libvirt_xml_template=/usr/share/nova/libvirt.xml.template | 10:17 |
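[A sketch of the flag BK_man describes above; the flag name and template path are taken from his message, while the flagfile framing is the Nova 2011.x convention and the comment text is editorial, not from the log:]

```shell
# Excerpt from /etc/nova/nova.conf (Nova's one-flag-per-line flagfile format).
# This points nova-compute at the packaged template under /usr/share/nova
# rather than the copy shipped inside the Python package tree:
--libvirt_xml_template=/usr/share/nova/libvirt.xml.template
```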
ttx | soren: if you accept the open-diablo branch it will trigger "nova" | 10:18 |
ttx | soren: with the adequate bunch of downstream jobs ? | 10:19 |
soren | ttx: right, but my concern is snapshotting the cactus final docs. | 10:19 |
ttx | ah, ok. | 10:19 |
soren | ttx: ..but i don't have a current copy of them. | 10:19 |
* ttx lunches and lets soren sort the mess. | 10:19 | |
soren | ttx: Bah, I'll get it sorted somehow. | 10:19 |
soren | ...and then approve the open-diablo thing. | 10:19 |
* soren puts on his ninja suit | 10:19 | |
*** shermanboyd has quit IRC | 10:21 | |
NovSlave | i m starting a new oneserver installation brb in 1 hour | 10:28 |
* soren -> lunch | 10:32 | |
openstackjenkins | Project nova build #819: SUCCESS in 2 min 26 sec: http://jenkins.openstack.org/job/nova/819/ | 10:39 |
openstackjenkins | Tarmac: Diablo versioning. | 10:39 |
*** DigitalFlux has quit IRC | 10:40 | |
*** DigitalFlux has joined #openstack | 10:41 | |
*** arun__ has joined #openstack | 10:55 | |
*** arun_ has quit IRC | 10:56 | |
soren | ttx: Oh, crap. | 11:06 |
*** perestrelka has joined #openstack | 11:07 | |
soren | ttx: Failed to copy cactus release to the release ppa. | 11:08 |
soren | ttx: Failed as in forgot. | 11:08 |
uvirtbot | New bug: #761623 in nova "Json response don't meet specification for addresses and metadata." [Undecided,New] https://launchpad.net/bugs/761623 | 11:09 |
*** 77CAAF90J has quit IRC | 11:14 | |
*** mihgen_ has joined #openstack | 11:15 | |
*** ctennis has quit IRC | 11:19 | |
*** adiantum has joined #openstack | 11:27 | |
*** ctennis has joined #openstack | 11:31 | |
uvirtbot | New bug: #761652 in nova "get addresses servers/id/ips doesn't have handler" [Undecided,New] https://launchpad.net/bugs/761652 | 11:36 |
ttx | soren: you can still copy it -- the new version isn't built yet. | 11:41 |
soren | ttx: I tried. | 11:41 |
soren | ttx: Couldn't. | 11:41 |
soren | The copy packages thing doesn't show it. | 11:41 |
ttx | also you need to get rid of the gamma2 hack | 11:41 |
soren | Oh, crap. | 11:41 |
ttx | maybe just delete the *king packages | 11:41 |
ttx | before someone upgrades to them. | 11:42 |
soren | Yeah. | 11:42 |
* soren does so. | 11:42 | |
ttx | might just make the previous ones available, who knows. | 11:42 |
soren | They suffer from the same problem, really. | 11:44 |
soren | Well, not the exact same, but a very related one: | 11:44 |
soren | Their package version says "~gamma2" | 11:44 |
ttx | soren: looks like you're in for a manual upload. | 11:44 |
onlany | is there any information available about Zones? | 11:45 |
soren | Removed the ~gamma2 hack. | 11:45 |
onlany | i would like to know how many Zones can one compute node manage? | 11:46 |
onlany | and how many vms can be in one Zone? | 11:46 |
ttx | soren: the 2011.3~gamma2 packages are still pending at https://launchpad.net/~nova-core/+archive/trunk/+builds?build_state=pending | 11:48 |
ttx | not sure how to get rid of those | 11:48 |
*** arun__ is now known as arun_ | 11:48 | |
*** arun_ has joined #openstack | 11:49 | |
dabo | onlany: It's the other way around: a zone can have multiple compute nodes, and each compute node can manage multiple VMs. | 11:50 |
*** rchavik has quit IRC | 11:51 | |
dabo | onlany: see http://wiki.openstack.org/MultiClusterZones?action=AttachFile&do=get&target=NestedZones_sm.png | 11:51 |
soren | ttx: What really? | 11:56 |
soren | ttx: Hm.. | 11:56 |
ttx | soren: maybe it will scrap them once built | 11:56 |
soren | ttx: I'll delete them as soon as I can. | 11:56 |
ttx | hehe | 11:56 |
soren | Damnit. | 11:56 |
ttx | soren: document the process so that we don't make the same errors next time :) | 11:57 |
soren | HOLY CRAP! | 11:57 |
onlany | thanks dabo | 11:58 |
*** rchavik has joined #openstack | 11:58 | |
soren | PPA build queue: 16223 jobs (eight days) | 11:58 |
ttx | your builds are 12min away | 11:58 |
soren | My current hypothesis is that the build will fail. | 11:59 |
soren | ..because the source package is gone. | 11:59 |
dabo | onlany: zones are simply logical groupings of services. They will usually be based on physical groupings, but that's not essential. For my testing, I have 9 nested zones on a single box! | 12:00 |
*** bcwaldon has joined #openstack | 12:00 | |
*** nagyz has joined #openstack | 12:02 | |
onlany | wow :) | 12:04 |
*** aimon has joined #openstack | 12:05 | |
onlany | i have to try out this too :) | 12:05 |
dabo | onlany: Oh, it's a lot of fun to manage! http://leafe.com/tmp/csshX.png | 12:06 |
onlany | :) | 12:07 |
onlany | u know about existing commercial solutions based on openstack, except cloudscaling? | 12:08 |
dabo | onlany: I don't really keep up on that side of things. It's in production at NASA, but I'm not sure they count as 'commercial'. | 12:09 |
*** adiantum has quit IRC | 12:11 | |
*** drico has quit IRC | 12:11 | |
zykes- | onlany: Midokura? | 12:12 |
onlany | thanks a lot | 12:13 |
*** h0cin has joined #openstack | 12:18 | |
*** paltman has quit IRC | 12:25 | |
ttx | soren: ok, the orphaned builds failed. | 12:25 |
soren | \o/ | 12:25 |
*** paltman has joined #openstack | 12:25 | |
soren | Phew. | 12:25 |
soren | Crisis averted. | 12:25 |
soren | Window of opportunity shattered. | 12:25 |
* soren exhales for the first time in an hour | 12:25 | |
soren | Approved. | 12:26 |
ttx | soren: still need to manually push to release PPA, and probably regen a trunk PPA build | 12:27 |
*** zigo-_- has joined #openstack | 12:27 | |
soren | ttx: I already did all of that. I just needed to be on top of this. | 12:27 |
soren | ttx: Check it out: | 12:27 |
soren | https://launchpad.net/~nova-core/+archive/tmp | 12:27 |
soren | Failed to build. I must suck. | 12:28 |
ttx | depwait on glance | 12:28 |
soren | Yeah. | 12:29 |
soren | I got it. | 12:29 |
zigo-_- | soren & ttx: I got all working on my Debian test server now. | 12:29 |
soren | \o/ | 12:30 |
soren | zigo-_-: With kvm? | 12:30 |
zigo-_- | My issue was the uec-publish-tarball stuff totally broken... | 12:30 |
zigo-_- | Yup. | 12:30 |
zigo-_- | It should be written in BOLD letters on the doc that we should not use uec-publish-tarball. | 12:30 |
ttx | soren: might want to copy glance packages over from trunk ppa to release ppa, before my new version hits ? | 12:30 |
soren | zigo-_-: I use it *all* the time. | 12:30 |
soren | ttx: Already did. | 12:30 |
zigo-_- | I got stuck for a week, thinking it was something else. | 12:30 |
zigo-_- | Oh... | 12:30 |
ttx | zigo-_-: uec-publish-tarball is fine... though it might rely on some specific version of euca2ools | 12:31 |
zigo-_- | soren: Which version do you use then? | 12:31 |
soren | zigo-_-: Of what? | 12:31 |
zigo-_- | uec-publish-tarball | 12:31 |
soren | zigo-_-: uec-publish-tarball? Whichever is in Ubuntu. | 12:31 |
zigo-_- | From cloud-utils, right? | 12:31 |
ttx | zigo-_-: yes | 12:31 |
soren | zigo-_-: Sounds about right. | 12:31 |
zigo-_- | Ok, I'll give it another try then. | 12:32 |
alekibango | it might just work now :) | 12:32 |
soren | zigo-_-: It's part of my automated test setup. | 12:32 |
soren | zigo-_-: It works on Lucid, Maverick, and Natty. | 12:32 |
zigo-_- | Ok, thanks for the info. | 12:32 |
soren | Sure thing. | 12:32 |
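[A hedged sketch of the uec-publish-tarball workflow discussed above; the tarball name, bucket, and image IDs are placeholders, and the commands assume cloud-utils and euca2ools are installed with EC2 credentials already sourced:]

```shell
# Publish a UEC image tarball (image + kernel/ramdisk) to the cloud's
# object store and register it; uec-publish-tarball is from cloud-utils.
uec-publish-tarball maverick-server-uec-amd64.tar.gz my-bucket amd64

# It prints the registered emi-/eki-/eri- IDs, which can then be booted
# (emi-XXXXXXXX below is a placeholder for the printed image ID):
euca-run-instances emi-XXXXXXXX -k mykey -t m1.small
```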
zigo-_- | What's the set dead line for Cactus? | 12:33 |
soren | Yesterday evening. | 12:33 |
zigo-_- | :) | 12:33 |
soren | It's released. | 12:33 |
zigo-_- | So, we're now working on Diablo, then? | 12:34 |
soren | That's the plan. | 12:34 |
zigo-_- | Ok. | 12:34 |
ttx | soren: re: "Already did." -- can't see them in https://launchpad.net/~nova-core/+archive/release ? | 12:35 |
zigo-_- | So, you can pull my patches for Debian now? :) | 12:35 |
zigo-_- | What's the status? | 12:35 |
soren | ttx: Sorry, I must have gotten confused somewhere. I have them ready to upload. I'll nuke the ones labeled "gamma2". | 12:35 |
zul | is diablo ready yet? huh huh is it? is it? how about now? ;) | 12:36 |
zigo-_- | zul: No, not ready, it STARTS development, as Cactus is released. | 12:36 |
*** ryker has joined #openstack | 12:38 | |
*** dendro-afk is now known as dendrobates | 12:39 | |
*** NovSlave has quit IRC | 12:42 | |
zigo-_- | soren: Do the unit tests work in the released Cactus? | 12:43 |
soren | zigo-_-: Sure. | 12:44 |
soren | zigo-_-: We run them on every single commit. | 12:44 |
zigo-_- | I'll try with your orig.tar.gz then. | 12:45 |
*** hadrian has joined #openstack | 12:46 | |
*** bcwaldon has quit IRC | 12:53 | |
*** j05h has quit IRC | 12:54 | |
*** aliguori has joined #openstack | 12:55 | |
gholt | soren: Are the older packages of Swift supposed to have disappeared immediately when Cactus went live? | 12:55 |
soren | gholt: I don't believe we've touched swift *at all*. | 12:56 |
soren | ttx: ? | 12:56 |
gholt | I don't really see older packages of nova either; maybe I just don't know where to look. | 12:56 |
ttx | no.. except from releasing the tarballs on LP | 12:56 |
soren | ttx: Oh, you did that? | 12:56 |
soren | Ok. | 12:56 |
ttx | gholt: what are you looking for exactly ? | 12:56 |
soren | I didn't know. | 12:56 |
gholt | More curious really. Trying to figure out how things "work". | 12:57 |
ttx | they are all at https://launchpad.net/swift/+download | 12:57 |
*** ppetraki has joined #openstack | 12:57 | |
soren | gholt: Can you be a tad more specific? :) | 12:57 |
ttx | gholt: depends on what you call "older packages" obviously | 12:58 |
gholt | The tar balls yeah, but I thought we provided a form of packages for Ubuntu. | 12:58 |
ttx | ah, that I haven't touched | 12:58 |
gholt | So, if somebody was using them for their prod environment (which would be crazy maybe, but...) are they all of the sudden going to have new stuff if they kick a box today? 'Cause that'd be bad. | 12:59 |
soren | gholt: We most certainly do. | 12:59 |
ttx | soren: when I said "cactus is released" I meant swift+glance+nova. | 12:59 |
soren | gholt: Not yet, as we haven't touched Swift at all. | 12:59 |
gholt | Okay, this page looks weird then? https://launchpad.net/swift/+packages | 12:59 |
soren | gholt: But the current idea is that the PPA's aren't labeled "bexar", "cactus", and "diablo", but rather "trunk" and "release". | 13:00 |
ttx | gholt: oh, that's the Natty packages. Apparently the ubuntu folks got busy | 13:00 |
soren | The idea being that "release" should be well tested and fit for upgrades as well. | 13:00 |
ttx | soren: zul uploaded 1.3.0 to natty. | 13:00 |
gholt | You're going to have to have at least a release - 1 as well. | 13:00 |
soren | gholt: How would that help? | 13:00 |
soren | gholt: So today, they'd upgrade from austin to bexar. | 13:01 |
soren | gholt: How's that useful? | 13:01 |
gholt | Well, I'd keep all the releases, but you said you'd only keep trunk and release. | 13:01 |
zigo-_- | soren: Where should I take cactus from? The PPA, or is it in the official Natty repo? | 13:01 |
soren | zigo-_-: It's not in natty yet. | 13:01 |
zigo-_- | k | 13:01 |
ttx | soren: not so sure of that. | 13:01 |
soren | ttx: orly? | 13:02 |
*** ryker has quit IRC | 13:02 | |
soren | ttx: Oh, I see. | 13:02 |
ttx | https://launchpad.net/ubuntu/natty/+source/nova | 13:02 |
soren | Cool beans. | 13:02 |
soren | Darn it, I need to run for... an hour and fifteen minutes. | 13:02 |
gholt | Ah, found 'em: https://launchpad.net/~swift-core/+archive/ppa | 13:02 |
ttx | gholt: you mean, people should have a way to install, from PPA, the 1.2.0 release. | 13:03 |
ttx | even once the 1.3.0 release is out | 13:03 |
gholt | Oh yeah, definitely. | 13:03 |
gholt | Or are you teasing me? | 13:03 |
ttx | gholt: right, our basic release/trunk PPA layout doesn't allow for that | 13:04 |
ttx | no, not teasing | 13:04 |
gholt | Well, they can get them from the archive I found, that's good enough. | 13:04 |
ttx | someone that enables release PPA currently says "give me the last stable release" | 13:04 |
*** alekibango has quit IRC | 13:05 | |
gholt | Yeah, I think that's okay as long as they can get the old ones from somewhere. | 13:05 |
gholt | I just couldn't figure out where the old ones were, but I know now: https://launchpad.net/~swift-core/+archive/ppa | 13:05 |
*** Zangetsue has quit IRC | 13:05 | |
ttx | gholt: that's the trick -- I'm not sure that PPA will indefinitely contain Bexar. | 13:05 |
*** alekibango has joined #openstack | 13:06 | |
ttx | nothing in its description says so | 13:06 |
gholt | Indefinite isn't a problem, but for at least a while would be good. | 13:06 |
*** mray has joined #openstack | 13:06 | |
gholt | Most folks that run production environments probably make their own packages to be honest. | 13:06 |
gholt | But if some crazed person uses the ppas, they aren't going to be able to instantly upgrade a huge cluster on release day. | 13:07 |
ttx | gholt: mtaylor may upload new versions on that one at any time. The only way to be sure is to have a "Bexar" PPA and copy thoise packages to it | 13:07 |
gholt | And they're probably going to have to kick some boxes before they are ready to upgrade everything, so they'll need the code they're running on the rest of the cluster. | 13:07 |
zigo-_- | ttx: Is this cactus? https://launchpad.net/ubuntu/natty/+source/nova/2011.2-0ubuntu1/+files/nova_2011.2-0ubuntu1.dsc | 13:07 |
ttx | gholt: i guess we could have a Release-1 PPA. | 13:07 |
zigo-_- | I'm not used to your naming schemes... | 13:08 |
ttx | cactus release is 2011.2 for Nova, yes | 13:08 |
zigo-_- | Merci! :) | 13:08 |
*** ryker has joined #openstack | 13:08 | |
gholt | ttx: I'm not sure it's an issue, we can manually upload stuff to the archive, I'll bug notmyname/team on it later this morning and see what they want. | 13:09 |
ttx | gholt: note that you can access old versions by looking at "superseded" packages | 13:09 |
ttx | in a PPA | 13:09 |
gholt | Oh cool, where do you see that? Or you just mean from external packaging tools? | 13:10 |
ttx | gholt: example: https://launchpad.net/~nova-core/+archive/trunk/+packages?field.name_filter=&field.status_filter=superseded&field.series_filter= | 13:11 |
ttx | 1791 old packages :) | 13:11 |
*** f4m8 is now known as f4m8_ | 13:11 | |
*** hggdh has joined #openstack | 13:11 | |
gholt | Okay cool, that's a lot of old stuff. :) | 13:14 |
* gholt bookmarks https://launchpad.net/~swift-core/+archive/trunk/+packages?field.status_filter=superseded | 13:14 | |
*** kbringard has joined #openstack | 13:17 | |
gholt | We also don't seem to provide packaging for Lucid, the latest stable Ubuntu, but I guess that really isn't an issue since, like I said, most prod folks probably won't use them anyway, heh. | 13:18 |
kbringard | happy release day friends | 13:18 |
gholt | \o/ | 13:18 |
*** fayce has quit IRC | 13:20 | |
*** kbringard has quit IRC | 13:20 | |
*** kbringard has joined #openstack | 13:20 | |
*** bcwaldon has joined #openstack | 13:22 | |
koolhead11 | :P | 13:26 |
nagyz | I've extended both manager.py and api.py in the compute node to add a new call | 13:26 |
nagyz | and it works fine | 13:26 |
nagyz | except, that the resulting string is escaped.. :) | 13:27 |
*** HugoKuo_ has quit IRC | 13:28 | |
uksysadmin | hey - I've just installed nova from nova-core/trunk and I get a bug with creating a keypair after I've already created that keypair | 13:30 |
uksysadmin | kevinj@nova-api:~$ euca-add-keypair --debug openstack | 13:31 |
uksysadmin | UnknownError: An unknown error has occurred. Please try your request again. | 13:31 |
uksysadmin | known bug, or should I raise an issue? [or someone point out what I'm doing wrong (apart from running the same command again and again! ;-)] | 13:31 |
doude | Hi all, when I start a VPN on a project (in VLAN network mode), the cloudpipe instance didn't take the first IP of the subnet (in my case 172.16.1.2). I use the command 'nova-manage vpn run p1'. How does the cloudpipe instance specifically request the first IP of the subnet? | 13:32 |
uksysadmin | (deleting the keypair and creating it again works, btw) | 13:33 |
kbringard | uksysadmin: what do you see in your nova-api.log? | 13:33 |
kbringard | any errors? | 13:34 |
kbringard | /var/log/nova/nova-api.log to be exact | 13:34 |
kbringard | assuming that's where you've set logging to go | 13:35 |
uksysadmin | add keypair (when exists) nova-api.log http://paste.openstack.org/show/1197/ | 13:36 |
uksysadmin | yeah it's logging there | 13:36 |
uksysadmin | just seems an unhandled exception - no biggie. | 13:36 |
kbringard | uksysadmin: it seems to think the keypair name already exists | 13:36 |
kbringard | Duplicate: The key_pair openstack already exists | 13:37 |
uksysadmin | yeah that's correct | 13:37 |
uksysadmin | it's the output that isn't nice | 13:37 |
kbringard | which would explain why removing it and readding it works | 13:37 |
uksysadmin | yeah | 13:37 |
kbringard | oh, ok | 13:37 |
kbringard | sorry | 13:37 |
*** dendrobates is now known as dendro-afk | 13:37 | |
kbringard | so you're asking about the UnknownError output | 13:37 |
kbringard | not why it's not working | 13:37 |
uksysadmin | I'll raise a bug - no showstopper in the grand scheme of things - just think it can be handled better | 13:37 |
uksysadmin | yeah | 13:37 |
kbringard | got it, sorry, I was wrong about what you were asking | 13:38 |
kbringard | :-) | 13:38 |
uksysadmin | np | 13:38 |
nagyz | where is the return of data through rabbitmq from a method call in manager.py handled, for any service? | 13:40 |
kbringard | sorry nagyz, I don't know | 13:44 |
*** JuanPerez has quit IRC | 13:44 | |
kbringard | but I know it sucks to ask a question and get no reply at all, so I figured I'd let you know I saw it :-D | 13:44 |
nagyz | :D | 13:44 |
nagyz | thanks :) | 13:44 |
nagyz | I've extended the serialization metadata to include adding xmls from strings | 13:45 |
nagyz | and it works fine | 13:45 |
nagyz | ...except the string (which contains the xml) returned by an internal API call gets escaped while it travels thru rabbitmq | 13:45 |
nagyz | somewhere. | 13:45 |
*** gondoi has joined #openstack | 13:47 | |
*** dspano has joined #openstack | 13:48 | |
uvirtbot | New bug: #761774 in nova "Adding a keypair that already exists produces an Unknown error" [Undecided,New] https://launchpad.net/bugs/761774 | 13:51 |
ccooke | hmm. How possible would it be to install Windows from an ISO in a VM on openstack? | 13:52 |
*** ramkrsna has quit IRC | 13:52 | |
ccooke | (I'm currently evaluating openstack here and it's nice enough to manage VMs for myself, so I wonder if I could get away with using it for the windows VM I have to keep around for work) | 13:52 |
dsockwell | ccooke: i haven't actually done it, but there is reason to be hopeful, considering the hypervisors used | 13:54 |
*** iammartian has left #openstack | 13:54 | |
ccooke | indeed | 13:54 |
ccooke | I guess I could set it up as a non-managed kvm vm and just import it later | 13:55 |
kbringard | ccooke: yea, or install it in like virtualbox and then convert it to qcow2 | 13:55 |
ccooke | indeed | 13:55 |
kbringard | (which is basically what you said, but sounds easier to me, because I'm lazy) :-D | 13:56 |
kbringard | so, I've got an odd one | 13:58 |
kbringard | http://paste.openstack.org/show/1198/ | 13:58 |
kbringard | the file is there | 13:59 |
kbringard | when I run the qemu command manually, I get the same error | 13:59 |
kbringard | but, here is the weird part | 13:59 |
doude | Did you test this functionality? cloudpipe? | 13:59 |
kbringard | when I do like, euca-run-instances -n 6 with that image id | 13:59 |
uksysadmin | right - an eon ago (month or so) I used to be able to assign a floating address so I could access my openstack instances as part of my network using: sudo nova-manage floating create hostname IP/CIDR | 14:00 |
kbringard | it works on the one compute node I have that is the same physical host as the glance api/registry server (glance is using file backing) | 14:00 |
kbringard | but the 2 other compute nodes, it fails | 14:00 |
kbringard | I've even tried copying the file over manually (both via curl to the glance API, and scp) | 14:00 |
uksysadmin | That creates entries in my nova.floating_ip table but when I associate with an instance I get ApiError: ApiError: Address (IP) is not allocated | 14:00 |
kbringard | uksysadmin: if you're using VLAN mode, make sure the IP has a project associated with it | 14:01 |
uksysadmin | e.g. euca-associate-address -i i-00000002 172.241.0.4 | 14:01 |
*** DogWater has joined #openstack | 14:01 | |
*** bcwaldon has quit IRC | 14:01 | |
* uksysadmin goes off to check my configs | 14:01 | |
DogWater | anyone looked this over? http://www.rackspace.com/downloads/pdfs/dell_tech_wp-bootstrapping_openstack_clouds.pdf | 14:01 |
kbringard | uksysadmin: http://paste.openstack.org/show/1199/ | 14:02 |
kbringard | that's how mine looks in the db | 14:02 |
uksysadmin | hmmm, yeah my project_id rows are null | 14:03 |
kbringard | uksysadmin: I'd just do a quick update to those tables | 14:04 |
uksysadmin | I need to shoot [cheers for help so far kbringard] so I'll have a look at what I'm doing wrong | 14:04 |
uksysadmin | will do | 14:04 |
kbringard | I don't know if you're doing anything wrong | 14:04 |
kbringard | I thought I recalled it working awhile back, and then recently I had to update it manually | 14:04 |
kbringard | perhaps there was a change to the nova-manage script | 14:04 |
kbringard | *shrug* | 14:04 |
uksysadmin | yeah - if its not something I can set on the command line, I'll assume its a bug | 14:05 |
kbringard | seems reasonable to me :-D | 14:05 |
uksysadmin | confirm that's working now after updating the table | 14:05 |
kbringard | oh, on that previous weirdness about the images, the checksum in the glance server matches what is on the remote servers , so it doesn't seem like the image is getting munged | 14:05 |
* kbringard scratches his head | 14:06 | |
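The checksum comparison kbringard describes can be sketched as a streaming MD5 (a generic helper, not Glance's code; `md5sum` is a hypothetical name):

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so large images aren't read into RAM.
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk_size), b''):
            md5.update(block)
    return md5.hexdigest()
```

Comparing this digest on the glance server against each compute node's copy of the image in `_base` rules out corruption in transit.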
kbringard | oh, and you're most welcome uksysadmin | 14:06 |
kbringard | I've gotten so much help here, it's the least I can do to give back when I can | 14:06 |
*** Zangetsue has joined #openstack | 14:07 | |
*** mahadev has joined #openstack | 14:08 | |
DogWater | Is it really common in Openstack clouds to use non-raided local storage to store VMs? | 14:10 |
DogWater | I was a tiny bit surprised to read that in the Dell/Rackspace bootstrapping document | 14:10 |
kbringard | not sure, last time I read that was like, 3 months ago | 14:11 |
kbringard | let me go back and reskim it | 14:11 |
* soren returns | 14:11 | |
kbringard | wb soren | 14:11 |
DogWater | I'm just asking what you folks actually do in real life, rather than what Dell is proposing people do. | 14:11 |
kbringard | I've got my VM instances themselves running on local disk, which is hardware raid 1 | 14:12 |
kbringard | and I've got my images on an nfs mounted volume | 14:12 |
kbringard | the backing for the images is raided, etc | 14:13 |
DogWater | It seems like that document is written for service providers and hyperscale clouds; in a service provider environment it seems a bit risky to store customer data in JBOD | 14:13 |
creiht | DogWater: I think it all depends on how much you want to protect yourself from failure | 14:13 |
alekibango | DogWater: some companies who do cloud ready applications do not want raid | 14:14 |
kbringard | if you're using swift, then in theory, that is where you have the fault tolerance for your data stores | 14:14 |
*** Tongpow has joined #openstack | 14:14 | |
alekibango | as it's cheaper to live without one and they're ready to accept some problems on a node | 14:14 |
alekibango | they have hundreds of nodes ready | 14:15 |
kbringard | and VMs are considered volatile... if you have data you want to save then you should be putting it on a volume, which would probably be backed by swift | 14:15 |
creiht | swift is indeed redundant, and can be used with glance to store your base images, but can't be used for local storage or volumes | 14:15 |
alekibango | no, swift is cold storage | 14:15 |
kbringard | oh right, because of lvm | 14:15 |
kbringard | yea yea, my bad | 14:15 |
alekibango | DogWater: if you want redundancy, look at nova-volume more closely | 14:15 |
alekibango | and on sheepdog or rbd | 14:15 |
alekibango | i am betting on sheepdog | 14:15 |
*** Tongpow has left #openstack | 14:16 | |
kbringard | what I meant was, you snap your volume and store it back to swift (or sheepdog, or whatever) | 14:16 |
alekibango | which should be production ready (stable) in may | 14:16 |
alekibango | yes, snap it | 14:16 |
alekibango | you just do backups; it's safe enough for most people | 14:16 |
DogWater | Well, I was just trying to spec out a reference design for a cloud and I was planning on doing the whole "enterprise SAN" thing which is why I was a bit surprised that Dell suggested storing the data on non-raided disks | 14:16 |
soren | ttx: Ok, status? Where are we at? | 14:16 |
*** grapex has quit IRC | 14:16 | |
soren | ttx: We should set up an etherpad with stuff that needs doing. | 14:16 |
doude | How does the cloudpipe instance get the first IP of the subnet in VLAN network mode? | 14:17 |
alekibango | DogWater: i know how you feel, but its hard really to provide those, as nova can be used 1000 ways | 14:17 |
* creiht goes to look at the disks | 14:17 | |
creiht | erm docs | 14:17 |
alekibango | kbringard: look here for some more examples http://stackops.org/display/documentation/Global+Network+Requirements | 14:17 |
soren | ttx: http://etherpad.openstack.org/oRyca5rmk9 | 14:17 |
kbringard | alekibango: thanks, I'll check it out | 14:17 |
alekibango | (i do not endorse the distro, i didn't try it... but there are some images you might like) | 14:17 |
DogWater | in fact a lot of the "software cloud" solutions require using centralized storage (such as OnApp.. etc) | 14:17 |
DogWater | So if anything I am more confused by reading that document than ever =) | 14:18 |
alekibango | DogWater: with nova, you can avoid central point of failure | 14:18 |
alekibango | but its not easy | 14:18 |
kbringard | DogWater: haha, yea | 14:18 |
alekibango | as easy as it should be | 14:18 |
alekibango | DogWater: it will be easy in 6 months imho | 14:18 |
alekibango | for now its only for the brave... who has many computers | 14:18 |
alekibango | and experience | 14:18 |
kbringard | alekibango: +1 | 14:19 |
*** daveiw has quit IRC | 14:19 | |
alekibango | but it's the most open, powerful, scalable, adaptable cloud solution | 14:19 |
alekibango | lol | 14:19 |
alekibango | those things also mean its very hard to get it running :) | 14:20 |
DogWater | So nova is a compute platform/hypervisor? | 14:20 |
alekibango | nova manages virtual hosts in a sane way | 14:20 |
DogWater | or it's the front-end to the hypervisor | 14:20 |
creiht | DogWater: in the dell doc, it seems that they give a pretty good story for both sides | 14:20 |
alekibango | kind of frontend in your sentence | 14:20 |
creiht | use raid for local storage if you want that extra durability on that node | 14:21 |
creiht | or don't to get better performance/cost in general and build your apps on the cloud knowing that your vm's are etherial(sp?) | 14:21 |
alekibango | you can even have diskless nova nodes -- running virtual servers having disks in SAN | 14:21 |
DogWater | creiht: I see if that node goes down though, there is no HA? | 14:21 |
creiht | for a user's vm, no | 14:21 |
creiht | it is the same for ec2 | 14:21 |
*** dendro-afk is now known as dendrobates | 14:22 | |
*** chuck_ has joined #openstack | 14:22 | |
DogWater | and people are fine with that? | 14:22 |
DogWater | or? | 14:22 |
alekibango | DogWater: you can have disks on network using sheepdog | 14:22 |
DogWater | I guess that goes back to the 'scaling out' thing, where customers have apps on multiple nodes. | 14:22 |
*** zul has quit IRC | 14:22 | |
alekibango | or RBD (ceph) | 14:22 |
DogWater | load balancing, etc | 14:22 |
creiht | DogWater: right, if they are just using one cloud instance, then they are not really using the cloud :) | 14:22 |
alekibango | its more like virtual hosting | 14:23 |
alekibango | :) | 14:23 |
alekibango | creiht: but there are such uses... | 14:23 |
kbringard | so, since people seem to be awake now, I've got a kind of head scratcher | 14:23 |
dsockwell | ephemeral | 14:23 |
creiht | things like ceph and sheepdog are interesting, but you really pay the price in performance | 14:23 |
kbringard | if anyone is interested | 14:23 |
creiht | dsockwell: thank you :) | 14:23 |
*** reldan has joined #openstack | 14:23 | |
*** j05h has joined #openstack | 14:23 | |
blamar | kbringard: bring it | 14:23 |
kbringard | blamar: | 14:24 |
kbringard | ok, so | 14:24 |
alekibango | creiht: do you think it's really not worth it? sheepdog is (almost) as fast as local nfs | 14:24 |
creiht | and nfs is blazing fast! :) | 14:24 |
alekibango | and can do some tricks with volumes also | 14:24 |
DogWater | creiht: I guess the big downside with centralized storage is going to be scalability, and cost.. performance is generally ok? | 14:24 |
creiht | DogWater: centralized storage as in a san? | 14:24 |
DogWater | creiht: have you messed with gluster+native NFS at all? I was fairly impressed but it does add some overhead. | 14:24 |
creiht | DogWater: I haven't messed with that actually | 14:25 |
kbringard | I've got an image that I uploaded into glance, when I launch it it works fine when it runs on the compute node that physically is the same host as the glance server, but when it goes to any other compute nodes, I get qemu-img errors | 14:25 |
DogWater | creiht: yes, storing the VMs centrally in a san, such as equallogic or DIY iSCSI | 14:25 |
alekibango | DogWater: i tested gluster... i like it but its slow | 14:25 |
dsockwell | my plan was to store VMs on an OpenSolaris/OpenIndiana box | 14:25 |
kbringard | http://paste.openstack.org/show/1198/ | 14:25 |
creiht | DogWater: I think it depends on how big your cluster is going to get | 14:25 |
kbringard | but, I checked, and the file is in _base | 14:25 |
creiht | or what your plans are | 14:25 |
kbringard | and it has the same md5sum as what glance thinks it should have | 14:25 |
dsockwell | and it would have worked if not for my cheap FC switch with a bad PSU | 14:25 |
blamar | kbringard: What errors? | 14:25 |
kbringard | blamar: http://paste.openstack.org/show/1198/ | 14:26 |
kbringard | is one | 14:26 |
creiht | DogWater: traditional san is doable, but is expensive, and is only going to scale so far | 14:26 |
kbringard | I've also gotten an error about info chardev as well | 14:26 |
kbringard | but it's not doing it any more | 14:26 |
DogWater | dsockwell: there are some interesting raid controller caching things hitting right now.. LSI/dell cachecade, adaptecs SSD/SAS cache | 14:26 |
kbringard | these same compute nodes work fine with all the other images I've uploaded | 14:26 |
kbringard | so it doesn't seem to be a problem with the compute nodes themselves | 14:26 |
*** openstackjenkins has quit IRC | 14:26 | |
kbringard | and I've tried reupping the "broken" image | 14:26 |
creiht | DogWater: yeah, My group will be playing with that soon :) | 14:27 |
kbringard | but it does the same thing | 14:27 |
alekibango | creiht: problem is, most people do not have plans | 14:27 |
dsockwell | DogWater: i have 2 SSDs set up as logs atm | 14:27 |
alekibango | they just want to 'get cloudy' | 14:27 |
alekibango | and they need some examples | 14:27 |
alekibango | so they can get idea what they really want | 14:27 |
*** openstackjenkins has joined #openstack | 14:27 | |
alekibango | those examples (use cases) should be well documented | 14:27 |
creiht | DogWater: and we are looking at making something around an iscsi stack | 14:27 |
alekibango | and even all example config files should be published somehow | 14:27 |
dsockwell | i haven't done a lot of testing (stupid silkworm) but they seem to soak up IOs pretty well | 14:27 |
creiht | I should have more to talk about it at the design summit | 14:27 |
alekibango | many people have no idea what to do or why | 14:27 |
kbringard | blamar: I've also tried copying the file over manually (via both curl from the glance api directly, and scping it) | 14:27 |
blamar | kbringard: Permissions issue? | 14:28 |
kbringard | don't think so, it has the same perms as all the other images in there (that work) | 14:28 |
*** johnpur has joined #openstack | 14:28 | |
*** ChanServ sets mode: +v johnpur | 14:28 | |
kbringard | and, when I run the qemu-img command manually as root, it tells me the same thing | 14:28 |
*** j05h has quit IRC | 14:28 | |
kbringard | same versions of everything I've checked on all the machines | 14:30 |
kbringard | blamar: weird, right? | 14:30 |
DogWater | when deploying load balancing for a cloud would you use physical load balancers or VMs? | 14:30 |
DogWater | seems like there would be a benefit to having physical load balancing upstream from the 'cloud' | 14:31 |
*** zenmatt has joined #openstack | 14:31 | |
blamar | kbringard: yeah, thinking... | 14:31 |
zigo-_- | http://paste.openstack.org/show/1200/ | 14:32 |
zigo-_- | Sorry, wrong window. | 14:32 |
kbringard | blamar: http://paste.openstack.org/show/1196/ | 14:33 |
kbringard | that's the other error I was getting | 14:33 |
kbringard | but that one seems to have stopped | 14:33 |
*** kirshil has joined #openstack | 14:33 | |
*** kakella has joined #openstack | 14:34 | |
kbringard | I upgraded this morning to the latest ppa, so perhaps that's what resolved that one error | 14:34 |
*** kakella has left #openstack | 14:34 | |
*** kakella has joined #openstack | 14:34 | |
*** kakella has left #openstack | 14:34 | |
*** Gaelfr has quit IRC | 14:34 | |
kirshil | Where is it possible to get the most up-to-date docs on nova compute, in particular the description of the 8773 API? | 14:35 |
*** j05h has joined #openstack | 14:35 | |
*** koolhead11 has quit IRC | 14:36 | |
*** reldan has quit IRC | 14:36 | |
annegentle | kirshil: working on Cactus doc copy right now, and a new landing page at docs.openstack.org. What specifically is 8773 api? | 14:37 |
*** shentonfreude has joined #openstack | 14:38 | |
*** grapex has joined #openstack | 14:38 | |
*** Gaelfr has joined #openstack | 14:39 | |
kbringard | blamar: one additional bit of info: I "snapped" this from another running instance | 14:40 |
blamar | kbringard, the compute node? | 14:40 |
kbringard | by which I mean, I took the disk, disk.local, etc | 14:40 |
blamar | kbringard, ah | 14:40 |
kbringard | and copied it elsewhere then uploaded it into glance | 14:40 |
kbringard | but, I would have thought that if it was an issue with the image itself | 14:40 |
kbringard | then it wouldn't work anywhere | 14:40 |
kbringard | but it works on that one compute node (which incidentally isn't the one I took it from originally) | 14:41 |
kbringard | it's got my mind boggled | 14:41 |
*** grapex has quit IRC | 14:43 | |
soren | mtaylor: Paging dr. Taylor. | 14:43 |
kbringard | blamar: and now it's gone back to the chardev error | 14:44 |
kbringard | blamar: http://paste.openstack.org/show/1201/ | 14:44 |
nagyz | soren, do you see any reason why the answer from an api call gets escaped? I wouldn't call that seamless rpc :-) | 14:44 |
soren | nagyz: Can you be more specific? | 14:45 |
soren | nagyz: This isn't still about the libvirt xml stuff, is it? | 14:45 |
nagyz | if I add a new method to compute/manager.py and api.py, and call it thru rabbitmq, and let's say the method returns "<x>a</x>", it gets escaped on the receiving end | 14:46 |
nagyz | and yes, it's still about that. | 14:46 |
nagyz | I've managed to add the returning XML into the DOM nicely | 14:46 |
soren | That's exactly what should happen. | 14:46 |
soren | You're returning a string that happens to contain XML. | 14:46 |
uvirtbot | New bug: #761827 in nova "Release floating IP before disassociate it" [Undecided,New] https://launchpad.net/bugs/761827 | 14:46 |
nagyz | and every string gets escaped that goes thru rabbitmq? | 14:47 |
soren | You're not extending the response XML with more XML. You're encapsulating a string that happens to be XML. | 14:47 |
soren | nagyz: This has nothing to do with rabbitmq. | 14:47 |
nagyz | note, we're only talking about API calls. No one said anything about any other XML. | 14:47 |
soren | ? | 14:47 |
soren | Huh? | 14:47 |
nagyz | it isn't the same escape problem as yesterday, I've managed to fix that :) | 14:48 |
nagyz | let me rephrase my problem | 14:48 |
*** elambert has joined #openstack | 14:49 | |
nagyz | I've added this to manager.py: | 14:49 |
soren | pastebin! | 14:49 |
nagyz | ok :) | 14:49 |
nagyz | it's only two lines, but I'll make a more complete example that way | 14:49 |
nagyz | sec | 14:49 |
*** RJD22 is now known as RJD22|away | 14:50 | |
*** rnirmal has joined #openstack | 14:52 | |
nagyz | soren, http://pastebin.com/vnPhJTaD | 14:52 |
nagyz | the log says: <a>123<\/a> | 14:52 |
soren | Oh. | 14:53 |
soren | Dunno. | 14:53 |
nagyz | should that happen? | 14:53 |
soren | No. | 14:53 |
nagyz | that's why I said rabbitmq is involved, or at least the process that does the rpc | 14:54 |
soren | I would expect things to be unescaped by then. | 14:54 |
nagyz | any pointers where should I look in the code? | 14:55 |
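The effect nagyz describes can be reproduced outside Nova: a string that happens to contain XML gets escaped when a serializer appends it as text, while parsing it into nodes first keeps it as real markup. A minimal minidom sketch (none of this is Nova code):

```python
from xml.dom import minidom

payload = '<a>123</a>'  # string returned by the (hypothetical) RPC call

doc = minidom.Document()

# Appending the string as a text node escapes the angle brackets, which
# is what a serializer does with an opaque string value:
as_text = doc.createElement('response')
as_text.appendChild(doc.createTextNode(payload))

# Parsing the string and importing the resulting element embeds real XML:
as_nodes = doc.createElement('response')
fragment = minidom.parseString(payload).documentElement
as_nodes.appendChild(doc.importNode(fragment, True))

print(as_text.toxml())   # angle brackets come out as &lt; / &gt;
print(as_nodes.toxml())  # <a>123</a> survives as a child element
```

So the escaping happens at serialization of the response, not in transit through rabbitmq: the fix is to hand the serializer parsed nodes rather than a raw string.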
dspano | The link to the cactus release notes on openstack.org is broken. It links to this URL: http://http//wiki.openstack.org/ReleaseNotes/Cactus | 14:55 |
*** RJD22|away is now known as RJD22 | 14:55 | |
*** j05h1 has joined #openstack | 14:56 | |
*** jkoelker has joined #openstack | 14:56 | |
ccooke | hmm. Deleting a network with nova-manage | 14:57 |
ttx | dspano: indeed | 14:58 |
ccooke | it says it failed, but I'm not even sure how to identify the network to the command - there's no usage information | 14:58 |
*** dragondm has joined #openstack | 14:58 | |
ccooke | ah, got it | 14:58 |
mtaylor | soren: ola | 14:58 |
*** j05h has quit IRC | 14:58 | |
*** grapex has joined #openstack | 14:59 | |
*** ryker has quit IRC | 14:59 | |
ccooke | argh | 14:59 |
ttx | notmyname: if you accept https://code.edge.launchpad.net/~ttx/swift/open-diablo/+merge/57886 you should be all set to start working on diablo/1.4 | 15:00 |
ttx | hrm | 15:00 |
ttx | https://code.launchpad.net/~ttx/swift/open-diablo/+merge/57886 | 15:00 |
*** ryker has joined #openstack | 15:00 | |
ccooke | So, I've deleted the project the network is associated with, but nova-manage complains the network needs to be disassociated with the project before it can be deleted... | 15:00 |
notmyname | ttx: done | 15:01 |
ttx | The Diablo floodgates are now officially open. | 15:03 |
kbringard | ttx: cool, I have a merge proposal :-) | 15:04 |
ttx | kbringard: cool, I need some sleep | 15:04 |
kbringard | haha, go sleep amigo | 15:04 |
kbringard | great work on cactus | 15:04 |
johnpur | ttx: Awesome job man! | 15:04 |
ttx | johnpur, kbringard thanks! | 15:05 |
annegentle | ttx: thanks for all the hard work! | 15:05 |
*** shentonfreude has quit IRC | 15:07 | |
*** jonkelly has joined #openstack | 15:08 | |
*** shentonfreude has joined #openstack | 15:09 | |
*** uksysadmin has quit IRC | 15:11 | |
*** shentonfreude has quit IRC | 15:13 | |
nagyz | have a nice weekend, all | 15:14 |
*** nagyz has quit IRC | 15:14 | |
*** reldan has joined #openstack | 15:15 | |
*** dmshelton has joined #openstack | 15:15 | |
*** shentonfreude has joined #openstack | 15:15 | |
openstackjenkins | Project swift build #240: SUCCESS in 28 sec: http://jenkins.openstack.org/job/swift/240/ | 15:17 |
openstackjenkins | Tarmac: Diablo versioning (1.4-dev). | 15:17 |
*** kirshil has quit IRC | 15:18 | |
*** kirshil has joined #openstack | 15:19 | |
kirshil | annegentle: 8773 is the port used by default by the EC2-style API. Actually I'm mostly interested in libcloud - EC2-Openstack API | 15:20 |
kirshil | Also, why are both APIs (8773 and 8774) not SSL-encrypted? | 15:22 |
*** Gaelfr has quit IRC | 15:26 | |
*** enigma1 has joined #openstack | 15:37 | |
jk0 | can I get a quick look at https://code.launchpad.net/~jk0/nova/lp754944/+merge/57200 please? it's just a one-liner :) | 15:37 |
zigo-_- | ttx: Cactus ships a novamanage.1 instead of a nova-manage.1 ... :) | 15:37 |
zigo-_- | Very disappointing ... :) | 15:38 |
ttx | zigo-_-: if only you had looked into it three days ago | 15:39 |
zigo-_- | ttx: It took me a long time to find this out (I just did...) | 15:40 |
ttx | hmm | 15:40 |
*** rchavik has quit IRC | 15:40 | |
*** chuck_ is now known as zul | 15:42 | |
*** jonkelly has quit IRC | 15:42 | |
*** zul has quit IRC | 15:42 | |
*** zul has joined #openstack | 15:42 | |
ttx | zigo-_-: fortunately, nothing that can't be solved at packaging level. | 15:42 |
* ttx calls it a day, a week and a release. | 15:43 | |
*** fayce has joined #openstack | 15:44 | |
ttx | time to drink and celebrate. | 15:44 |
*** j05h has joined #openstack | 15:46 | |
*** koolhead17 has joined #openstack | 15:46 | |
*** j05h1 has quit IRC | 15:48 | |
*** j05h has quit IRC | 15:48 | |
zigo-_- | ttx: Guess what I'm currently doing... :) | 15:48 |
*** bkkrw has quit IRC | 15:48 | |
openstackjenkins | Project nova build #820: SUCCESS in 2 min 30 sec: http://jenkins.openstack.org/job/nova/820/ | 15:49 |
openstackjenkins | Tarmac: Only poll for instance states that compute should care about. | 15:49 |
jaypipes | ttx: have a beer for me :) | 15:50 |
*** dendrobates is now known as dendro-afk | 15:50 | |
*** fayce has quit IRC | 15:53 | |
*** j05h has joined #openstack | 15:54 | |
*** fysa has joined #openstack | 15:55 | |
kbringard | anyone have any idea what causes this error: | 15:55 |
kbringard | (nova.compute.manager): TRACE: | 15:55 |
kbringard | err | 15:55 |
kbringard | Error: operation failed: failed to retrieve chardev info in qemu with 'info chardev' | 15:55 |
*** dragondm has quit IRC | 15:56 | |
*** stewart has joined #openstack | 15:56 | |
*** mihgen_ has quit IRC | 15:57 | |
*** duffman_ has quit IRC | 15:58 | |
*** duffman_ has joined #openstack | 15:58 | |
*** j05h has quit IRC | 15:58 | |
*** j05h has joined #openstack | 15:58 | |
dspano | What flag in nova would I have to set to run libvirt under software emulation for testing on a PC I have set up? | 16:00 |
*** j05h has joined #openstack | 16:00 | |
dspano | I'm getting this error when I try to start instances: Instance '1' failed to spawn. Is virtualization enabled in the BIOS? | 16:00 |
kbringard | what's it say in the trace? | 16:01 |
kim0 | dspano: perhaps in nova.conf → --libvirt_type=qemu ? | 16:02 |
*** gondoi_ has joined #openstack | 16:03 | |
*** maplebed has joined #openstack | 16:03 | |
openstackjenkins | Project nova build #821: SUCCESS in 2 min 29 sec: http://jenkins.openstack.org/job/nova/821/ | 16:04 |
openstackjenkins | Tarmac: Removed the unused self.interfaces_xml variable. | 16:04 |
*** gondoi_ has quit IRC | 16:04 | |
*** gondoi_ has joined #openstack | 16:04 | |
dspano | I'll give that a try. | 16:05 |
dspano | The trace is kind of big. | 16:05 |
kbringard | dspano: put it into paste.openstack.org | 16:07 |
BK_man | Cactus RPMs for RHEL6.0 are out: http://openstackgd.wordpress.com/2011/04/15/cactus-rpms-released/ | 16:08 |
*** mahadev has quit IRC | 16:10 | |
kbringard | BK_man: nice | 16:11 |
kbringard | good work sir | 16:11 |
uvirtbot | New bug: #761913 in openstack-dashboard "python tools/install_venv.py errors with python2.7" [Undecided,New] https://launchpad.net/bugs/761913 | 16:11 |
*** glenc_ has joined #openstack | 16:16 | |
*** ryker has quit IRC | 16:17 | |
*** hggdh has quit IRC | 16:17 | |
*** ryker has joined #openstack | 16:18 | |
*** eday has quit IRC | 16:19 | |
*** glenc has quit IRC | 16:19 | |
*** redbo has quit IRC | 16:19 | |
*** eday has joined #openstack | 16:20 | |
*** hggdh has joined #openstack | 16:20 | |
*** redbo has joined #openstack | 16:20 | |
*** BK_man has quit IRC | 16:23 | |
*** BK_man has joined #openstack | 16:26 | |
*** mahadev has joined #openstack | 16:28 | |
*** nacx has quit IRC | 16:30 | |
*** HugoKuo has joined #openstack | 16:30 | |
colinnich | is there any way to influence the Last-Modified header returned from swift? | 16:31 |
notmyname | colinnich: not yet. we've talked about the ability to change it (ie keep it separate from the put timestamp), but I don't think any code's been written for that yet | 16:32 |
*** kashyap has quit IRC | 16:32 | |
*** dragondm has joined #openstack | 16:32 | |
colinnich | notmyname: ok, ta | 16:32 |
notmyname | well, no other way than to upload at a different time ;-) | 16:32 |
notmyname | step one: invent a time machine :-) | 16:32 |
colinnich | notmyname: in 2007? :-) | 16:32 |
*** MarkAtwood has joined #openstack | 16:34 | |
colinnich | notmyname: I take it X-Object-Meta-Mtime is only used by st? | 16:34 |
notmyname | maybe (ie I'm not sure)? it's a standard metadata header, so anyone can set/read it | 16:35 |
colinnich | notmyname: If I download the file, it ends up with a correct modification time, and that meta header is set to 2007, so I guess that's how st does it | 16:35 |
notmyname | ya, st sets it as the actual mtime of the file | 16:36 |
*** joearnold has joined #openstack | 16:36 | |
colinnich | notmyname: ok | 16:36 |
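The behaviour colinnich observed can be sketched as client-side logic like `st`'s: stash the local mtime in a metadata header on upload and restore it on download (helper names here are hypothetical):

```python
import os
import time

def upload_headers(path):
    # Stash the local file's mtime in a standard metadata header,
    # as the `st` tool does with X-Object-Meta-Mtime.
    return {'X-Object-Meta-Mtime': str(os.path.getmtime(path))}

def restore_mtime(path, headers):
    # On download, apply the stored mtime back to the local file;
    # fall back to "now" if the header was never set.
    mtime = float(headers.get('X-Object-Meta-Mtime', time.time()))
    os.utime(path, (mtime, mtime))
```

Swift itself still reports Last-Modified as the PUT timestamp; only the downloaded file's local mtime is restored this way.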
* zigo-_- got a lintian clean nova package! :) | 16:36 | |
*** world_weapon has joined #openstack | 16:37 | |
zigo-_- | Just finished fixing all remaining issues ... | 16:37 |
*** KnuckleSangwich has joined #openstack | 16:38 | |
KnuckleSangwich | Anyone know the current status of the S3 api compatibility layer in Swift? Is it still experimental? | 16:39 |
zigo-_- | Ok, I'll wait for that, try to fix it, upload and then finally ... sleep ! :) | 16:39 |
*** fysa has quit IRC | 16:39 | |
zigo-_- | Ooops, wrong window again. | 16:40 |
Nick0la | Hello all. Silly question (another one)... | 16:41 |
Nick0la | if I use flat dhcp in my set up, does nova act as the DHCP server, or do I have to configure my own? | 16:42 |
patri0t | Nick0la: Dnsmasq will do that | 16:42 |
*** fysa has joined #openstack | 16:42 | |
patri0t | and nova handles that part (start, config dnsmasq) | 16:43 |
Nick0la | excellent. | 16:43 |
kbringard | patri0t: I've also submitted a patch to allow you to set a dnsmasq_config_file flag in the event that you want to fine-tune the dnsmasq settings | 16:44 |
kbringard | as an aside | 16:44 |
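If that patch lands, usage would presumably look like the following nova.conf fragment (the file path here is an assumption for illustration; only the flag name comes from the discussion):

```
# Point Nova's managed dnsmasq at a custom options file:
--dnsmasq_config_file=/etc/nova/dnsmasq-nova.conf
```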
dspano | kbringard: Sorry I just got dogpiled by a bunch of people in my office. I just put it on paste.openstack.org | 16:44 |
kbringard | dspano: no worries | 16:44 |
dspano | kbringard: It | 16:44 |
kbringard | what's the url? | 16:44 |
dspano | It's http://paste.openstack.org/show/1202/ | 16:44 |
kbringard | LockFailed: failed to create /usr/lib/pymodules/python2.6/cld01.Dummy-1-2498 | 16:44 |
kbringard | there's your problem | 16:44 |
kbringard | set | 16:44 |
patri0t | kbringard: nice | 16:45 |
BK_man | --lock_path | 16:45 |
kbringard | --lock_path | 16:45 |
kbringard | yea | 16:45 |
kbringard | sorry, was looking it up | 16:45 |
kbringard | I have mine set to | 16:45 |
kbringard | /var/lib/nova/tmp | 16:45 |
*** chuck_ has joined #openstack | 16:45 | |
BK_man | kbringard: are you using my build? :) | 16:45 |
*** zul has quit IRC | 16:45 | |
*** chuck_ is now known as zul | 16:46 | |
kbringard | BK_man: don't think so, I'm running ubuntu | 16:46 |
dspano | kbringard: Thanks a lot. I'm going to learn python because of this project. :) | 16:46 |
uvirtbot | New bug: #761947 in nova "create server response json doesn't contain progress" [Undecided,New] https://launchpad.net/bugs/761947 | 16:46 |
kbringard | dspano: if you put --lock_path=/var/lib/nova/tmp in your nova.conf | 16:46 |
kbringard | that should resolve it | 16:46 |
BK_man | kbringard: that means I have the same for RHEL :) | 16:46 |
kbringard | or any path that the user nova has write access to | 16:46 |
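The lock_path fix being discussed is a one-line addition to nova.conf; a sketch (the path is the one kbringard uses above — any directory the nova user can write to works):

```
# /etc/nova/nova.conf (fragment)
--lock_path=/var/lib/nova/tmp
```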
kbringard | BK_man: I'm probably going to use your build once Cent6 gets released | 16:47 |
kbringard | at least to test it out | 16:47 |
kbringard | man, it's cold, I need typing gloves | 16:47 |
*** colinnich has left #openstack | 16:47 | |
kbringard | BK_man: I used that path because it gets created with nova | 16:48 |
kbringard | in truth I'm not sure why it's not set as the default | 16:48 |
Nick0la | patri0t: is there any documentation anywhere on how nova hands out addresses? Lease times, ability to hand out static IP's based on MAC, etc...? | 16:48 |
dspano | kbringard: That did the trick. | 16:48 |
kbringard | dspano: cool | 16:48 |
kbringard | yay! | 16:48 |
Nick0la | I've use dhcpd frequently. Never used dnsmasq | 16:49 |
BK_man | kbringard: from my point of view casual users should NOT use source code to test nova. They should use packages. So, it's a packaging issue | 16:49 |
kbringard | Nick0la: generally you'll want to let dnsmasq handle it... but I think the leases are written out to /var/lib/nova/networks/ | 16:49 |
kbringard | I don't know that there is formal documentation on it, but the quick and dirty is that | 16:50 |
kbringard | it writes out the info about the instance to the lease file | 16:50 |
kbringard | and then the instance gets the correct info when it does a dhcpdiscover | 16:50 |
kbringard | I think it defaults to just the mac address | 16:50 |
kbringard | 120s lease time | 16:50 |
Nick0la | okay. i'm setting up a test/proof of concept for my company. | 16:51 |
kbringard | and the instance id as the hostname | 16:51 |
Nick0la | to offer rackspace-like offerings to our customers. | 16:51 |
kbringard | one thing that doesn't get set is the domain name (so like, hostname -f won't work) | 16:51 |
Nick0la | and we traditionally manage IP space very very carefully. :-) | 16:51 |
Nick0la | okay. and it will respect that the first address in the range is the gateway? | 16:52 |
kbringard | Nick0la: it pulls a free IP from the fixed_ips table in the nova DB | 16:52 |
*** purpaboo is now known as lurkaboo | 16:52 | |
kbringard | check the fixed_ips table in the DB | 16:52 |
kbringard | it's got the netmask, broadcast, gateway | 16:52 |
Nick0la | sweet. | 16:52 |
Nick0la | one last question... | 16:53 |
kbringard | although, I've only used VLAN mode... but I believe it is the same for the FlatDHCP mode | 16:53 |
Nick0la | hehe I was going to use vlans, except we are an ISP, so the doling out of vlans is strictly prohibited unless it passes through our guru first. | 16:54 |
*** fysa has quit IRC | 16:54 | |
Nick0la | flatdhcp seemed the best alternative | 16:54 |
kbringard | yea, I think it works mostly the same way, I was just giving you the caveat :-D | 16:54 |
Nick0la | how do I add multiple IP ranges? On one directive in nova.conf, or multiple directives? | 16:55 |
kbringard | you mean to be handed out by DHCP? | 16:55 |
Nick0la | --fixed_range=192.168.0.0/24, 10.1.1.0/24 etc | 16:56 |
Nick0la | yah | 16:56 |
kbringard | hmmm, well, in VLAN mode, when you add a new network using nova-manage | 16:56 |
Nick0la | I have a small range for testing, but when it goes live, I'll need to be able to add more. | 16:56 |
kbringard | it inserts all the IPs into the fixed_ips table | 16:56 |
kbringard | I don't think the fixed_range gets used by nova | 16:56 |
Nick0la | ? Really? | 16:57 |
*** adiantum has joined #openstack | 16:57 | |
kbringard | someone with more intimate knowledge of it may correct me | 16:57 |
Nick0la | I thought that was the purpose of the fixed_range directive.. | 16:57 |
kbringard | it is if you're using pure dnsmasq | 16:57 |
kbringard | for dynamic assignment | 16:57 |
kbringard | but nova uses static-dynamic assignment | 16:58 |
*** gondoi_ has quit IRC | 16:58 | |
kbringard | it dynamically generates the leases file | 16:58 |
kbringard | which causes dnsmasq to statically assign the IP to the corresponding MAC addy | 16:58 |
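The "static-dynamic" assignment kbringard describes can be sketched as: nova writes one line per instance into a dnsmasq hosts file, so every DHCP reply is pinned to a MAC address. A minimal illustration (the MAC, hostname, and IP are made up; dnsmasq identifies the fields in a dhcp-hostsfile line by their format):

```python
# Sketch of nova's static-dynamic DHCP: generate a dnsmasq
# dhcp-hostsfile entry that statically maps a MAC to an IP.
# All values here are illustrative, not real nova output.
def dnsmasq_host_line(mac, hostname, ip):
    # dnsmasq dhcp-hostsfile line: MAC,hostname,IP
    return f"{mac},{hostname},{ip}"

line = dnsmasq_host_line("02:16:3e:00:00:01", "i-00000001", "10.1.1.3")
print(line)  # 02:16:3e:00:00:01,i-00000001,10.1.1.3
```

Regenerating this file and SIGHUPing dnsmasq is what makes the leases "dynamic" from nova's side while remaining static from the guest's side.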
kbringard | you'll want to look at the nova-manage network command | 16:58 |
kbringard | I think it's like | 16:59 |
Nick0la | hmmm okay. | 16:59 |
*** gondoi_ has joined #openstack | 16:59 | |
kbringard | nova-manage network create CIDR number_of_networks number_of_ips | 17:00 |
kbringard | so like | 17:00 |
kbringard | nova-manage network create 10.1.1.0/24 1 256 | 17:00 |
Nick0la | okay. | 17:00 |
kbringard | would create an entry in the networks table for 10.1.1.0 | 17:00 |
kbringard | it would make it just one network | 17:00 |
kbringard | and would put 256 addresses in the fixed_ips table | 17:00 |
*** gondoi_ has quit IRC | 17:00 | |
*** vvuksan has joined #openstack | 17:01 | |
kbringard | then, you'll need to go into the fixed_ips table and manually set reserved = 1 on any IPs that are used by other stuff (like routers, etc) | 17:01 |
kbringard | so nova won't assign those | 17:01 |
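Reserving addresses, as described above, is a direct UPDATE against nova's database; a sketch (table and column names as discussed in this conversation — the addresses are examples for a router and broadcast-adjacent kit):

```sql
-- Mark addresses nova must never hand out (routers, etc.)
UPDATE fixed_ips SET reserved = 1
 WHERE address IN ('10.1.1.1', '10.1.1.254');
```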
kbringard | then finally, in the networks table | 17:01 |
vvuksan | link to compute release note http://openstack.com/projects/compute/ => http://http//wiki.openstack.org/ReleaseNotes/Cactus | 17:01 |
vvuksan | is broken | 17:01 |
kbringard | make sure everything is happy in the networks table | 17:01 |
vvuksan | aah | 17:02 |
kbringard | Nick0la: getting networks setup initially still requires some work, in my experience sadly | 17:02 |
vvuksan | actually link on the page says http// instead of http:// | 17:02 |
kbringard | vvuksan: howdy friend | 17:02 |
vvuksan | kbringard: howdy :-) | 17:02 |
Nick0la | so if I were to use 10.1.1.0/24 2 128 it would create 2 networks (10.1.1.0/25 and 10.1.1.128/25) with 128 addresses each? | 17:02 |
kbringard | in theory, yes | 17:02 |
kbringard | hehe | 17:02 |
Nick0la | okay. I was wondering why I needed to put in the number of networks and addresses. That all seemed pretty self evident with the CIDR. | 17:03 |
kbringard | I think it has to do with how it splits up the addresses to add them to the fixed_ips table | 17:04 |
Nick0la | ...and again...the number of IP addresses seems pretty self evident, as well. | 17:04 |
*** fysa has joined #openstack | 17:04 | |
Nick0la | 10.1.1.128/25 is only gonna have 126 hosts available to it. Basic networking. I'm wondering if there is another reason it needs to be explicitly set on the commandline? | 17:05 |
kbringard | well, it breaks it up automagically based on the number of IPs you ask for, so you may have to fiddle with that number a bit | 17:06 |
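The split Nick0la works out above can be checked with Python's ipaddress module (this sketches only the subnet arithmetic — how nova-manage actually carves the range may differ):

```python
import ipaddress

# Splitting 10.1.1.0/24 into 2 equal networks, as in
# "nova-manage network create 10.1.1.0/24 2 128".
net = ipaddress.ip_network("10.1.1.0/24")
subnets = list(net.subnets(prefixlen_diff=1))
print([str(s) for s in subnets])  # ['10.1.1.0/25', '10.1.1.128/25']

# Each /25 spans 128 addresses, but only 126 are usable hosts
# (network and broadcast are excluded), as Nick0la notes.
print(subnets[1].num_addresses)       # 128
print(len(list(subnets[1].hosts())))  # 126
```

That gap between 128 addresses and 126 usable hosts is exactly why the IP count on the command line does not fall straight out of the CIDR.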
*** MarkAtwood has left #openstack | 17:06 | |
*** Ryan_Lane has joined #openstack | 17:08 | |
Nick0la | hmmm. I'm still missing something, I think. What does it do with the IP's that it doesn't allocate? | 17:08 |
Nick0la | for example: 10.1.1.0/24 1 128 | 17:08 |
Nick0la | what happens to all the extra IPs? | 17:08 |
kbringard | nothing, they just never get allocated | 17:09 |
kbringard | because they're not in the fixed_ips table | 17:09 |
kbringard | the fixed_ips table and the networks table are gospel to nova | 17:09 |
kbringard | the nova-manage tool is just a way to get data in there quickly | 17:09 |
kbringard | but ultimately it only cares about what's in those 2 tables, when it comes to allocating IPs and configuring the network | 17:09 |
Nick0la | okay. Would it break it terribly to manipulate that database directly? should it go through controls/sanity checking first? Is the database accessed for every request or does nova read and cache the contents? | 17:10 |
vvuksan | kbringard: since you are now Nova expert :-) you can perhaps help me | 17:10 |
kbringard | personally, I use the nova-manage tool to put the skeleton info in there | 17:11 |
Nick0la | (sorry for the 2000 questions -- I just wanna understand it better) | 17:11 |
kbringard | Nick0la: no worries, I like to help, but I don't always know everything, so sorry if I don't do a good job explaining it :-/ | 17:11 |
kbringard | but yea, I create the basic info with nova-manage and then I'll go in and modify it manually to make it fit my environment | 17:11 |
kbringard | vvuksan: sure thing boss, what's up? | 17:12 |
Nick0la | kbringard: We can't all be experts on everything. :-) | 17:12 |
Nick0la | only my 3 year old son knows everything. Or so he claims. | 17:12 |
kbringard | lol, yea, same with my daughter | 17:12 |
vvuksan | Nick0la: he's right :-) | 17:12 |
Nick0la | haha | 17:12 |
vvuksan | although in my case I'm informed that my help is not needed as she knows everything :-) | 17:13 |
Nick0la | may I run a network design past you? for sanity? I'm very very very new to nova, and I don't wanna build it wrong. (and I have a deadline of getting a working system up by the end of June) | 17:13 |
Nick0la | HAHA | 17:13 |
vvuksan | kbringard: anyways I'm trying to solve my problem where I don't control my network infra and don't have access to VLANs | 17:14 |
kbringard | vvuksan: lol, back in that boat eh? | 17:14 |
Nick0la | vvuksan: you and me too! | 17:14 |
vvuksan | kbringard: Say I have a host machine and with it a set of IPs | 17:14 |
vvuksan | 8 IPs | 17:14 |
vvuksan | although I can get more | 17:14 |
*** dendro-afk is now known as dendrobates | 17:14 | |
vvuksan | so the way I did it which works to an extent | 17:15 |
winston-d | BK_man : hi | 17:15 |
vvuksan | and btw this is all for internal use | 17:15 |
BK_man | winston-d: hi | 17:15 |
vvuksan | no public IPs involved | 17:15 |
vvuksan | so on each node I set up compute, network | 17:15 |
vvuksan | each node has its own network range ie. 192.168.x.0/24 | 17:15 |
*** blamar has quit IRC | 17:16 | |
vvuksan | that is because you can have only one nova-network per network range | 17:16 |
winston-d | you mentioned that 'network-injection' is there for RHEL like guest. what does that mean? I have similar issue with https://answers.launchpad.net/nova/+question/150776 | 17:16 |
kbringard | yea | 17:16 |
vvuksan | so that works | 17:16 |
*** blamar has joined #openstack | 17:16 | |
vvuksan | then what I have done is precreated NAT rules | 17:16 |
kbringard | vvuksan: why don't you just use one nova-network machine and create 8 different networks on it | 17:17 |
kbringard | to simplify your routing paths? | 17:17 |
vvuksan | basically on each node I built 10.1.2.3 <=> 192.168.2.1 , 10.1.2.4 <=> 192.168.2.2 | 17:17 |
vvuksan | kbringard: how does that help me though since say two nodes only share eth0 interfaces | 17:17 |
kbringard | or is that because each interface is hardwired to a single subnet? | 17:17 |
zigo-_- | Oups, intended for channel... :) | 17:17 |
vvuksan | i mean eth0 are on the same 10.0.x.x network | 17:18 |
kbringard | zigo-_-: you're having misfire issues today dude :-p | 17:18 |
vvuksan | yes | 17:18 |
zigo-_- | My unit tests are failing here on my test server. We believe this is because of missing build-depends. alekibango got it to work on his Debian, and we managed to make a diff of packages he got installed: http://paste.debian.net/114138/ Does any of the core team dev know what package in this list are missing from my test server??? | 17:18 |
kbringard | vvuksan: ah, ok, then ignore my previous comment :-D | 17:18 |
alekibango | its not working yet, just tests passed and packages built | 17:18 |
zigo-_- | kbringard: I'd like to be able to use a better client than BitchX, but I get logged out all the time if I do from my box. | 17:18 |
vvuksan | kbringard: the question is how can I limit the number of guests that will run on a particular node | 17:18 |
zigo-_- | s/box/laptop/ | 17:18 |
zigo-_- | So I just IRC from shell on a server ... | 17:19 |
alekibango | zigo-_-: show me your log... | 17:19 |
vvuksan | since e.g. I got only .1-.8 that is attached to host1, .9-.15 to host2 | 17:19 |
vvuksan | kbringard: am I making sense ? | 17:19 |
kbringard | vvuksan: hmmm, that's a good question... I would say projects are a good way to go... but then you run into VLAN issues | 17:19 |
zigo-_- | alekibango: Hang on... | 17:19 |
zigo-_- | Then BitchX isn't great... | 17:19 |
kbringard | zigo-_-: try irssi | 17:20 |
*** freeflying has quit IRC | 17:20 | |
zigo-_- | http://ftparchive.gplhost.com/debian/pool/openstack/main/n/nova/build.log | 17:20 |
alekibango | yes, irssi is very good | 17:20 |
BK_man | winston-d: on what version? | 17:21 |
alekibango | i consider using it as this kde application is taking huge ammounts of memory | 17:21 |
zigo-_- | It doesn't have color, hilighting, multi-window, shortcuts, etc. | 17:21 |
zigo-_- | In fact, I should just setup an IRC bouncer... | 17:21 |
*** freeflying has joined #openstack | 17:21 | |
winston-d | BK_man: i'm using your package, 2011.1.1-4 | 17:21 |
BK_man | winston-d: consider upgrading to Cactus | 17:22 |
zigo-_- | test_get_foxnsocks ERROR | 17:22 |
BK_man | winston-d: I released packages couple hours ago | 17:22 |
Nick0la | vvuksan: May I ask? Why a different network range for each host machine? wouldn't you have the same ranges over all the hosts so if you have to move a guest from one node to another, it won't have to change IP addresses? | 17:22 |
zigo-_- | test_bad_login_both_bad ERROR | 17:22 |
zigo-_- | etc... | 17:22 |
*** antenagora has quit IRC | 17:22 | |
Nick0la | vvuksan: I am so not experienced with nova, so I hope that is not a silly question.... | 17:23 |
vvuksan | Nick0la: it's not a silly question. Problem is that what I'm trying to do is perhaps not something Nova was designed for | 17:23 |
vvuksan | Nick0la: so I'm trying to work around it | 17:24 |
Nick0la | vvuksan: ahhh. okay. :-) What are you trying to accomplish? | 17:24 |
* Nick0la is curious | 17:24 | |
zigo-_- | Is it fine to run the unit tests on a server that is already running nova ? | 17:24 |
vvuksan | i don't manage my network infrastructure. Only machines that run on top of it | 17:24 |
vvuksan | it's all internal private stuff | 17:24 |
vvuksan | so I can get any number of IPs but getting my own VLAN would be tough | 17:25 |
vvuksan | plus there are other logistical issues | 17:25 |
winston-d | BK_man : do you mean that your Bexar release doesn't support 'network injection'? I'd love to upgrade the Cactus but I prefer fixed the issues I met with Bexar first. | 17:25 |
vvuksan | so I figured that I could simply on each host run my own private network in e.g. 192.168.x.0/24 range then NAT to it | 17:25 |
vvuksan | from the public interface | 17:25 |
Nick0la | vvuksan: I understand. | 17:26 |
vvuksan | that way it's fairly "clean" | 17:26 |
BK_man | winston-d: Bexar version is supporting network injection too | 17:26 |
vvuksan | i have concept in place | 17:26 |
*** Gaelfr has joined #openstack | 17:26 | |
vvuksan | trouble is I need a way to limit how many VMs get kicked off on the machine | 17:26 |
BK_man | winston-d: you can set it up in the MySQL db (networks table) | 17:26 |
winston-d | BK_man : do I have to do anything to enable that or it was already default? | 17:26 |
vvuksan | i would e.g. add static NAT mapping | 17:26 |
Nick0la | vvuksan: I still do not undertand the need for a differnt subnet on each host. :-) | 17:26 |
winston-d | BK_man : ok, i'm listening. | 17:27 |
Nick0la | I am in the same boat as you. I cannot use VLAN's. | 17:27 |
BK_man | winston-d: there is a column and you should set it to 1 for your particular network. It's off by default | 17:27 |
vvuksan | Nick0la: you could bridge directly on eth0 | 17:27 |
vvuksan | however that is fraught with peril | 17:27 |
Nick0la | our network guru would have me fired instantly if I even thought seriously about adding a vlan without 3 weeks of change control. | 17:27 |
vvuksan | cause you need to specify a default gw | 17:28 |
Nick0la | I am planning on using dhcp and bridges to do my network. | 17:28 |
vvuksan | and nova-network unless you are really careful will try to assume the default gw IP | 17:28 |
Nick0la | I have a single range of external IP's that have been granted to me (after much wheedling and no small amount of bribery) | 17:29 |
winston-d | BK_man : which column? | 17:29 |
Nick0la | and I'm gonna let all my hosts use that range and let DHCP dish them out. | 17:29 |
BK_man | winston-d: could you please dump your iptables and ebtables config and send it to me for an analysis? | 17:29 |
winston-d | BK_man : oh i see, the 'injected' | 17:29 |
vvuksan | Nick0la: you don't have any other DHCP servers on that network segment ? | 17:29 |
Nick0la | ...that's the plan, anyways. | 17:29 |
*** reldan has quit IRC | 17:29 | |
BK_man | winston-d: we are unable to reproduce dnsmasq bug in our lab | 17:29 |
vvuksan | Nick0la: trouble is I can't use DHCP on the public network | 17:30 |
vvuksan | since there is one there already | 17:30 |
Nick0la | the IP range I was given is in a specific VLAN set up for me specifically for this project. Same with the IP's, so I have complete control over DHCP. | 17:30 |
*** Gaelfr has quit IRC | 17:30 | |
vvuksan | that's good | 17:30 |
Nick0la | vvuksan: ugh! that sucks. | 17:30 |
winston-d | BK_man : sure. but how can i make sure the problem i met is caused by 'dnsmasq'? what iptable/ebtables config do you want ? the one on compute node or on network node? | 17:31 |
Nick0la | but...you can control dhcp for your 192.blah addressing, yes? | 17:31 |
vvuksan | sure | 17:31 |
vvuksan | since I run my own "walled" off bridge :-) | 17:32 |
Nick0la | so...why do you not give all the nodes access to the entire /24? | 17:32 |
vvuksan | you can't | 17:33 |
Nick0la | it is a nova limitation? or just the way you wanna do it? | 17:33 |
vvuksan | it's a networking limitation | 17:33 |
Nick0la | ah. okay. | 17:33 |
vvuksan | unless I span my own virtual network | 17:33 |
vvuksan | across all my compute nodes | 17:33 |
Nick0la | I understand now. | 17:34 |
Nick0la | yah...that could be..problematic. | 17:34 |
vvuksan | unfortunately it's a tricky issue :-( | 17:34 |
vvuksan | perhaps not soo tricky | 17:34 |
*** bcwaldon has joined #openstack | 17:34 | |
vvuksan | what is tricky is telling Nova to dish out only X IPs in 192.168.2.0/26 range | 17:34 |
vvuksan | i will have to play with it | 17:34 |
Nick0la | I was curious because isn't the benefit of nova that if one host dies, the other hosts can pick up the slack? | 17:35 |
*** mahadev_ has joined #openstack | 17:35 | |
Nick0la | and if the hosts each hand out different IP ranges, it would be difficult to migrate a machine to a new host. | 17:35 |
*** mahadev has quit IRC | 17:35 | |
Nick0la | migrate a 'guest', sorry. | 17:35 |
Nick0la | my clients are most certainly gonna require a static IP address. | 17:36 |
Nick0la | so making sure their IP moves with the guest is very important to me. | 17:36 |
*** reldan has joined #openstack | 17:38 | |
Nick0la | what does this directive mean: FAKE_subdomain=ec2 | 17:38 |
Nick0la | ? | 17:38 |
BK_man | winston-d: from both if you can | 17:39 |
kbringard | vvuksan: sorry, got pulled away | 17:39 |
vvuksan | kbringard: no worries | 17:39 |
kbringard | do you know if your network gear passes like, vlan0 | 17:39 |
kbringard | as "untagged" | 17:39 |
vvuksan | dunno | 17:39 |
winston-d | Hey guys. It took quite long for Nova to do 'networking' for a newly created instance here. Is that normal? How long does your Nova take? | 17:39 |
kbringard | maybe you could create projects | 17:39 |
kbringard | assign them all to the same "untagged" vlan | 17:39 |
kbringard | then you just launch instances in each project | 17:40 |
kbringard | since IPs are allocated to projects | 17:40 |
winston-d | BK_man : OK. I've just altered the database and am trying to launch a new instance to see if networking issue is gone. | 17:40 |
BK_man | winston-d: I need to go home, it's late evening in Russia. Please send you logs to abrindeyev@griddynamics.com | 17:40 |
kbringard | so you basically have one project per host | 17:40 |
vvuksan | kbringard: hmm | 17:40 |
kbringard | and when that host is out of IPs, if more try to get launched in that project, they'll just get stuck in networking | 17:41 |
* BK_man off to home | 17:41 | |
*** rds__ has joined #openstack | 17:41 | |
winston-d | BK_man : ok. It's already 1:40 in the morning here. :) I'll send the logs to you. | 17:41 |
*** elasticdog has quit IRC | 17:41 | |
vvuksan | kbringard: looking | 17:41 |
*** BK_man has quit IRC | 17:41 | |
vvuksan | kbringard: you got a link to projects docs ? | 17:43 |
kbringard | hmmm | 17:44 |
*** kashyap has joined #openstack | 17:44 | |
kbringard | http://wiki.openstack.org/NovaAdminManual#Managing_Projects | 17:44 |
kbringard | that's the admin manual | 17:44 |
vvuksan | mmm | 17:44 |
vvuksan | that may not cut it for me | 17:44 |
vvuksan | since projects are associated with users | 17:45 |
kbringard | http://nova.openstack.org/runnova/network.vlan.html#index-0 | 17:45 |
kbringard | I think you can have users in multiple projects | 17:45 |
kbringard | the only thing is, you have to add :project_name at the end of the API key | 17:45 |
kbringard | when making your call | 17:45 |
*** reldan has quit IRC | 17:45 | |
kbringard | so you'd probably need to write some middleware to manage that | 17:46 |
vvuksan | right | 17:46 |
kbringard | I'll keep thinking too... that's just one idea that comes to mind | 17:47 |
kbringard | not ideal I know :-/ | 17:47 |
*** bcwaldon_ has joined #openstack | 17:49 | |
*** adiantum has quit IRC | 17:49 | |
dspano | Can you only allocate floating ips with the VLAN manager? | 17:49 |
dspano | I'm using flatDHCP. | 17:49 |
kbringard | you should be able to allocate floating to flatdhcp | 17:50 |
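The floating-IP flow being asked about follows the usual EC2-style commands (an illustrative transcript — the address and instance id are made up):

```
euca-allocate-address                      # prints a free floating IP, e.g. 10.0.0.5
euca-associate-address -i i-00000001 10.0.0.5
euca-disassociate-address 10.0.0.5         # when done
euca-release-address 10.0.0.5
```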
*** hagarth has joined #openstack | 17:52 | |
*** mramige has joined #openstack | 17:53 | |
*** mgoldmann has quit IRC | 17:54 | |
*** mgoldmann has joined #openstack | 17:54 | |
Nick0la | hmmm....what does this directive mean: --routing_source_ip | 17:55 |
Nick0la | (in the nova config) | 17:55 |
*** elasticdog has joined #openstack | 17:55 | |
*** elasticdog has joined #openstack | 17:56 | |
Nick0la | oops...nevermind...I just found the comprehensive list... | 17:59 |
winston-d | kbringard : how long does your Nova take to boot a new instance? | 17:59 |
vvuksan | winston-d: mine come up quickly | 17:59 |
vvuksan | 30 seconds | 17:59 |
Nick0la | hrm...nevermind my 'nevermind'. --routing_source_ip shown in an example is not on the comprehensive list..... | 18:01 |
winston-d | vvuksan : wow, really? it takes almost 3 minutes for a guest with 5GB image | 18:01 |
winston-d | vvuksan : and from both euca-describe-instance and nova-compute log, 'networking' stage took almost half of the time | 18:02 |
*** mahadev_ has quit IRC | 18:02 | |
winston-d | vvuksan : scheduling is quick, networking takes forever, launching is OK. altogether, it takes forever to boot an instance. | 18:03 |
vvuksan | doesn't seem right | 18:03 |
vvuksan | what you could do is | 18:03 |
*** mahadev has joined #openstack | 18:03 | |
vvuksan | watch the console log for the instance | 18:03 |
vvuksan | to see what it's doing | 18:03 |
winston-d | well, by the time console log appears, it already passes the 'networking' stage. | 18:04 |
vvuksan | hmm | 18:05 |
winston-d | 'networking' is compute node building up VLAN interface, bridge and stuff, then setting up iptables rules, etc. | 18:06 |
vvuksan | hmm | 18:07 |
vvuksan | i'm not using VLAN | 18:08 |
vvuksan | so I may not be seeing the issue you are seeing :-/ | 18:08 |
winston-d | that's possible. hmmm | 18:08 |
*** stewart has quit IRC | 18:10 | |
*** stewart has joined #openstack | 18:11 | |
dspano | kbringard: Weird. I'm getting this error. File "/usr/lib/pymodules/python2.6/nova/db/sqlalchemy/api.py", line 444, in floating_ip_allocate_address raise db.NoMoreAddresses() | 18:11 |
*** syah_ has quit IRC | 18:12 | |
*** rcc has quit IRC | 18:15 | |
*** syah has joined #openstack | 18:16 | |
uvirtbot | New bug: #762047 in nova "Several modules don't pass pep8" [Undecided,New] https://launchpad.net/bugs/762047 | 18:16 |
zigo-_- | FYI: I have uploaded the Cactus release for Debian at http://ftparchive.gplhost.com/debian/pool/openstack/main/n/nova/ (deb http://ftparchive.debian.org/debian openstack main) | 18:17 |
zigo-_- | That's only Nova for the moment. | 18:17 |
ryker | if anyone going to the conference needs a ride from the SF airport to their hotel, I will be arriving at SFO on the 25th @10:45am. Just email me to set something up: john.m.alberts -at- gmail.com | 18:20 |
*** antenagora has joined #openstack | 18:24 | |
*** clauden has joined #openstack | 18:25 | |
*** stewart has quit IRC | 18:35 | |
*** zigo-_- has quit IRC | 18:38 | |
*** joearnold has quit IRC | 18:38 | |
Nick0la | my photography gear shipped this afternoon! | 18:39 |
Nick0la | wrong window...sorry | 18:40 |
*** mahadev has quit IRC | 18:40 | |
*** mahadev has joined #openstack | 18:41 | |
kpepple | can someone tell Stephen Spector that his link to the Cactus release announcements in his Community Weekly Newsletter is 404 ? | 18:44 |
kbringard | dspano: do you have a project associated with the floating ip? | 18:44 |
kbringard | did we talk about this already? My brain is mush today | 18:44 |
dspano | kbringard: I figured it out. I was being a dummy. I thought the hostname argument was for injecting the hostname in to the instance. | 18:45 |
kbringard | ah | 18:45 |
kbringard | hehe, not being a dummy, it's not super clear | 18:45 |
dspano | kbringard: Once I changed the hostname to cld01, it worked. Is there a way to inject the hostname? | 18:45 |
kbringard | dspano: naw, I think the idea is that the floating_ip is an "external" IP that is managed in your out of band (to nova) DNS | 18:46 |
uvirtbot | New bug: #762071 in nova "Several modules use "== None" instead of "is None"" [Undecided,New] https://launchpad.net/bugs/762071 | 18:46 |
*** elambert has quit IRC | 18:52 | |
*** joshfng has joined #openstack | 18:52 | |
dspano | kbringard: Do you know why it would raise db.NoMoreAddresses here: /usr/lib/pymodules/python2.6/nova/db/sqlalchemy/api.py", line 620, in fixed_ip_associate_pool | 18:57 |
kbringard | dspano: in my experience that happens when they're not associated with the project you're authenticating as... | 18:58 |
kbringard | if you're not using projects then it may be a bug :-/ | 18:58 |
*** reldan has joined #openstack | 18:59 | |
*** joshfng has quit IRC | 19:00 | |
kbringard | does anyone know if the XML returned changes format if there are more than X number of instances running? | 19:01 |
kbringard | for euca-describe-instances? | 19:01 |
kbringard | I'm seeing a weird error where my tests start failing if there are more than a certain number (which I have not pinned down yet) | 19:01 |
annegentle | +1 for Gina Trapani being a Sphinx promoter: https://groups.google.com/group/thinkupapp/browse_thread/thread/aee02b16d968c8ed?pli=1 | 19:03 |
*** dragondm has quit IRC | 19:04 | |
*** Ryan_Lane is now known as Ryan_Lane|food | 19:10 | |
*** photron has joined #openstack | 19:10 | |
*** dragondm has joined #openstack | 19:14 | |
*** jkoelker has quit IRC | 19:14 | |
*** kashyap has quit IRC | 19:15 | |
*** allsystemsarego has quit IRC | 19:22 | |
dspano | kbringard: I got it to work. I needed to delete the network that the nova-CC-install-v1.1.sh script created and create a new one for my test project. | 19:25 |
dspano | kbringard: Thanks for your help man. | 19:26 |
kbringard | ah, good work and np | 19:26 |
*** dodeeric has quit IRC | 19:27 | |
*** camm has quit IRC | 19:27 | |
kbringard | for anyone with kids and an i* device: http://dealnews.com/Mc-Graw-Hill-iPhone-iPad-apps-for-free-Monster-Squeeze-Tric-Trac-more/453026.html | 19:28 |
*** mgoldmann has quit IRC | 19:28 | |
kbringard | or, I guess if you just want to brush up on your basic arithmetic | 19:29 |
vvuksan | nice | 19:30 |
vvuksan | kbringard: btw I recommend http://launchpadtoys.com/toontastic/ | 19:30 |
kbringard | ah, I don't have an iPad yet | 19:31 |
kbringard | but I'll bookmark that for when I do :-) | 19:31 |
kbringard | thanks | 19:31 |
alekibango | kbringard: it seems to be very easy | 19:31 |
alekibango | maybe for my almost 3 yo daughter :) | 19:32 |
alekibango | 5yo can count better | 19:32 |
kbringard | hehe, yea, I've a 2.5 year old and a 5m old | 19:32 |
kbringard | so they're perfect for my kid | 19:32 |
kbringard | I should have said "young" kids | 19:32 |
alekibango | still, no eyedevice | 19:32 |
alekibango | :D | 19:33 |
kbringard | hehe, you're probably better off | 19:33 |
alekibango | how far away from head do you hold it when talking? | 19:33 |
alekibango | :))) | 19:33 |
kbringard | ok, so dspano got me thinking... what should go in the host section of the floating_ips? | 19:34 |
alekibango | kbringard: i will try to make my sons (5,7) write similar games... | 19:34 |
kbringard | is that the host where the network controller lives? | 19:34 |
alekibango | :) | 19:34 |
kbringard | alekibango: hah, I'll beta test if they do | 19:34 |
alekibango | very nice images btw | 19:34 |
alekibango | kbringard: try gvrng | 19:36 |
alekibango | my kids love that.. and gcompris... tuxpaint... and (heh) openttd and warzone2100 | 19:36 |
alekibango | even the 5yo can play that! | 19:37 |
*** kakella has joined #openstack | 19:39 | |
*** kakella has left #openstack | 19:39 | |
*** rsaidan has quit IRC | 19:40 | |
*** rsaidan has joined #openstack | 19:41 | |
dspano | To whomever put up the new documentation. Thank you. | 19:46 |
kbringard | that's probably annegentle | 19:46 |
kbringard | she is the awesome keeper of the docs | 19:47 |
*** ctennis has quit IRC | 19:49 | |
*** Zangetsue has quit IRC | 19:49 | |
*** Zangetsue has joined #openstack | 19:50 | |
*** joearnold has joined #openstack | 19:52 | |
*** MarkusT has joined #openstack | 19:53 | |
MarkusT | I can't seem to find an simple tutorial on how to modify (update + minor changes) an existing image. Is it really that complicated? My main problem involves regular updates of the instances, how do you do that easily? | 19:54 |
kbringard | MarkusT: that's something I'm struggling with myself | 19:56 |
kbringard | what I've resorted to doing is | 19:56 |
kbringard | launch the instance | 19:56 |
kbringard | make the changes | 19:56 |
kbringard | log into the compute node (the host the instance was running on) | 19:56 |
kbringard | shutdown -h the VM (from within the VM) | 19:57 |
kbringard | then cd /var/lib/nova/instances/$instance_id | 19:57 |
kbringard | bundle it (if you use euca-bundle, etc) or glance-upload it | 19:57 |
kbringard | then launch it | 19:58 |
kbringard | the new one | 19:58 |
*** BK_man has joined #openstack | 19:58 | |
kbringard | not the cleanest, but it works | 19:58 |
kbringard | then you can virsh start $instance_id and it should fire the VM back up | 19:58 |
kbringard | or you can euca-terminate-instances it | 19:58 |
* annegentle shakes fist at wiki spammers | 19:59 | |
*** iammartian has joined #openstack | 19:59 | |
MarkusT | kbringard: I'm an idiot. I tried "that" (at least thought so), but just copied the original image instead of the instance image. Will test it (and report back), thanks! | 19:59 |
annegentle | dspano: no, thank you for noticing! :) | 19:59 |
iammartian | ugh: http://wiki.openstack.org/Payday%20Loans | 19:59 |
kbringard | MarkusT: make sure you're uploading disk, and not disk.local | 19:59 |
kbringard | also make sure you link the kernel and ramdisk back up (if necessary) | 19:59 |
iammartian | NM, already deleted it | 20:00 |
*** brd_from_italy has joined #openstack | 20:00 | |
kbringard | generally I just use the same kernel I launched it with originally, unless a kernel update is part of what you're trying to update | 20:00 |
kbringard | in which case it's a bit more complicated | 20:00 |
*** ctennis has joined #openstack | 20:10 | |
kbringard | iammartian: how am I going to find out about how nova can help me with payday loans now? | 20:12 |
kbringard | :-p | 20:12 |
iammartian | heh | 20:12 |
*** koolhead17 has quit IRC | 20:18 | |
*** grapex1 has joined #openstack | 20:29 | |
*** grapex has quit IRC | 20:32 | |
*** jkoelker has joined #openstack | 20:33 | |
*** bcwaldon_ has quit IRC | 20:34 | |
*** bcwaldon has quit IRC | 20:34 | |
MarkusT | kbringard: Might be a dumb question, but if I run "euca-upload-bundle -b mybucket -m /tmp/disk.manifest.xml" it seems to work fine, but doesn't show up on describe-images. What am I doing wrong? :-) | 20:34 |
kbringard | did you euca-register? | 20:35 |
kbringard | when you upload it should give you an id | 20:35 |
kbringard | then you have to euca-register $id | 20:35 |
kbringard | before it'll show up in describe-images | 20:35 |
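So the full upload sequence, with the registration step that makes the image visible (bucket and manifest names are the ones from the conversation):

```shell
euca-upload-bundle -b mybucket -m /tmp/disk.manifest.xml
# uploading alone is not enough; register the manifest to get an image ID
euca-register mybucket/disk.manifest.xml
euca-describe-images        # the new image should now appear
```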
*** dendrobates is now known as dendro-afk | 20:52 | |
MarkusT | kbringard: Thanks, I did not register it. A (last) one: I'm now able to start the instance, but can't log in (and can't get any console output). nova-compute.log says: "ignoring error injecting data into image ami-[...] (Mapped device was not found (we can only inject raw disk images): /dev/mapper/nbd15p1". Any ideas what is causing this? | 20:53 |
vvuksan | you can ignore that | 20:54 |
MarkusT | vvuksan: O.k., but I still can't log in :-) | 20:54 |
kbringard | yea, at this point it gets tricky | 20:54 |
kbringard | what network mode are you using? | 20:54 |
MarkusT | kbringard: FlatDHCP | 20:54 |
kbringard | ah, OK | 20:55 |
vishy | uksysadmin: euca-allocate-address before euca-associate-address | 20:55 |
kbringard | MarkusT: is your image set to get its network info via DHCP (and can you ping the IP nova allocated to it?) | 20:55 |
MarkusT | kbringard: I was using a Ubuntu 10.04 image and only updated it. I'm not aware of any changes, which might cause it to fail now. The original image still works fine. | 20:56 |
kbringard | can you ping it? | 20:57 |
vishy | MarkusT: injection isn't expected to work for whole disk images | 20:57 |
kbringard | vishy is the man, if anyone can help you, it's him | 20:57 |
MarkusT | kbringard: Thanks for helping me this far! | 20:58 |
kbringard | hehe, you're welcome | 20:58 |
kbringard | I have to go to a meeting, I'll bbl | 20:58 |
kbringard | I would say it's probably one of 2 things... either it's not getting an IP | 20:58 |
kbringard | or the credentials aren't being pulled correctly | 20:59 |
MarkusT | vishy: I'm just trying to figure out how to update (and modify) an image. kbringard helped me to copy (and republish) the running instance, I just can't ssh back into the modified image | 20:59 |
kbringard | you can also check the compute node it started on, and look for the VNC port that was assigned to this instance (it starts at 5900 and goes up one for every instance) | 20:59 |
vishy | how did you update it? bundle and upload? | 21:00 |
kbringard | and VNC to the instance to see if it hung somewhere and didn't start | 21:00 |
*** dendro-afk is now known as dendrobates | 21:00 | |
MarkusT | vishy: Yes. bundle, upload, register, run | 21:00 |
MarkusT | vishy: Or complete: Run original instance, update, shutdown -h, bundle, upload, register, run | 21:01 |
vishy | was the original image a whole disk image? | 21:01 |
vishy | because it sounds like it is trying to boot without a kernel and ramdisk | 21:01 |
vishy | are you using Glance? or local image service | 21:01 |
vishy | ? | 21:01 |
MarkusT | It's lucid-server-uec-amd64.img, so: no. I fetched it with wget and uploaded it to mybucket. | 21:02 |
kbringard | seems likely that it's not linked to the original kernel and ramdisk, like vishy said | 21:02 |
vishy | oh | 21:02 |
MarkusT | How do I link it? | 21:02 |
vishy | you didn't give it a kernel and ramdisk? | 21:03 |
vishy | well easiest way is to specify them all at once with nova-manage combined_register | 21:03 |
vishy | but you could upload the others and manually edit the json | 21:03 |
MarkusT | vishy: I just bundled the "disk" image of the updated instance. I was under the impression I wouldn't lose the kernel and ramdisk, since it's just a modified disk image. | 21:04 |
vishy | where did you get this "disk" image? | 21:05 |
*** dmshelton has left #openstack | 21:06 | |
uvirtbot | New bug: #762182 in nova "Instance_type_id refactor broke XS Resizes" [Undecided,New] https://launchpad.net/bugs/762182 | 21:06 |
MarkusT | vishy: /var/lib/nova/instances/$instance_id from the (updated) (halted) instance. | 21:08 |
vishy | yeah that won't work | 21:08 |
vishy | unless you are in --nouse_cow_images_mode | 21:09 |
vishy | if you want to snag the modified image from outside | 21:09 |
kbringard | vishy: ok, sorry, that's my bad | 21:09 |
*** antenagora has quit IRC | 21:10 | |
vishy | you need to snapshot it with qemu-img, then use the qemu-img snapshot convert command, and when you upload it, you need to specify the kernel and ramdisk id | 21:10 |
vishy | wow it is too bad snapshotting for kvm didn't make it in because this would be really easy :) | 21:10 |
MarkusT | vishy: kbringard uses it :-). I'm just trying to find an easy way to modify and update. I got the suggestion for "--nouse_cow" which didn't work; does --nouse_cow_images_mode work with Bexar? Is there an easy tutorial on the issue? I've tried dozens, but each time it fails :-) | 21:11 |
kbringard | vishy: I was able to glance-upload the raw disk image from instances | 21:11 |
kbringard | when I say "raw disk image" I mean the qcow2 | 21:11 |
vishy | sorry it is --nouse_cow_images | 21:11 |
vishy | kbringard: sounds dangerous. Might work if the backing file happens to be there already | 21:11 |
kbringard | yea, I was having weird issues with it | 21:12 |
kbringard | the image would run fine on the compute node that had the glance server running on it | 21:12 |
kbringard | but not on any of the other compute nodes | 21:12 |
kbringard | but, it's a new image_id, so the backing image changed | 21:12 |
kbringard | it was very odd | 21:12 |
kbringard | so I'll just chalk it up to "that doesn't work" and find a different way to do this | 21:13 |
kbringard | sorry MarkusT, didn't mean to give you bad info :-) | 21:13 |
MarkusT | vishy: Do I need --nouse_cow_images when using qemu-img? | 21:13 |
MarkusT | kbringard: No problem. :-) | 21:13 |
*** johnpur has quit IRC | 21:13 | |
vishy | MarkusT: no | 21:13 |
*** drico has joined #openstack | 21:13 | |
vishy | kbringard: you're going to have chained backing images in that case | 21:14 |
kbringard | ah, that would explain the problem I was having | 21:14 |
vishy | kbringard: so the snapshotting feature which does this essentially does the following: | 21:14 |
*** elambert has joined #openstack | 21:16 | |
*** h0cin has quit IRC | 21:18 | |
vishy | qemu-img snapshot -c snap <file> | 21:19 |
*** photron has quit IRC | 21:19 | |
*** ppetraki has quit IRC | 21:21 | |
vishy | qemu-img convert -s snap <new_file> | 21:21 |
vishy | will give you a combined image | 21:22 |
vishy | you need to use qemu-common 14 which is in the ppa | 21:22 |
*** Zangetsue_ has joined #openstack | 21:25 | |
MarkusT | vishy: I'm using ppa:nova-core/release, there's no qemu-common (as far as I can see). Which ppa are you referring to? | 21:26 |
vishy | ppa:nova-core/trunk | 21:27 |
MarkusT | vishy: Can I mix both? (qemu-common from trunk and the rest release?). I'd love to have a stable system, since trunk did already cost me weeks of fiddling around so far :-) | 21:28 |
vishy | right now they should be equivalent. I don't know why qemu wasn't copied into release | 21:28 |
vishy | let me look | 21:28 |
*** stewart has joined #openstack | 21:29 | |
*** Zangetsue has quit IRC | 21:29 | |
*** Zangetsue_ is now known as Zangetsue | 21:29 | |
vishy | should be fine for the moment | 21:30 |
vishy | maybe I'll see if there is a reason soren didn't copy stuff over | 21:30 |
kbringard | vishy: awesome, thanks | 21:30 |
*** hggdh has quit IRC | 21:30 | |
*** hggdh has joined #openstack | 21:31 | |
*** drico has quit IRC | 21:31 | |
*** kbringard has quit IRC | 21:34 | |
*** keekz_ is now known as keekz | 21:36 | |
*** ppetraki has joined #openstack | 21:37 | |
*** antenagora has joined #openstack | 21:37 | |
*** antenagora_ has joined #openstack | 21:37 | |
*** antenagora has quit IRC | 21:37 | |
*** antenagora_ is now known as antenagora | 21:37 | |
MarkusT | vishy: I don't get it. Every time I run "qemu-img convert -s snap output" it shows me the help page (no error message). Am I missing something? What I've done so far: run instance, shutdown -h, cd into the instance directory, snapshot -c snap disk, and then trying to convert. Right? (Thanks so far for your help! :-)) | 21:38 |
*** Zangetsue has quit IRC | 21:38 | |
vishy | MarkusT: what version does qemu-img show | 21:38 |
*** Zangetsue has joined #openstack | 21:38 | |
MarkusT | vishy: qemu-img version 0.13.91 | 21:39 |
vishy | you need 14 | 21:39 |
*** vvuksan has quit IRC | 21:39 | |
MarkusT | vishy: Thought I got all files from the ppa. Will check again. | 21:39 |
*** MarkAtwood has joined #openstack | 21:41 | |
vishy | MarkusT: my bad | 21:42 |
*** dspano has quit IRC | 21:42 | |
vishy | MarkusT: it is 13.91 | 21:42 |
*** ppetraki has quit IRC | 21:42 | |
vishy | the command is qemu-img convert -s snap disk <new_file> | 21:43 |
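Putting vishy's two qemu-img commands together as one sequence (run from the instance directory; "new.img" is an illustrative output name, and the -s flag to convert requires the newer qemu-img from the PPA):

```shell
cd /var/lib/nova/instances/$instance_id
qemu-img snapshot -c snap disk            # create an internal snapshot named "snap"
qemu-img convert -s snap disk new.img     # flatten that snapshot into a standalone image
```

The resulting new.img can then be uploaded, specifying the kernel and ramdisk IDs as discussed earlier in the conversation.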
soren | vishy: "Copy stuff over"? to the release ppa, you mean? | 21:43 |
vishy | soren: yeah, there is a bunch of stuff in the trunk ppa that isn't in the release ppa | 21:43 |
vishy | qemu, suds, etc. | 21:44 |
soren | vishy: Yeah, we opened Diablo too soon, before we copied over the packages. | 21:44 |
*** drico has joined #openstack | 21:44 | |
soren | vishy: I've been reassembling the pieces (fixing up the versioning in the process). | 21:44 |
*** bcwaldon has joined #openstack | 21:44 | |
*** bcwaldon_ has joined #openstack | 21:44 | |
soren | vishy: It's in nova-core/tmp right now. I'll probably copy everything over in a minute. | 21:44 |
vishy | soren: ah ok | 21:45 |
*** stewart has quit IRC | 21:46 | |
*** dragondm has quit IRC | 21:48 | |
*** reldan has quit IRC | 21:54 | |
*** bcwaldon has quit IRC | 21:59 | |
*** bcwaldon_ has quit IRC | 21:59 | |
soren | vishy: The deed is done. | 22:00 |
vishy | soren: coolness | 22:00 |
soren | The publisher runs every 5 minutes, so it'll be another minute-ish before it's apt-get'able from there. | 22:04 |
*** elambert has quit IRC | 22:06 | |
*** antenagora has quit IRC | 22:07 | |
*** drico has quit IRC | 22:09 | |
MarkusT | vishy: The new image fails to start (pending). compute log says: "Error: internal error process exited while connecting to monitor: char device redirected to /dev/pts/1" "Two devices with same boot index 0" | 22:09 |
vishy | MarkusT: how did you upload the new image? | 22:10 |
MarkusT | vishy: "euca-upload-bundle -b mybucket -m /tmp/<myoutputfilename>.manifest.xml" | 22:11 |
vishy | this might be a heck of a lot easier | 22:12 |
vishy | ./nova-manage image image_register <path> admin T x86_64 ami ami <kernel_id> <ramdisk_id> | 22:12 |
*** reldan has joined #openstack | 22:12 | |
MarkusT | vishy: It seems I fucked up something else. Now no image is able to start (with the same error). Argl, need to check that first. :-) | 22:15 |
*** jheiss has quit IRC | 22:15 | |
*** reldan has quit IRC | 22:17 | |
MarkusT | vishy: Any chance this might be related to updating kvm-qemu after nova was already installed? It's the only "bigger" change I'm able to identify in history | 22:18 |
vishy | hehe | 22:19 |
vishy | i would make sure libvirt-bin is updated | 22:19 |
*** jkoelker has quit IRC | 22:19 | |
vishy | and restart libvirt-bin | 22:19 |
vishy | and restart nova-compute | 22:19 |
*** dragondm has joined #openstack | 22:19 | |
*** dragondm has joined #openstack | 22:20 | |
MarkusT | I already restarted the whole system. With updated you mean install libvirt-bin from nova/trunk? | 22:20 |
*** mahadev_ has joined #openstack | 22:27 | |
*** jfluhmann has quit IRC | 22:29 | |
MarkusT | vishy: Yeeaaaah! It works! I can't believe it finally works! :-) I needed to update libvirt-bin to trunk. So I now hold my breath I won't experience problems with that later on. But for now it works and I'm finally able to modify images. Thanks for all your help! I really appreciate it! :-) | 22:29 |
*** jheiss has joined #openstack | 22:29 | |
vishy | MarkusT: np, hopefully we'll have snapshotting support soon so you can do that all with one command :) | 22:30 |
*** mahadev has quit IRC | 22:31 | |
*** MarkAtwood has quit IRC | 22:33 | |
*** mray has quit IRC | 22:38 | |
*** dendrobates is now known as dendro-afk | 22:45 | |
_vinay | Hi | 22:51 |
_vinay | does lc-tools work with openstack ? | 22:52 |
_vinay | anyone know if there is a openstack driver in lc-tools ? .. thx | 22:52 |
*** mramige has quit IRC | 22:56 | |
*** rnirmal has quit IRC | 23:00 | |
*** Zangetsue has quit IRC | 23:00 | |
*** iammartian has left #openstack | 23:01 | |
*** aliguori has quit IRC | 23:05 | |
*** arun_ has quit IRC | 23:09 | |
*** mahadev_ has quit IRC | 23:13 | |
*** mahadev has joined #openstack | 23:14 | |
vishy | _vinay: never heard of lc-tools so doubtful | 23:14 |
vishy | ah looks like it is libcloud | 23:14 |
vishy | since openstack has an ec2-api, you should be able to get it to work with openstack, but it might require a bit of configuration | 23:16 |
*** clauden has quit IRC | 23:17 | |
_vinay | yes libcloud ... sorry | 23:19 |
_vinay | vishy... I see. Will keep that in mind when I try it.... thanks | 23:20 |
mirrorbox | _vinay: I'm the author of lc-tools. Libcloud probably will not work with your openstack out of the box | 23:22 |
mirrorbox | _vinay: as hostname is hardcoded | 23:22 |
*** shentonfreude has quit IRC | 23:22 | |
mirrorbox | _vinay: but once you change it I think it will work most likely. Let me know if you need any help | 23:23 |
_vinay | oh | 23:31 |
_vinay | well I have yet to look at libcloud ... but when you say the hostname is hardcoded, that's in libcloud, right? | 23:32 |
_vinay | mirrorbox: and how do I change the hostname? | 23:33 |
_vinay | I will probably ping you once I start working on it | 23:33 |
mirrorbox | _vinay: yeah, it's hardcoded in libcloud | 23:33 |
mirrorbox | _vinay: find . -name ec2.py in libcloud dir and you will see it | 23:34 |
*** rsaidan has quit IRC | 23:36 | |
_vinay | okay | 23:40 |
_vinay | there are 4 hostname variables in there | 23:40 |
_vinay | so I guess just change all of them to point to my nova installation ... right? | 23:40 |
mirrorbox | _vinay: yeah, try that and see how it goes | 23:41 |
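The hostname edit mirrorbox describes can be sketched with sed. The variable name and Amazon endpoint below are illustrative, not the literal contents of libcloud's ec2.py, and my-nova-host:8773 is an assumed nova EC2 API endpoint; the demo works on a throwaway copy rather than the real driver file:

```shell
# Demonstration on a throwaway copy; in practice edit libcloud's ec2.py in place.
# The variable name and hostname are illustrative -- check your libcloud
# version's ec2.py for the actual hardcoded values.
cat > /tmp/ec2_snippet.py <<'EOF'
EC2_US_EAST_HOST = 'ec2.us-east-1.amazonaws.com'
EOF

# Repoint the hardcoded endpoint at a local nova EC2 API (host:port is an assumption)
sed -i 's/ec2\.us-east-1\.amazonaws\.com/my-nova-host:8773/g' /tmp/ec2_snippet.py
cat /tmp/ec2_snippet.py
```

Repeat the substitution for each hardcoded hostname variable the file defines.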
_vinay | cool.. thanks mirrorbox for the info.. will try that and let you know | 23:41 |
*** Ryan_Lane|food has quit IRC | 23:45 | |
*** reldan has joined #openstack | 23:50 | |
*** world_weapon1 has joined #openstack | 23:51 | |
*** j05h has quit IRC | 23:52 | |
*** world_weapon has quit IRC | 23:53 | |
*** j05h has joined #openstack | 23:56 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!