*** negronjl has joined #openstack-dev | 00:07 | |
*** alfred_mancx has quit IRC | 00:10 | |
*** rjimenez has quit IRC | 00:37 | |
*** mattray has joined #openstack-dev | 01:11 | |
*** mattray has quit IRC | 01:11 | |
*** BK_man_ has quit IRC | 01:22 | |
*** martine has joined #openstack-dev | 01:28 | |
*** Tushar has quit IRC | 01:33 | |
*** alfred_mancx has joined #openstack-dev | 01:58 | |
*** mfer has joined #openstack-dev | 02:22 | |
*** BK_man_ has joined #openstack-dev | 02:22 | |
*** rohitk has quit IRC | 02:34 | |
*** mfer has quit IRC | 02:35 | |
*** mfer has joined #openstack-dev | 02:36 | |
*** mfer has quit IRC | 02:38 | |
*** nci has left #openstack-dev | 03:10 | |
*** pyhole has quit IRC | 03:10 | |
*** pyhole has joined #openstack-dev | 03:11 | |
*** negronjl has quit IRC | 03:19 | |
*** creiht has quit IRC | 03:26 | |
*** creiht has joined #openstack-dev | 03:28 | |
*** ChanServ sets mode: +v creiht | 03:28 | |
*** alfred_mancx has quit IRC | 03:50 | |
*** zaitcev has quit IRC | 04:14 | |
openstackjenkins | Project keystone build #55: SUCCESS in 52 sec: http://jenkins.openstack.org/job/keystone/55/ | 04:21 |
openstackjenkins | Monty Taylor: #16 Changes to remove unused group clls. | 04:21 |
openstackgerrit | Ziad Sawalha proposed a change to openstack/keystone: Add unittest2 to pip requires for testing https://review.openstack.org/110 | 04:53 |
*** nci has joined #openstack-dev | 05:01 | |
*** martine has quit IRC | 05:06 | |
*** nmistry has joined #openstack-dev | 05:26 | |
*** mnaser has quit IRC | 05:30 | |
*** nmistry has quit IRC | 05:36 | |
*** openpercept_ has joined #openstack-dev | 06:19 | |
ttx | jaypipes, vishy: starting diablo-3 release process in 10min. | 06:21 |
openstackjenkins | Project nova-milestone build #15: FAILURE in 7.5 sec: http://jenkins.openstack.org/job/nova-milestone/15/ | 06:32 |
*** mnaser has joined #openstack-dev | 06:32 | |
*** mnaser has quit IRC | 07:20 | |
*** BK_man_ has quit IRC | 08:06 | |
ttx | Nova and Glance diablo-3 are out ! | 08:28 |
*** BK_man_ has joined #openstack-dev | 08:30 | |
*** chomping has quit IRC | 08:38 | |
*** royh has joined #openstack-dev | 08:38 | |
royh | o/ hi guys :) | 08:38 |
*** mnaser has joined #openstack-dev | 08:42 | |
*** mnaser has quit IRC | 09:02 | |
*** darraghb has joined #openstack-dev | 09:05 | |
*** mnaser has joined #openstack-dev | 10:33 | |
*** thickskin has quit IRC | 10:38 | |
*** thickskin has joined #openstack-dev | 10:41 | |
BK_man | ttx: khm... got an exception with released version of Glance. Let me check it again... | 10:41 |
BK_man | ttx: it might be a regression here. | 10:42 |
BK_man | ttx: yep. rolling back glance - and it's working again | 10:49 |
BK_man | ttx: this is a trace from released version: http://paste.openstack.org/show/1972/ | 10:49 |
*** TimR has quit IRC | 10:51 | |
BK_man | ttx: that tarball working ok: http://glance.openstack.org/tarballs/glance-2011.3~d3~20110727.r144.tar.gz | 10:52 |
*** markvoelker has joined #openstack-dev | 11:28 | |
*** BK_man has quit IRC | 11:32 | |
*** BK_man_ is now known as BK_man | 11:32 | |
*** openpercept_ has quit IRC | 11:40 | |
*** lorin1 has joined #openstack-dev | 11:42 | |
ttx | BK_man: there is an added dependency, python-xattr | 11:44 |
*** martine has joined #openstack-dev | 11:45 | |
ttx | BK_man: maybe that's the issue ? | 11:45 |
*** mfer has joined #openstack-dev | 11:58 | |
*** markvoelker has quit IRC | 12:31 | |
*** markvoelker has joined #openstack-dev | 12:36 | |
*** mnaser has quit IRC | 12:39 | |
*** bsza has joined #openstack-dev | 12:49 | |
*** bcwaldon has joined #openstack-dev | 13:08 | |
zul | soren: ping are you around? | 13:21 |
zul | soren: unping | 13:22 |
*** donald650 has joined #openstack-dev | 14:00 | |
*** BK_man has quit IRC | 14:04 | |
*** kbringard has joined #openstack-dev | 14:04 | |
*** amccabe has joined #openstack-dev | 14:06 | |
*** donald650 has quit IRC | 14:11 | |
*** donald650 has joined #openstack-dev | 14:11 | |
*** gaitan has joined #openstack-dev | 14:12 | |
*** amccabe has quit IRC | 14:14 | |
*** amccabe has joined #openstack-dev | 14:15 | |
*** donald650 has quit IRC | 14:28 | |
*** jkoelker has joined #openstack-dev | 14:28 | |
*** tudamp has joined #openstack-dev | 14:43 | |
*** cp16net_ has joined #openstack-dev | 14:47 | |
*** dragondm has joined #openstack-dev | 14:58 | |
*** rnorwood has joined #openstack-dev | 15:04 | |
tr3buchet | vishy: care to revisit https://code.launchpad.net/~klmitch/nova/glance-private-images/+merge/69661 | 15:14 |
ttx | jaypipes: saw BK_man's issue from ~3 hours ago ? | 15:14 |
blamar | vishy: functionally testing keystone-migration, hopefully should be able to get it in | 15:30 |
*** chemikadze has quit IRC | 15:32 | |
jaypipes | ttx: no, looking now | 15:40 |
jaypipes | ttx: just need to add python-xattr to package dependencies | 15:40 |
ttx | jaypipes: yes, that's what I thought. The exception looked a bit funny | 15:41 |
ttx | i.e. <'module' object has no attribute 'xattr'> instead of <cannot find module xattr> -- that's why I wanted you to double-check | 15:42 |
*** vladimir3p has joined #openstack-dev | 15:45 | |
jaypipes | ttx: when I see BK_man reappear, I'll address it | 15:52 |
ttx | jaypipes: I told him already, before he left | 15:53 |
ttx | no ack though | 15:53 |
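The traceback BK_man posted shows an AttributeError rather than an ImportError, which is consistent with an incompatible extended-attribute library being installed: two Python projects have historically shipped a module named xattr with different APIs. A minimal, hypothetical check illustrating the distinction (this is not Glance code):

```python
# Hedged sketch: distinguish "no xattr module at all" (ImportError) from
# "an xattr module is present but lacks the xattr class the code expects"
# (AttributeError), which matches the exception quoted above.
try:
    import xattr
except ImportError:
    xattr = None


def xattr_status():
    """Report which failure mode, if any, applies on this host."""
    if xattr is None:
        return "missing: the python-xattr dependency is not installed"
    if not hasattr(xattr, "xattr"):
        return "incompatible: an xattr module without the xattr class is installed"
    return "ok"


if __name__ == "__main__":
    print(xattr_status())
```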
*** mnaser has joined #openstack-dev | 15:53 | |
kbringard | do I need to do nova-manage network create for flat DHCP? | 15:53 |
vishy | kbringard: yup | 15:55 |
kbringard | cool, and it just ignores the vlan stuff, or I set that argv entry to 0? | 15:56 |
vishy | it ignores vlan | 15:58 |
kbringard | ah, sweet, thanks :-) | 15:58 |
kbringard | is this still accurate, as far as the flags and whatnot that are required? | 15:59 |
kbringard | http://docs.openstack.org/cactus/openstack-compute/admin/content/configuring-flat-dhcp-networking.html | 15:59 |
kbringard | I would think that if the network is created and fixed_ips is populated, you wouldn't need the flat_network_dhcp_start option... | 15:59 |
kbringard | also, since it's flat, the gateway is the actual upstream router, and not some address that is going to come up with the network controller, right? | 16:04 |
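kbringard's point that flat_network_dhcp_start may be redundant once the fixed range exists can be shown with a purely illustrative snippet: a DHCP start address can be derived from the network itself. The helper name and the offset are assumptions made for this sketch, not nova's implementation:

```python
# Illustrative only (not nova code): derive a plausible DHCP start address
# from the fixed range created by "nova-manage network create".
import ipaddress


def default_dhcp_start(fixed_range, offset=2):
    # offset=2 skips the network address and a conventional gateway address;
    # the offset value is an assumption made for this sketch.
    net = ipaddress.ip_network(fixed_range, strict=False)
    return str(net.network_address + offset)


print(default_dhcp_start("10.0.0.0/24"))  # -> 10.0.0.2
```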
*** tudamp has left #openstack-dev | 16:06 | |
openstackgerrit | Yogeshwar Srikrishnan proposed a change to openstack/keystone: #16 Fixing pylint issues.Changes to remove unused group calls. https://review.openstack.org/111 | 16:20 |
vishy | blamar: doesn't it make more sense to get rid of _convert_timeformat and replace the calls with utils.isotime | 16:28 |
blamar | vishy: sure, just offering up a fix without really looking into it :) | 16:29 |
vishy | ok | 16:29 |
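For reference, a hedged sketch of what an isotime-style helper like the utils.isotime call mentioned above typically looks like; the exact format string is an assumption:

```python
# Hedged sketch of an isotime-style helper; the format string is assumed.
import datetime

TIME_FORMAT = "%Y-%m-%dT%H:%M:%SZ"


def isotime(at=None):
    """Return the given (or current) UTC time as an ISO 8601 string."""
    if at is None:
        at = datetime.datetime.utcnow()
    return at.strftime(TIME_FORMAT)


print(isotime(datetime.datetime(2011, 7, 28, 16, 28)))  # 2011-07-28T16:28:00Z
```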
kbringard | it looks like you have to create a project, even if you're not using vlan mode… is that correct? | 16:29 |
vladimir3p | guys, sorry for repeating it, but is there any chance to get more eyes on our VSA proposal? https://code.launchpad.net/~vladimir.p/nova/vsa/+merge/68987 | 16:31 |
*** lorin1 has left #openstack-dev | 16:37 | |
*** lorin1 has joined #openstack-dev | 16:38 | |
vishy | blamar: fix pushed and tests pass | 16:39 |
vishy | vladimir3p: I will be looking at it again today | 16:39 |
blamar | vishy: k, running tests again then it'll be good | 16:39 |
vishy | vladimir3p: been busy with d3 milestone so haven't had a chance to dive in | 16:39 |
vladimir3p | vishy: thanks a lot. I will update the BP spec and will add some implementation details | 16:40 |
vladimir3p | vishy: sure, np | 16:40 |
openstackgerrit | Yogeshwar Srikrishnan proposed a change to openstack/keystone: #16 Changes to remove unused group calls.Pylint fixes. https://review.openstack.org/112 | 16:40 |
*** zaitcev has joined #openstack-dev | 16:43 | |
openstackgerrit | Yogeshwar Srikrishnan proposed a change to openstack/keystone: #16 Changes to remove unused group clls. https://review.openstack.org/113 | 16:43 |
jaypipes | dolphm: heyo. got a few minutes today to check out Keystone testing? | 17:05 |
jaypipes | bcwaldon: I thought *you* were Brian II and blamar was Brian I? :) | 17:12 |
blamar | jaypipes: that is correct | 17:13 |
jaypipes | blamar: :) | 17:13 |
*** yogirackspace has joined #openstack-dev | 17:14 | |
bcwaldon | jaypipes: don't you start | 17:29 |
jaypipes | lol | 17:29 |
jaypipes | annegentle: is the openstack.org wiki CSS editable on LP or GH? :) | 17:38 |
annegentle | nope, good idea though | 17:39 |
annegentle | not that I know of anyway | 17:39 |
*** darraghb has quit IRC | 17:42 | |
vishy | vladimir3p: ping | 17:43 |
vladimir3p | vishy: pong :-) | 17:43 |
vishy | so i want to go over some questions here because it is faster | 17:43 |
vishy | did you rename volume_type to drive_type? | 17:44 |
vladimir3p | vishy: sure | 17:44 |
vladimir3p | vishy: hmm... it was drive type from the beginning | 17:44 |
vishy | ok I thought there was a volume_type in there at some point | 17:44 |
vishy | apparently i missed it | 17:44 |
vladimir3p | vishy: but it is pretty close to volume type | 17:45 |
vladimir3p | vishy: what is going on is: we create a volume from drives of particular type | 17:45 |
vladimir3p | vishy: the idea is to create a VSA (compute instances) and associate multiple drives to them | 17:45 |
vladimir3p | vishy: for these drives inside of nova we create special volumes | 17:46 |
vishy | vladimir3p: so I'm concerned that these additions are creating a bunch of specific code paths throughout the system for one driver | 17:46 |
vladimir3p | vishy: it is a nova-volume + special SW responsibility to decide either to dedicate full physical drive or some virtualized form of it | 17:46 |
vishy | vladimir3p: and I'd like to figure out how we can somehow decouple things a bit more | 17:47 |
vladimir3p | vishy: ok | 17:47 |
vladimir3p | vishy: in general we thought that VSAs might be provided by different vendors | 17:47 |
vishy | vladimir3p: it would be great if we could be at the point where there are generalizations in nova to support all of the stuff that you need | 17:47 |
vladimir3p | vishy: exactly | 17:47 |
vishy | vladimir3p: and nova-vsa + driver could be a separate component | 17:48 |
vishy | a plug-in if you will | 17:48 |
vladimir3p | vishy: ok... plug-in works ... will be better to see it as a part of nova | 17:48 |
vladimir3p | vishy: but I definitely understand your concerns | 17:49 |
vishy | vladimir3p: core will need to decide whether it is a nova-component or not, but I think we could do it in a way | 17:49 |
vishy | that would allow you to still do what you need whether or not it is in core | 17:49 |
vishy | so i'd like to get it into multiple proposals | 17:50 |
vishy | 1) all of the stuff in nova needed to make it work | 17:50 |
vishy | 2) the added nova-vsa / schedulers / drivers | 17:51 |
vishy | so let me go through the integration points step by step | 17:51 |
vishy | 1) cloud.py -> not sure about the addition here. Are you planning on writing command line tools to use these commands via the ec2 api? | 17:52 |
vladimir3p | vishy: we have our application using Openstack APIs, ... and have all the changes in nova-manage using cloud... we were thinking about creating a separate CLI utility for all our stuff | 17:53 |
vladimir3p | vishy: (or to integrate it with nova-manage completely) | 17:53 |
vladimir3p | vishy: in general we can discard ec2 part ... probably. | 17:54 |
vishy | vladimir3p: cloud.py is supposed to be a compatibility layer for ec2, so if they aren't ec2 supported methods they shouldn't go there. | 17:54 |
vishy | perhaps those methods could move into their own file that is called via nova-manage or even a separate cli | 17:55 |
vishy | 2) -> the additions in compute_api. Why is that necessary? | 17:55 |
vladimir3p | vishy: I will need to consult with our PRD folks, but in general our GUI & apps are using Openstack APIs for now | 17:56 |
vladimir3p | vishy: (it was for EC2) | 17:57 |
vladimir3p | vishy: for compute APIs... Currently the main one is vsa_id | 17:58 |
vladimir3p | vishy: we can try to move it to meta-data, but it is way easier to operate when it is part of instances | 17:58 |
vladimir3p | vishy: also, the next step will be to perform all necessary filterings and hide admin (VSA) instances | 17:58 |
vishy | vladimir3p: i think we should do that via metadata | 17:59 |
*** lorin1 has quit IRC | 17:59 | |
vladimir3p | vishy: ok... I suppose it depends if we will introduce VSAs as part of nova or if it will be some separate plug-in | 18:01 |
vladimir3p | vishy: IMHO, if (when) VSA will go inside, it will make sense to have this on top-level (instances) and not in meta-data | 18:02 |
vishy | vladimir3p: why do you think it should be a field in instances? | 18:02 |
vladimir3p | vishy: I suppose that in general VSA could replace Lunr and provide block storage for cloud | 18:03 |
vladimir3p | vishy: wrt: field in instances vs metadata ... such instances will not be regular instances | 18:03 |
vladimir3p | (that user can operate with). | 18:03 |
vishy | it is a specific kind of instance not regular instance though | 18:03 |
vladimir3p | yeah, in general we wanted to introduce the concept of instance categories ... but not sure if/when it will go through | 18:04 |
vishy | if we have nova-db for dbaas do we add db_id? | 18:04 |
vladimir3p | if categories will be added to nova, such field (vsa_id) might be category-dependent | 18:04 |
vladimir3p | good point :-) | 18:05 |
vishy | so categories are just a type of metadata | 18:05 |
vishy | so we need InstanceMetadata and VolumeMetadata | 18:05 |
openstackgerrit | Yogeshwar Srikrishnan proposed a change to openstack/keystone: #16 Changes to remove unused group clls. https://review.openstack.org/107 | 18:06 |
vladimir3p | vishy: ok, makes sense - we can move it out there to special metadata | 18:06 |
openstackjenkins | Project nova build #1,170: SUCCESS in 4 min 24 sec: http://jenkins.openstack.org/job/nova/1170/ | 18:06 |
openstackjenkins | Tarmac: --Stolen from https://code.launchpad.net/~cerberus/nova/lp809909/+merge/68602 | 18:06 |
openstackjenkins | Fixes lp809909 | 18:06 |
openstackjenkins | Migrate of instance with no local storage fails with exception | 18:06 |
openstackjenkins | Simply checks to see if the instance has any local storage, and if not, skips over the resize VDI step. | 18:06 |
*** lorin1 has joined #openstack-dev | 18:07 | |
*** lorin1 has left #openstack-dev | 18:07 | |
vladimir3p | vishy: for volume meta-data ... we've added drive_type ref there | 18:08 |
vladimir3p | vishy: if you want this one to go to metadata as well it will be quite problematic to perform joins | 18:08 |
vishy | vladimir3p: what type of join might you need to do? | 18:08 |
vladimir3p | volume to drive_type | 18:09 |
vladimir3p | vishy: the idea is to create volumes from particular drive_types | 18:09 |
vladimir3p | actually our volumes also have "categories" | 18:09 |
vladimir3p | FrontEnd volumes vs BackEnd volumes | 18:09 |
vladimir3p | first ones are presented by VSA and latter are consumed by VSAs | 18:10 |
vishy | i'm looking at this to try and figure out how we could do this... | 18:12 |
*** rnorwood has quit IRC | 18:12 | |
vladimir3p | the request to allocate storage for a VSA is translated into volumes, goes to the scheduler (which knows what drive types are present where) and based on that is sent to the appropriate node | 18:13 |
vladimir3p | in general, we could perform an additional query to DB (or two) and retrieve this info | 18:13 |
vladimir3p | but wanted to avoid it | 18:14 |
vishy | i'm considering the way that computes report capabilities | 18:14 |
vishy | and then the scheduler makes decisiions based on the reported capabilities | 18:14 |
vladimir3p | we implemented it exactly in the same way | 18:15 |
vladimir3p | nova-volume retrieves capabilities from driver and reports to scheduler | 18:15 |
vishy | so the drive_type is used by the driver? | 18:15 |
vladimir3p | by nova-volume driver - yes. It provides it to SW responsible for volume creation | 18:16 |
vladimir3p | each node may have different types of storage | 18:16 |
vladimir3p | and when volume is created it should know from what pool (with what QoS props) it should do it | 18:17 |
vishy | so get_all_by_vsa is troublesome | 18:19 |
vishy | if the volumes are reporting capabilities to the scheduler | 18:20 |
vishy | can't the scheduler determine this without making a db call? | 18:20 |
vladimir3p | let me see where scheduler is calling the DB ... | 18:21 |
dabo | vishy: don't know if the volume capabilities is done yet, but yes, that's the idea | 18:21 |
vladimir3p | vishy: in our version each nova-volume reports its capabilities to scheduler. Based on that scheduler knows where (and how many) drives of particular type are | 18:23 |
vladimir3p | when a volume creation request reaches the scheduler, it checks what the requested drive type is and based on that performs selection | 18:24 |
vladimir3p | here it operates with specified drive type | 18:24 |
vladimir3p | where drive type includes type (SATA/SAS/etc), capacity, etc... | 18:24 |
vladimir3p | we have 2 entry points for scheduler: create_volume & create_volumes | 18:25 |
vladimir3p | by default the latter one is used (as scheduler should create a complex decision based on # of requested drives/volumes) | 18:25 |
vladimir3p | in a single create_volume call we go to the DB in order to retrieve all vol's info ... we can probably avoid it by putting all volume parameters into the message, but we need the drive type info there as well | 18:26 |
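A rough sketch of the reporting-and-selection flow vladimir3p describes above; the host names, drive types, and capability structure are made up for illustration and are not the proposed VSA code:

```python
# Illustrative sketch: volume nodes report per-drive-type capabilities and
# the scheduler picks hosts that still have enough free drives of the
# requested type, avoiding a DB round trip for the common case.
capabilities = {
    "volume1": {"SATA_1TB": {"free_drives": 8}, "SAS_300GB": {"free_drives": 0}},
    "volume2": {"SATA_1TB": {"free_drives": 2}, "SAS_300GB": {"free_drives": 4}},
}


def hosts_for_drive_type(drive_type, count=1):
    """Return hosts reporting at least `count` free drives of the given type."""
    return [
        host
        for host, caps in capabilities.items()
        if caps.get(drive_type, {}).get("free_drives", 0) >= count
    ]


print(hosts_for_drive_type("SAS_300GB", count=2))  # ['volume2']
```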
vishy | vladimir3p: in general i feel like there is some underlying changes we could make to make all this stuff work more generally | 18:27 |
vladimir3p | vishy: it would be nice. I agree that we need to generalize it a bit | 18:27 |
vishy | i think i'm going to have to dive in a little more and come up with a specific set of changes and suggestions | 18:27 |
vladimir3p | vishy: but we also want it to be the part of nova... | 18:28 |
vladimir3p | Awesome!!! | 18:28 |
vladimir3p | vishy: pls let me know if you need any clarifications | 18:29 |
*** markvoelker has quit IRC | 18:30 | |
*** mnaser has quit IRC | 18:31 | |
*** mnaser has joined #openstack-dev | 18:31 | |
dolphm | is there a pylint command to reduce its output to simply the total message count? | 18:34 |
dolphm | jaypipes: i'm back, if you want to work on tests | 18:35 |
jaypipes | dolphm: cool, in about 30 minutes/ | 18:35 |
jaypipes | ? | 18:35 |
dolphm | jaypipes: that works | 18:35 |
bcwaldon | jaypipes: will jenkins kick back your S3 branch if I send it in? | 18:45 |
jaypipes | bcwaldon: see latest comment on MP... just pushed a fix. I ran a param build and saw the problem. | 18:46 |
bcwaldon | jaypipes: Ah, I ran the tests and Approved before you commented | 18:47 |
jaypipes | bcwaldon: approve again then please :) grr. boto bugs. | 18:48 |
jaypipes | bcwaldon: http://code.google.com/p/boto/issues/detail?id=540 | 18:48 |
jaypipes | bcwaldon: I will be *so* happy when 713154 lands. | 18:49 |
bcwaldon | jaypipes: oh, I know | 18:50 |
bcwaldon | jaypipes: me, too | 18:50 |
bcwaldon | me too... | 18:50 |
*** mwhooker_ has joined #openstack-dev | 18:51 | |
bcwaldon | jaypipes: on its way in | 18:57 |
Vek | vishy: you around? | 19:01 |
*** jhtran has joined #openstack-dev | 19:01 | |
bcwaldon | jaypipes: ! | 19:02 |
jhtran | anyone using osx as their dev env w/ xcode 4.x or higher? ever since i got xcode 4, greenlet has probs compiling every time i have to do virtualenv for a branch | 19:05 |
*** mdomsch has joined #openstack-dev | 19:10 | |
bcwaldon | jaypipes: need to add python-boto to glance packages | 19:13 |
openstackgerrit | Dolph Mathews proposed a change to openstack/keystone: Added support for versioned openstack MIME types https://review.openstack.org/114 | 19:19 |
jaypipes | bcwaldon: gotcha. thx. | 19:22 |
*** rnorwood has joined #openstack-dev | 19:23 | |
vishy | Vek: heyo | 19:24 |
openstackjenkins | Project nova build #1,171: SUCCESS in 4 min 16 sec: http://jenkins.openstack.org/job/nova/1171/ | 19:31 |
openstackjenkins | Tarmac: Fixes issue with OSAPI passing compute API a flavorid instead of an instance identifier. Added tests. | 19:31 |
*** AhmedSoliman has joined #openstack-dev | 19:32 | |
*** negronjl has joined #openstack-dev | 19:33 | |
openstackjenkins | Project nova build #1,172: SUCCESS in 4 min 27 sec: http://jenkins.openstack.org/job/nova/1172/ | 19:43 |
openstackjenkins | Tarmac: This change creates a minimalist API abstraction for the nova/rpc.py code so that it's possible to use other queue mechanisms besides Rabbit and/or AMQP, and even use other drivers for AMQP rather than Rabbit. The change is intended to give the least amount of interference with the rest of the code, fixes several bugs in the tests, and works with the current branch. I also have a small demo driver+server for | 19:43 |
blamar | dolphm: "pylint -rn <files> | wc -l" might do it | 19:53 |
openstackjenkins | Project nova build #1,173: SUCCESS in 4 min 20 sec: http://jenkins.openstack.org/job/nova/1173/ | 19:56 |
openstackjenkins | Tarmac: Fix various errors discovered by pylint and pyflakes. | 19:56 |
openstackjenkins | Project nova-tarball-bzr-delta build #404: FAILURE in 25 sec: http://jenkins.openstack.org/job/nova-tarball-bzr-delta/404/ | 19:57 |
openstackjenkins | * Tarmac: Round 1 of changes for keystone integration. | 19:57 |
openstackjenkins | * Modified request context to allow it to hold all of the relevant data from the auth component. | 19:57 |
openstackjenkins | * Pulled out access to AuthManager from as many places as possible | 19:57 |
openstackjenkins | * Massive cleanup of unit tests | 19:57 |
openstackjenkins | * Made the openstack api fakes use fake Authentication by default | 19:57 |
openstackjenkins | There are now only a few places that are using auth manager: | 19:57 |
openstackjenkins | * Authentication middleware for ec2 api (will move to stand-alone middleware) | 19:57 |
openstackjenkins | * Authentication middleware for os api (will be deprecated in favor of keystone) | 19:57 |
openstackjenkins | * Accounts and Users apis for os (will be switched to keystone or deprecated) | 19:57 |
openstackjenkins | * Ec2 admin api for users and projects (will be removed) | 19:57 |
openstackjenkins | * Nova-manage user and project commands (will be deprecated and removed with AuthManager) | 19:57 |
openstackjenkins | * Tests that test the above sections (will be converted or removed with their relevant section) | 19:57 |
*** AhmedSoliman has quit IRC | 19:57 | |
openstackjenkins | * Tests for auth manager | 19:57 |
openstackjenkins | * Pipelib (authman can be removed once ec2 stand-alone middleware is in place) | 19:57 |
openstackjenkins | * xen_api (for getting images from old objectstore. I think this can be removed) | 19:57 |
openstackjenkins | Vish | 19:57 |
openstackjenkins | * Tarmac: Fix various errors discovered by pylint and pyflakes. | 19:57 |
vishy | sandywalsh: ping | 19:58 |
* Vek waves at vishy | 19:59 | |
Vek | now that your branch is merged, mind reviewing mine? :) | 19:59 |
vishy | sure sure | 20:00 |
Vek | cool, thanks :) | 20:00 |
vishy | i just noticed that the tarball failed | 20:00 |
openstackjenkins | Project nova build #1,174: SUCCESS in 4 min 32 sec: http://jenkins.openstack.org/job/nova/1174/ | 20:02 |
openstackjenkins | Tarmac: Round 1 of changes for keystone integration. | 20:02 |
openstackjenkins | * Modified request context to allow it to hold all of the relevant data from the auth component. | 20:02 |
openstackjenkins | * Pulled out access to AuthManager from as many places as possible | 20:02 |
openstackjenkins | * Massive cleanup of unit tests | 20:02 |
openstackjenkins | * Made the openstack api fakes use fake Authentication by default | 20:02 |
openstackjenkins | There are now only a few places that are using auth manager: | 20:02 |
openstackjenkins | * Authentication middleware for ec2 api (will move to stand-alone middleware) | 20:02 |
openstackjenkins | * Authentication middleware for os api (will be deprecated in favor of keystone) | 20:02 |
openstackjenkins | * Accounts and Users apis for os (will be switched to keystone or deprecated) | 20:02 |
openstackjenkins | * Ec2 admin api for users and projects (will be removed) | 20:02 |
openstackjenkins | * Nova-manage user and project commands (will be deprecated and removed with AuthManager) | 20:02 |
openstackjenkins | * Tests that test the above sections (will be converted or removed with their relevant section) | 20:02 |
openstackjenkins | * Tests for auth manager | 20:02 |
openstackjenkins | * Pipelib (authman can be removed once ec2 stand-alone middleware is in place) | 20:02 |
openstackjenkins | * xen_api (for getting images from old objectstore. I think this can be removed) | 20:02 |
openstackjenkins | Vish | 20:02 |
vishy | Vek: there are merge errors | 20:04 |
vishy | Vek: perhaps you merged a little too quickly? Try grabbing trunk and merging again | 20:04 |
Vek | vishy: quite possible... | 20:04 |
Vek | and indeed it was a little too quickly. | 20:05 |
*** mdomsch has quit IRC | 20:05 | |
*** mwhooker_ has quit IRC | 20:06 | |
vishy | dabo: ping | 20:08 |
dabo | vishy: pong | 20:09 |
vishy | dabo: question about current state of the scheduler code | 20:09 |
dabo | i'll try | 20:09 |
vishy | dabo: say i want to have two different hypervisors in the same zone | 20:09 |
vishy | is there enough information reported to the scheduler to make a decision about where to send a vm | 20:10 |
dabo | vishy: hmmm... I've only seen the code for how xen handles this | 20:11 |
dabo | i don't think 'hypervisor_type' is included | 20:11 |
vishy | i'm asking because there is a request for volumes to allow different drivers in the same zone | 20:11 |
dabo | the idea was that each hypervisor would report its own state in its own way | 20:11 |
dabo | but I can see how having some required data points (i.e., type) would make sense | 20:12 |
vishy | i'm considering something like the following: adding an optional required_hypervisor to the InstanceTypes table | 20:12 |
dabo | would there ever be an "optional" hypervisor for an instance type? | 20:13 |
vishy | well a type could be only xen hypervisor | 20:13 |
dabo | iow, would 'hypervisor' be sufficient for a name | 20:13 |
vishy | or it could run on all | 20:13 |
dabo | ah, I didn't know that the same type could run on all | 20:14 |
vishy | but as i was thinking about it i've been trying to generalize it | 20:14 |
vishy | so say you could have an arbitrary blob associated with a type that would be passed to the scheduler | 20:14 |
vishy | inside this blob you could put things like required hypervisor, etc. | 20:15 |
kbringard | quick ? about flat DHCP… can public_interface and flat_interface be the same? | 20:15 |
dabo | vishy: that sounds like sandywalsh's json-based requirements matching | 20:16 |
vishy | kbringard: yes | 20:16 |
vishy | kbringard: actually no | 20:16 |
kbringard | yea, that's what I was starting to wonder | 20:16 |
vishy | kbringard: you will want to set public_interface to br100 | 20:16 |
kbringard | ah so | 20:17 |
vishy | or whatever bridge you are using | 20:17 |
kbringard | even if br100 is attached to eth0, which is the flat_interface? | 20:17 |
dabo | vishy: my concern is that currently there is no defined requirements that each host must report | 20:17 |
dabo | s/is no/are no/ | 20:17 |
dolphm | blamar: ended up using "pylint --output-format=parseable keystone | grep 'keystone/' | wc -l" thanks! | 20:17 |
vishy | hmm, the requirements matching could work, but where do the requirements come from? | 20:18 |
dabo | vishy: so hosts of one hypervisor may report a 'hypervisor_type', while another may not | 20:18 |
vishy | sure but you can default to unknown | 20:18 |
*** cp16net_ has quit IRC | 20:19 | |
dabo | requirements would be the flexible part that each deployment could customize | 20:19 |
vishy | basically i think we need to be able to pass some arbitrary data to the scheduler from the type | 20:19 |
vishy | and also to the driver | 20:19 |
vishy | so maybe I should just call it scheduler_data | 20:19 |
dabo | you may want to talk with sandywalsh about this. He's the one that did the work on that part. He could explain the thought behind it better | 20:19 |
dabo | Except that I think he's gone for the day | 20:19 |
vishy | yeah i pinged him but he isn't around :) | 20:20 |
dabo | Have you looked at nova/scheduler/least_cost.py? | 20:20 |
Vek | vishy: conflicts fixed. | 20:20 |
vladimir3p | vishy/dabo: there should be some sort of compromise. either we define a type of scheduler per type of data or standardize on requirements, and in this case one scheduler can handle some standardized data | 20:20 |
vladimir3p | vishy: thanks a lot for looking at it | 20:21 |
dabo | vladimir3p: What I was thinking was a set of basic data points that each hypervisor type should report. Each type could then add any additional information that it wanted | 20:21 |
dabo | vladimir3p: things like 'hypervisor_type', 'enabled', etc. | 20:22 |
vladimir3p | dabo: exactly. this scenario is more aligned with "standardizing" types of data reported | 20:22 |
dabo | vladimir3p: the other side of this was that at the summit, it was made very clear that different people had very different needs | 20:23 |
vladimir3p | dabo: we can go another route and define different schedulers.. but it means that we need to have multiple schedulers in the system | 20:23 |
dabo | the ability to customize the selection criteria is critical | 20:23 |
dabo | vladimir3p: not separate schedulers | 20:23 |
dabo | scheduler gets info about its compute nodes (or volumes) from those nodes | 20:24 |
kbringard | thanks vishy, that fixed it | 20:24 |
sandywalsh | sorry vishy just following thread now | 20:24 |
dabo | each node would report its own capabilities | 20:24 |
vishy | oh hey sandywalsh | 20:24 |
dabo | we hadn't dealt with heterogeneous hypervisor deployments | 20:25 |
sandywalsh | vishy, each hypervisor reports their status to the ZoneManager periodically. | 20:25 |
sandywalsh | vishy, currently they don't say exactly what hypervisor they are, but they could easily enough | 20:25 |
sandywalsh | I filed a bug to standardize what gets reported | 20:25 |
vishy | so we have requirements in volume to support volume types that can a) go to specific backends and b) pass arbitrary metadata to backends | 20:25 |
vishy | so I was looking at the stuff that was done for instances and see how much we have already | 20:26 |
vishy | this would be the equivalent of putting some data in instance type that would do the same thing | 20:26 |
sandywalsh | yes, it's very compute-centric right now, but could easily be expanded to support volume/network I think | 20:26 |
sandywalsh | after all, it's just key-value matching essentially | 20:26 |
vishy | sandywalsh: where does the metadata come from? | 20:27 |
sandywalsh | each virt layer reports it | 20:27 |
vishy | sandywalsh: is the plan to expose this to the user? | 20:27 |
vishy | i mean from the request side | 20:27 |
sandywalsh | ah | 20:27 |
sandywalsh | currently it's all InstanceType based, but we added /zones/create to handle new request formats | 20:27 |
sandywalsh | (like a JSON query-based request) | 20:28 |
sandywalsh | kind of a lisp-like grammar for more expressive requests | 20:28 |
vishy | how do you feel about putting a json blog in instancetype table that is passed to the scheduler | 20:28 |
Vek | don't you think a full-up blog is overkill? :) | 20:28 |
* Vek ducks rapidly | 20:28 | |
vishy | s/blog/blob | 20:29 |
vishy | :p | 20:29 |
vishy | thx Vek | 20:29 |
sandywalsh | well, I thought instancetype was supposed to be rather rigid | 20:29 |
sandywalsh | there was a change recently to add extra-attributes to InstanceType table | 20:29 |
vishy | sandywalsh: I'm thinking for third party schedulers and drivers | 20:29 |
Vek | :) | 20:30 |
vishy | basically I'm thinking of: adding some extra metadata that is stuffed into request context | 20:30 |
vishy | so that the scheduler and driver can do the right thing | 20:30 |
sandywalsh | could you put it in the extra-attributes? | 20:31 |
sandywalsh | and that makes it optional | 20:31 |
sandywalsh | rather than extend InstanceType table for the blob? | 20:31 |
vishy | did extra-attributes actually show up? | 20:31 |
sandywalsh | looking | 20:32 |
vishy | we have instance_metadata | 20:32 |
vishy | i don't see anything else | 20:32 |
sandywalsh | InstanceTypeExtraSpecs | 20:32 |
vishy | ah cool that is what i was looking for | 20:33 |
sandywalsh | that should work and not upset the apple cart. then your custom drivers/etc can pick it up and use it | 20:33 |
sandywalsh | and if you wanted it more dynamic you could modify OS API /zones/boot | 20:34 |
sandywalsh | to accept the JSON query | 20:34 |
vishy | sandywalsh: how do the extra specs get to the scheduler? | 20:34 |
vishy | are they loaded from the db? One sec, checking | 20:35 |
sandywalsh | I'm pretty sure they're loaded from the db ... perhaps not in nova.compute.api, but it may need to be done explicitly in the scheduler | 20:35 |
sandywalsh | InstanceType is passed to the scheduler via the RPC call from compute.api | 20:35 |
vishy | doesn't look like there is a scheduler that is doing this | 20:35 |
sandywalsh | I think most of that is done in HostFilter currently | 20:36 |
sandywalsh | that's where the hard requirements are filtered out (can_boot_windows, etc) | 20:37 |
vishy | sandywalsh: ok i think this will work for VolumeTypes. I think we'll have to use a default flavor for ec2 | 20:37 |
vishy | sandywalsh: next part: metadata for the driver | 20:38 |
sandywalsh | scheduling (using the zone scheduler) is in two parts: host filtering and host weighing | 20:38 |
vishy | i could query at the driver level for extra specs too i guess | 20:38 |
sandywalsh | yes, I was going to say ... you can stuff what you need in extra-specs | 20:38 |
vishy | but it seems like the scheduler should be able to pass extra data to the driver | 20:38 |
sandywalsh | yes, correct, or the driver can look it up itself | 20:39 |
sandywalsh | pie/cake really | 20:39 |
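A hedged sketch of the key/value matching idea in the exchange above: extra specs on an instance type (for example a required hypervisor) are matched against the capabilities each host reports. The names and data shapes are illustrative, not nova's actual HostFilter code:

```python
# Illustrative sketch of extra-specs matching against reported capabilities.
hosts = {
    "compute1": {"hypervisor_type": "xen", "enabled": True},
    "compute2": {"hypervisor_type": "kvm", "enabled": True},
}

extra_specs = {"hypervisor_type": "xen"}  # e.g. from InstanceTypeExtraSpecs


def filter_hosts(hosts, extra_specs):
    """Keep only hosts whose reported capabilities satisfy every extra spec."""
    return [
        name
        for name, caps in hosts.items()
        if all(caps.get(key) == value for key, value in extra_specs.items())
    ]


print(filter_hosts(hosts, extra_specs))  # ['compute1']
```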
vishy | so I guess the larger question is: is there any shared concept of volume_type | 20:40 |
vishy | or is volume_type just id / name and everything else goes in VolumeExtraSpecs | 20:40 |
sandywalsh | currently there is no concept of volume in the zone aware scheduler ... it's compute only right now. But it shouldn't be hard to change. | 20:42 |
sandywalsh | and actually, I wonder if it needs to be zone-aware | 20:43 |
sandywalsh | since the volume will stem from where the host is placed | 20:43 |
vishy | sandywalsh: it will at the very least need to be able to be aware of zones | 20:43 |
vishy | so that someone can create a volume local to a compute | 20:43 |
sandywalsh | yes, certainly, and the more complex schedulers are available in that framework | 20:44 |
vishy | but I'm more concerned about using the spec stuff from schedulers | 20:44 |
vishy | speaking of which, the model of using one scheduler class for both compute and volume is terrible | 20:44 |
dolphm | mtaylor: is jenkins asleep or what? | 20:44 |
mtaylor | dolphm: aroo? | 20:44 |
sandywalsh | vishy, well, now's a good time for us to address that if need be. | 20:45 |
dolphm | mtaylor: we have a review (114) that's been at +2 for about 40 minutes, and jenkins hasn't stepped in | 20:45 |
mtaylor | dolphm: looking | 20:45 |
dolphm | mtaylor: https://review.openstack.org/#change,114 | 20:45 |
mtaylor | dolphm: +1 and +1 != +2 ... (I'm not crazy about the +2 naming, but oh well) | 20:46 |
mtaylor | dolphm: you need someone in keystone-core to actually give it a +2 vote | 20:46 |
dolphm | mtaylor: oh cool, didn't know that rule had been implemented :) | 20:47 |
mtaylor | dolphm: yup! | 20:47 |
sandywalsh | vishy sorry, but I have to run ... chat later on this? | 20:47 |
vishy | sandywalsh: sure | 20:47 |
mtaylor | dolphm: should you or yogi be able to do that? right now only ziad and jesse have that power | 20:47 |
sandywalsh | cool, thx | 20:47 |
vishy | Vek: where is auth_token actually passed into RequestContext? | 20:47 |
dolphm | mtaylor: i think it's fine as-is, we talked about at least one of us and one of them having to review everything | 20:48 |
Vek | vishy: in the nova_auth_token.py plug-in in keystone. | 20:48 |
vishy | ah | 20:48 |
Vek | I have a patch waiting to go in | 20:48 |
vishy | no wonder i couldn't find it ) | 20:48 |
Vek | (i.e., I'm sitting on it until this goes into nova) | 20:48 |
vishy | Vek: is it a string? | 20:48 |
Vek | :) | 20:48 |
Vek | yep. | 20:48 |
mtaylor | dolphm: k. ziad should probably re-approve 107 as well now that yogi re-submitted | 20:49 |
vishy | ok cool, just making sure it would get in | 20:49 |
vishy | s/in/transferred through the queue properly | 20:49 |
dolphm | mtaylor: will gerrit eventually send a reminder to reviewers? | 20:49 |
vishy | Vek: this doesn't fix the glance support for kvm right? | 20:50 |
Vek | "kvm"? | 20:50 |
mtaylor | dolphm: it _should_ be sending emails already BUT, I'll put that on the list | 20:50 |
Vek | no, I don't think it does. | 20:50 |
Vek | (libvirt, right?) | 20:50 |
Vek | I pass the context in to those functions, now, but I didn't know where to send it from there. | 20:50 |
vishy | Vek: right | 20:50 |
vishy | lemme look real quick | 20:51 |
Vek | same with vmwareapi, of course. | 20:51 |
dolphm | mtaylor: I'm just used to Code Collaborator, which will send daily reminders if you're needed as a reviewer | 20:51 |
mtaylor | dolphm: it's a good idea - and I see no reason why we can't do it | 20:52 |
vishy | hehe you called it cxt? | 20:52 |
kbringard | isn't that what that weiner guy did that got him in trouble? | 20:52 |
openstackgerrit | Dolph Mathews proposed a change to openstack/keystone: Tweaked the sample dtest to avoid pylint errors https://review.openstack.org/115 | 20:53 |
openstackgerrit | Dolph Mathews proposed a change to openstack/keystone: pylint fix: avoiding overriding the built in next() function https://review.openstack.org/116 | 20:54 |
vishy | Vek: doesn't rescue need a context as well? | 21:00 |
* Vek looks... | 21:03 | |
vishy | it does | 21:03 |
vishy | rescue -> spawn_rescue -> spawn | 21:03 |
bcwaldon | vishy: I'm not quite sure how how to move forward with https://code.launchpad.net/~rackspace-titan/nova/remove_xenapi_image_service_flag_lp708754/+merge/69193 | 21:04 |
bcwaldon | thoughts? | 21:04 |
Vek | where does it call spawn? What file? | 21:05 |
Vek | yeah, you're right, I totally missed that instance of spawn | 21:07 |
Vek | of course, there's no way that it could work anymore | 21:07 |
Vek | even minus my addition of the ctx argument, the network_info argument isn't passed to it. | 21:07 |
vishy | Vek: lp:~vishvananda/nova/glance-private-images | 21:09 |
vishy | there is libvirt support | 21:09 |
vishy | we need to make a bug for hyperv and vmware | 21:09 |
vishy | bcwaldon: we should get in touch with the citrix team and make sure they don't mind. Also it will need a trunk merge as well | 21:10 |
Vek | *nod* | 21:10 |
openstackgerrit | Dolph Mathews proposed a change to openstack/keystone: simple pylint fixes https://review.openstack.org/117 | 21:10 |
bcwaldon | vishy: sounds good. Do you habe a contact I can hit up? | 21:10 |
vishy | i will ping someone | 21:11 |
bcwaldon | vishy: thank you | 21:11 |
*** dolphm has quit IRC | 21:13 | |
Vek | vish: I recommend using "ctx" in place of "context" as function arguments, btw, just to make sure you don't accidentally shadow the context module. | 21:14 |
bcwaldon | ctxt | 21:14 |
dabo | contxt | 21:14 |
dabo | :) | 21:14 |
vishy | Vek: the problem is that we are using context everywhere and some things are called via kwargs | 21:15 |
* Vek has been using "ctx" since he worked on Kerberos | 21:15 | |
Vek | vishy: yeah :/ | 21:15 |
bcwaldon | vishy: that doesn't make it right ;) | 21:16 |
vishy | so it is dangerous. can we stay consistent and change them all at once? | 21:16 |
bcwaldon | I mean Vek | 21:16 |
vishy | i was using ctxt where needed, but I think it is confusing to have context in most places but then ctx in the driver | 21:16 |
*** negronjl has quit IRC | 21:17 | |
Vek | the other alternative is to import the context module as something else | 21:17 |
*** negronjl has joined #openstack-dev | 21:17 | |
Vek | "from nova import context as nova_context" | 21:17 |
Vek | thoughts? | 21:21 |
Vek | I know of at least one place where both the context and the context module are referred to within a single def... | 21:22 |
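A minimal sketch of the shadowing problem and the import-alias workaround Vek suggests; it assumes a nova source tree is importable, and get_admin_context is the existing nova call already quoted elsewhere in this log:

```python
# Minimal sketch (assumes nova is on the Python path): aliasing the context
# module keeps it reachable inside functions whose parameter is named
# "context", which would otherwise shadow the module.
from nova import context as nova_context


def do_something(context):
    # "context" here is the request context object passed by the caller;
    # the aliased module is still available for admin lookups.
    admin_context = nova_context.get_admin_context()
    return context, admin_context
```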
bcwaldon | vishy: problem! | 21:23 |
bcwaldon | vishy: can't build a venv because http://nova.openstack.org/Twisted-10.0.0Nova.tar.gz returns a 404 | 21:23 |
vishy | yeah i'm aware | 21:24 |
vishy | does mainline twisted have the fix now? | 21:24 |
vishy | maybe we can just up the twisted version and grab it from pypi | 21:24 |
bcwaldon | vishy: I never knew about this, so I'm of no help | 21:24 |
vishy | honestly we don't even use twisted anymore so maybe we should just remove it | 21:24 |
*** martine has quit IRC | 21:26 | |
bcwaldon | vishy: it's used in nova-instancemonitor | 21:26 |
bcwaldon | vishy: maybe not | 21:26 |
vishy | yeah nova-instancemonitor is dead | 21:27 |
bcwaldon | vishy: yeah, it's definitely still used in the code. Maybe it can be removed at this point | 21:27 |
vishy | lets do a merge that kills it and removes twistd.py and twisted dep | 21:27 |
bcwaldon | sure | 21:27 |
bcwaldon | you want to do it, or me | 21:28 |
vishy | bcwaldon: can you do it? Tracking down other stuff atm | 21:28 |
bcwaldon | vishy: yep | 21:28 |
*** alekibango has quit IRC | 21:37 | |
*** yogirackspace has left #openstack-dev | 21:42 | |
bcwaldon | vishy: https://code.launchpad.net/~rackspace-titan/nova/remove-twistd/+merge/69865 | 21:43 |
bcwaldon | vishy: building venv and running tests now | 21:43 |
s1rp | just for fun: here's a profile of nova's current slowest test: http://paste.openstack.org/show/1980/ | 22:00 |
vishy | s1rp: which test is it? | 22:03 |
s1rp | too_many_cores | 22:03 |
*** mfer has quit IRC | 22:03 | |
s1rp | wondering if a judiciously placed stub will speed it up | 22:04 |
*** amccabe has quit IRC | 22:07 | |
vishy | stub out the db calls yeah :) | 22:10 |
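A hedged illustration of the "stub out the db calls" suggestion, using the stdlib mock module; nova's tests at the time used a stubout helper instead, and the function being faked here is a made-up stand-in rather than the real quota code:

```python
# Illustrative sketch: replace an expensive database-backed lookup with a
# canned answer so the test only exercises the logic under test.
import unittest
from unittest import mock


def count_cores_on_host(host):
    """Stand-in for a slow, database-backed lookup."""
    raise RuntimeError("would hit the real database")


class TooManyCoresTestCase(unittest.TestCase):
    def test_too_many_cores(self):
        with mock.patch(__name__ + ".count_cores_on_host", return_value=8):
            self.assertEqual(count_cores_on_host("compute1"), 8)


if __name__ == "__main__":
    unittest.main()
```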
vishy | bcwaldon: got an ok from citrix guys on removing objectstore code | 22:10 |
vishy | bcwaldon: you will need to do a trunk merge though, I moved some of that stuff around in the authmanager branch | 22:10 |
bcwaldon | vishy: will do momentarily | 22:10 |
bcwaldon | vishy: fighting with venv crap | 22:11 |
bcwaldon | vishy: apparently we dont need to patch eventlet anymore | 22:11 |
vishy | no we don't :) | 22:13 |
bcwaldon | vishy: it'll come out in this twisted branch if thats okay | 22:13 |
bcwaldon | vishy: and do you want me to delete all of objectstore? | 22:13 |
vishy | no no just the imageservice stuff in xen | 22:14 |
bcwaldon | vishy: check | 22:14 |
s1rp | vishy: strangely self.context = context.get_admin_context() is the line that seems to slow the test down.... trying to wrap my head around that | 22:16 |
s1rp | anticipating some eventlet wonkiness | 22:16 |
s1rp | nm, misread the number, context line is fine | 22:18 |
*** kbringard has quit IRC | 22:21 | |
s1rp | heh, just noticed we have two (!) tests called test_too_many_cores | 22:26 |
s1rp | must have been a bad merge | 22:26 |
Vek | vishy: my branch is ready to review again. | 22:26 |
bcwaldon | vishy: remove_xenapi_image_service_flag should be good to go | 22:30 |
*** bsza has quit IRC | 22:32 | |
vladimir3p | Glance/image question: we are trying to register a raw image in glance and it keeps failing with MemoryError. We are using a command like: glance add name="vc-psp4-test" is_public=True container_format=bare image_state=available project_id=nubeblog architecture=x86_64 < /vc-dev-psp4-3.img any ideas? | 22:33 |
vladimir3p | same image works fine on other configs | 22:33 |
vladimir3p | (with exactly same cmd) | 22:34 |
*** bcwaldon has quit IRC | 22:36 | |
*** dragondm has quit IRC | 22:42 | |
*** jhtran has quit IRC | 22:47 | |
s1rp | vladimir3p: MemoryError sounds like it's buffering the entire image in memory as it's reading it into glance (thought we had fixed that)... | 22:47 |
s1rp | vladimir3p: how large is the image? | 22:47 |
vladimir3p | s1rp: it is more than 2GB and these nodes are just desktops | 22:50 |
vladimir3p | s1rp: other nodes are normal servers with enough memory | 22:50 |
s1rp | vladimir3p: which backend are you using, filesystem, swift? | 22:51 |
vladimir3p | filesystem | 22:51 |
s1rp | vladimir3p: are you running a pretty recent version of glance? | 22:52 |
vladimir3p | s1rp: hmm... need to check, but should be relatively recent | 22:52 |
vladimir3p | s1rp: glance 2011.3-dev | 22:52 |
s1rp | vladimir3p: gonna dd out some random data on my machine and try to replicate | 22:52 |
vladimir3p | s1rp: can send you the backtrace of this raise if you need it | 22:53 |
s1rp | vladimir3p: yeah that could be helpful, thanks! | 22:53 |
vladimir3p | s1rp: email? | 22:53 |
s1rp | you can drop it in paste.openstack.org | 22:54 |
s1rp | and drop it in here if you'd like | 22:54 |
vladimir3p | s1rp: sure | 22:54 |
vladimir3p | s1rp: http://paste.openstack.org/show/1981/ | 22:56 |
vladimir3p | s1rp: this particular host has 2GB of RAM | 22:57 |
s1rp | uhoh, looks like when we tried to make_body_seekable, we ended up introducing a copy of the entire image... | 22:58 |
s1rp | vladimir3p: looks like this is a bug that was introduced relatively recently (i think) | 22:58 |
vladimir3p | s1rp: ok. any workarounds? | 22:59 |
s1rp | vladimir3p: not sure of a workaround yet; this appears related to https://bugs.launchpad.net/glance/+bug/794718 | 23:03 |
uvirtbot | Launchpad bug 794718 in glance "S3 requires seekable file. webob versions 0.9.8 through 1.0.7 make_body_seekable() method broken for chunked transfer requests" [Low,Fix committed] | 23:03 |
* s1rp wishes webob had a __version__ | 23:03 | |
*** dantoni has joined #openstack-dev | 23:04 | |
*** gaitan has quit IRC | 23:06 | |
vladimir3p | s1rp/uvirtbot: just went over the code of this fix ... seems like it is related to s3 only, isn't it? | 23:07 |
s1rp | vladimir3p: sort of, that particular issue intersected with the s3 backend, but in general i think webob has some serious issues with chunked transfers, and that's what's causing this... | 23:07 |
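A hedged sketch of the buffering problem being described: webob's make_body_seekable() reads the whole request body into memory, whereas iterating the body file keeps memory use bounded. The chunk size and the store_chunk callable are illustrative:

```python
# Illustrative sketch (requires webob): stream an uploaded image in chunks
# instead of materializing the whole body in memory.
from webob import Request


def store_image(req, store_chunk, chunk_size=64 * 1024):
    # req.make_body_seekable()  # would copy the entire upload into RAM first
    while True:
        chunk = req.body_file.read(chunk_size)
        if not chunk:
            break
        store_chunk(chunk)
```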
s1rp | vladimir3p: could you go ahead and make a bug for this so it's tracked... gotta run at the moment, but should be able to take a look at this later tonight | 23:08 |
vladimir3p | s1rp: ok, thanks. should we try to update webob? | 23:09 |
s1rp | vladimir3p: certainly worth a shot | 23:10 |
vladimir3p | s1rp: ok, thanks | 23:11 |
*** bengrue has joined #openstack-dev | 23:15 | |
*** jkoelker has quit IRC | 23:30 |