*** masber has quit IRC | 00:33 | |
*** btully has joined #openstack-glance | 00:40 | |
*** masber has joined #openstack-glance | 00:41 | |
*** danpawlik has joined #openstack-glance | 00:42 | |
*** danpawlik_ has quit IRC | 00:44 | |
*** btully has quit IRC | 00:45 | |
*** masber has quit IRC | 00:47 | |
*** gyee has quit IRC | 00:56 | |
*** zhurong has joined #openstack-glance | 01:01 | |
*** mvk has quit IRC | 01:57 | |
*** mvk has joined #openstack-glance | 01:59 | |
*** zhurong has quit IRC | 02:00 | |
*** dalgaaf has quit IRC | 02:04 | |
*** dalgaaf has joined #openstack-glance | 02:05 | |
*** gcb has joined #openstack-glance | 02:06 | |
*** pbourke_ has quit IRC | 02:23 | |
*** pbourke_ has joined #openstack-glance | 02:25 | |
*** zhurong has joined #openstack-glance | 02:30 | |
openstackgerrit | Tin Lam proposed openstack/glance master: Admin API policy enforcement contingent on is_admin_project https://review.openstack.org/384655 | 02:37 |
*** masber has joined #openstack-glance | 02:39 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/glance master: Updated from global requirements https://review.openstack.org/525374 | 03:02 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/glance_store master: Updated from global requirements https://review.openstack.org/525003 | 03:02 |
*** links has joined #openstack-glance | 03:07 | |
*** MattMan has quit IRC | 03:09 | |
*** MattMan has joined #openstack-glance | 03:10 | |
*** abhishekk has joined #openstack-glance | 03:23 | |
*** threestrands has joined #openstack-glance | 03:27 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/python-glanceclient master: Updated from global requirements https://review.openstack.org/519151 | 03:31 |
*** rosmaita has quit IRC | 03:37 | |
*** pdeore has joined #openstack-glance | 04:10 | |
*** zhurong has quit IRC | 04:27 | |
abhishekk | smcginnis, by any chance do you know the updated url for http://docs.openstack.org/ops-guide/ops_user_facing_operations.html | 04:43 |
*** nicolasbock has quit IRC | 04:48 | |
*** udesale has joined #openstack-glance | 04:53 | |
*** zhurong has joined #openstack-glance | 04:54 | |
*** ratailor has joined #openstack-glance | 04:56 | |
*** pdeore_ has joined #openstack-glance | 05:01 | |
*** pdeore has quit IRC | 05:03 | |
*** threestrands has quit IRC | 05:25 | |
*** pdeore has joined #openstack-glance | 05:59 | |
*** pdeore_ has quit IRC | 06:01 | |
*** pdeore has quit IRC | 06:18 | |
*** alexchadin has joined #openstack-glance | 06:32 | |
*** pdeore has joined #openstack-glance | 06:38 | |
*** arcolife has joined #openstack-glance | 06:45 | |
*** mine0901 has quit IRC | 06:57 | |
*** udesale__ has joined #openstack-glance | 06:58 | |
*** udesale has quit IRC | 06:58 | |
*** udesale has joined #openstack-glance | 07:00 | |
*** udesale__ has quit IRC | 07:02 | |
*** zhurong has quit IRC | 07:16 | |
*** pcaruana has joined #openstack-glance | 07:32 | |
*** haibinhuang has joined #openstack-glance | 07:33 | |
*** haibinhuang has quit IRC | 07:33 | |
*** rcernin has quit IRC | 07:48 | |
*** zhurong has joined #openstack-glance | 07:50 | |
*** NostawRm has quit IRC | 08:10 | |
*** NostawRm has joined #openstack-glance | 08:10 | |
*** pdeore has quit IRC | 08:15 | |
*** pdeore has joined #openstack-glance | 08:16 | |
*** tesseract has joined #openstack-glance | 08:24 | |
*** rcernin has joined #openstack-glance | 08:33 | |
*** tshefi has joined #openstack-glance | 08:36 | |
*** belmoreira has joined #openstack-glance | 08:37 | |
*** pcaruana has quit IRC | 08:40 | |
*** alexchadin has quit IRC | 08:42 | |
*** e0ne has joined #openstack-glance | 09:39 | |
*** btully has joined #openstack-glance | 09:45 | |
*** namnh has joined #openstack-glance | 09:45 | |
*** btully has quit IRC | 09:49 | |
*** mvk has quit IRC | 09:54 | |
*** arcolife has quit IRC | 10:02 | |
amito | Hi, our cinder CI is failing in the last couple of days. looked in the logs and it seems glance is failing consistently on "BackendException: Cannot find swift service endpoint : The request you have made requires authentication. (HTTP 401)". Any idea? | 10:04 |
kairat | you need to check glance_store settings | 10:06 |
kairat | amito, was it broken recently? | 10:06 |
kairat | i pushed one patch to master recently but there was no release so.. | 10:06 |
amito | Define recently? It worked ok until early morning yesterday | 10:07 |
kairat | amito, check glance_store settings and ensure you are using keystone v3 api | 10:07 |
kairat | i mean how long it has been broken? | 10:08 |
kairat | amito, ^ | 10:08 |
kairat | ah, got it | 10:08 |
amito | Since yesterday morning-noon (UTC+2). It's an automated Jenkins-zuul setup. brings up devstack and the builds are triggered by patch sets. About the usual. | 10:08 |
kairat | check glance_store settings and ensure you are using keystone v3 api | 10:09 |
kairat | this error says that there is no swift endpoint in the service catalog | 10:10 |
amito | it should be under /etc/glance/glance-swift-store.conf ? | 10:10 |
kairat | yep | 10:10 |
amito | auth_version = 3 | 10:10 |
amito | auth_address = http://172.16.82.212/identity/v3 | 10:10 |
kairat | do you have a swift service in services? | 10:10 |
amito | I'll check, one sec | 10:10 |
kairat | what is the service name | 10:11 |
kairat | the error says you need to check the service catalog | 10:11 |
kairat | then check glance_store settings | 10:11 |
kairat | service name, region, etc | 10:12 |
kairat | and clarify why swift endpoint cannot be found | 10:12 |
amito | we have OVERRIDE_ENABLED_SERVICES, but I don't think the swift ones are listed there | 10:13 |
kairat | it is your cloud specific config=) | 10:14 |
kairat | so only keystone service list can help here | 10:14 |
*** arcolife has joined #openstack-glance | 10:14 | |
kairat | the way it was broken says it is related to your conf | 10:15 |
kairat | because there was no release recently for glance store | 10:15 |
kairat | soo | 10:15 |
kairat | unless you are installing from master=) | 10:16 |
amito | I am bringing up my environment from devstack | 10:17 |
amito | I believe devstack takes from master... | 10:19 |
kairat | i don't think so | 10:19 |
kairat | it is for glance_store | 10:20 |
kairat | not glance | 10:20 |
kairat | IIUC it is release based | 10:20 |
kairat | you need to check your conf | 10:20 |
kairat | sorry i have meeting | 10:20 |
kairat | hope i get right direction | 10:21 |
kairat | *gave | 10:21 |
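For reference, the settings kairat is pointing at live in glance-swift-store.conf. A minimal sketch, assuming a single reference section: the section name, user, and key below are placeholders, and the auth_address matches the one amito pasted.

```ini
# Hypothetical glance-swift-store.conf reference section. The swift service
# must also exist in the keystone service catalog (check with
# `openstack service list`), or glance_store cannot resolve the swift
# endpoint and fails as in the 401 error above.
[ref1]
auth_version = 3
auth_address = http://172.16.82.212/identity/v3
user = service:glance
key = <service-password>
```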
*** mvk has joined #openstack-glance | 10:22 | |
*** namnh has quit IRC | 10:29 | |
*** rcernin has quit IRC | 10:30 | |
*** arcolife has quit IRC | 10:34 | |
*** arcolife has joined #openstack-glance | 10:36 | |
*** arcolife has quit IRC | 10:36 | |
*** arcolife has joined #openstack-glance | 10:37 | |
*** trungnv has quit IRC | 10:50 | |
*** trungnv has joined #openstack-glance | 10:51 | |
*** abhishekk has quit IRC | 10:56 | |
*** udesale has quit IRC | 11:29 | |
*** btully has joined #openstack-glance | 11:34 | |
*** btully has quit IRC | 11:38 | |
*** alexchadin has joined #openstack-glance | 11:42 | |
bhagyashris | kairat: Hi, Just want to disscuss regarding the patch https://review.openstack.org/#/c/524060/3/glance/api/v2/image_data.py | 11:47 |
kairat | yep | 11:48 |
bhagyashris | There are two methods to create images using the current master: | 11:48 |
bhagyashris | Method A) | 11:48 |
bhagyashris | POST /v2/images | 11:48 |
bhagyashris | PUT /v2/images/{image_id}/file | 11:48 |
bhagyashris | Method B) | 11:48 |
bhagyashris | POST /v2/images | 11:48 |
bhagyashris | PUT /v2/images/{image_id}/stage | 11:48 |
bhagyashris | POST /v2/images/{image_id}/import | 11:48 |
bhagyashris | The traditional image upload API (PUT /v2/images/{image_id}/file) uses the 'upload_image' policy, which is the same policy | 11:48 |
bhagyashris | used by Method B (POST /v2/images/{image_id}/import), the new image-create-via-import API, via the set_data() method: https://github.com/openstack/glance/blob/master/glance/api/policy.py#L193 | 11:48 |
kairat | sorry | 11:49 |
kairat | i meant another thing | 11:49 |
bhagyashris | kairat: so this set_data() method is common for both the import and upload case | 11:49 |
kairat | if you look at the architecture of v2 | 11:49 |
bhagyashris | so that's why | 11:49 |
kairat | you can see it is layered | 11:50 |
bhagyashris | the policy is enforced at the controller | 11:50 |
kairat | with some pattern | 11:50 |
bhagyashris | yeah | 11:50 |
kairat | so it's better to have all policy checks in the policy layer | 11:50 |
kairat | every time i see different it breaks encapsulation IMO | 11:51 |
kairat | is set_data common? | 11:52 |
bhagyashris | but the set_data() method where the policy is enforced is common to both the import and upload cases | 11:52 |
bhagyashris | yup | 11:52 |
kairat | it doesn't mean you need to use it in another layer | 11:52 |
kairat | it is premature optimization | 11:52 |
kairat | IMO | 11:52 |
bhagyashris | but in the set_data() method the policy name is hard-coded, and that is applied to the import case | 11:53 |
kairat | what I really want is policy checks in policy proxy=) | 11:53 |
bhagyashris | https://github.com/openstack/glance/blob/master/glance/api/policy.py#L193 | 11:54 |
kairat | sorry then you better split it somehow | 11:54 |
bhagyashris | kairat: could you elaborate | 11:55 |
kairat | if image feature implementers used the same method for import and upload | 11:56 |
kairat | and we need different policies for them | 11:56 |
kairat | we need to use two different methods in proxies IMO | 11:57 |
kairat | because it is different operation | 11:57 |
kairat | OR use the same policy | 11:57 |
kairat | it is just my opinion | 11:57 |
kairat | i am bit of context TBH | 11:57 |
kairat | *Out of context | 11:57 |
bhagyashris | ok. but if we use two different methods in proxies then this will need a lot of refactoring in the code because set_data() is used in many places | 11:59 |
kairat | but in ideal world it would be better to have set_data() and set_data_async() | 11:59 |
bhagyashris | kairat: so i have chosen this approach to keep it simple | 12:00 |
kairat | it all can be resolved | 12:00 |
kairat | i don't know what the others core says | 12:00 |
kairat | *will say | 12:00 |
kairat | but this simplicity will lead to big mess | 12:01 |
kairat | IMO | 12:01 |
kairat | instead of layers we break encapsulation | 12:02 |
kairat | if we use two different methods in proxies then this will need a lot of refactoring in the code because set_data() is used in many places | 12:02 |
bhagyashris | kairat: yeah | 12:03 |
kairat | that ^ should have been done when we merged initial feature | 12:03 |
*** nicolasbock has joined #openstack-glance | 12:03 | |
kairat | but as i said you can solve it, for example | 12:04 |
kairat | we can introduce set_data_sync and set_data_async | 12:04 |
kairat | and one of these to be set_data | 12:05 |
kairat | it is just my opinion, nevermind=) | 12:05 |
kairat | but I feel tech debt here | 12:06 |
bhagyashris | kairat: and also set_data() is defined in different modules like location.py, proxy.py, policy.py and notifier.py, so if we create set_data_async() for the import case then all of these modules will need the implementation, and that will be a big change | 12:14 |
kairat | so "requires a big change" is not a reason for a bad approach, yeah? | 12:15 |
kairat | we need to try to do the proper thing IMO | 12:16 |
bhagyashris | kairat: yeah, i would also like to hear other cores opinion | 12:18 |
kairat | ++ | 12:19 |
kairat | i am former core TBH | 12:19 |
kairat | so nevermind =) | 12:19 |
bhagyashris | kairat: ok. thank you for review and your inputs :) | 12:21 |
openstackgerrit | Bhagyashri Shewale proposed openstack/glance master: Add 'import_image' policy for import image API https://review.openstack.org/524060 | 12:23 |
openstackgerrit | Bhagyashri Shewale proposed openstack/glance master: Add 'stage_image' policy https://review.openstack.org/525578 | 12:23 |
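The layering bhagyashris and kairat are debating can be sketched roughly as follows. This is a hypothetical toy, not glance code: the `Enforcer` below stands in for oslo.policy, and `set_data_via_import` is a made-up name for the separate proxy method kairat suggests, so each operation gets its own policy check inside the policy layer rather than at the controller.

```python
class PolicyViolation(Exception):
    """Raised when a context lacks the role an action requires."""


class Enforcer:
    """Toy stand-in for oslo.policy: maps action name -> required role."""

    def __init__(self, rules):
        self.rules = rules

    def enforce(self, context, action):
        required = self.rules.get(action)
        if required and required not in context.get("roles", []):
            raise PolicyViolation(action)


class ImagePolicyProxy:
    """Policy layer wrapping a core image object (cf. glance/api/policy.py)."""

    def __init__(self, image, context, enforcer):
        self.image = image
        self.context = context
        self.enforcer = enforcer

    def set_data(self, data):
        # Traditional upload path: PUT /v2/images/{image_id}/file
        self.enforcer.enforce(self.context, "upload_image")
        self.image.set_data(data)

    def set_data_via_import(self, data):
        # Import path: POST /v2/images/{image_id}/import gets its own
        # policy instead of reusing the hard-coded 'upload_image' check.
        self.enforcer.enforce(self.context, "import_image")
        self.image.set_data(data)
```

With a split like this, 'upload_image' and 'import_image' (the policy the patch under review adds) are enforced in the same layer, without one hard-coded policy name covering both paths.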
*** alexchadin has quit IRC | 12:34 | |
*** alexchadin has joined #openstack-glance | 12:35 | |
*** zhurong has quit IRC | 12:36 | |
*** alexchadin has quit IRC | 12:40 | |
*** alexchadin has joined #openstack-glance | 12:41 | |
*** ratailor has quit IRC | 12:43 | |
*** mvenesio has joined #openstack-glance | 12:43 | |
*** zhurong has joined #openstack-glance | 12:45 | |
*** zhurong has quit IRC | 13:02 | |
*** zhurong has joined #openstack-glance | 13:02 | |
*** rosmaita has joined #openstack-glance | 13:03 | |
mvenesio | Hi guys, i'm having some issues setting up the glance-scrubber over SSL; i'm getting this error: "Can not get scrub jobs from queue: [SSL: UNKNOWN_PROTOCOL]" | 13:03 |
mvenesio | anyone knows how to do it correctly ? | 13:04 |
*** alexchadin has quit IRC | 13:09 | |
kairat | mvenesio, afaik it just does not support ssl | 13:11 |
kairat | wait a sec | 13:12 |
kairat | i will check the patch that was supposed to do it | 13:12 |
mvenesio | kairat: thanks, that's weird because there are several options that seem to relate to it, like https_ca_certificates_file or https_insecure = true | 13:14 |
mvenesio | kairat: but none of them is working for me | 13:14 |
kairat | sorry, i can't find this patch, i just remember that we needed port scrubber to requests library to support ssl | 13:19 |
kairat | and it had not happened | 13:20 |
kairat | i can recommend to open a bug or find a similar one | 13:20 |
*** alexchadin has joined #openstack-glance | 13:21 | |
mvenesio | kairat: do you have some documentation, or bug report that inform this SSL issue with the scrubber ? | 13:21 |
kairat | no | 13:21 |
*** links has quit IRC | 13:22 | |
kairat | it seems a good point to raise on glance meeting | 13:26 |
*** zhurong has quit IRC | 13:27 | |
*** udesale has joined #openstack-glance | 13:32 | |
jokke_ | mvenesio: there is lots of unrelated clutter in that scrubber config file due to automatic config generation :( | 13:36 |
jokke_ | I think kairat is right, it's not supported | 13:36 |
*** e0ne_ has joined #openstack-glance | 13:40 | |
mvenesio | jokke_: i understand, it's just that it would be great to have some kind of bug report or blueprint to show my client that it's not supported. | 13:41 |
mvenesio | jokke_: maybe i can create a new bug report and wait an answer | 13:41 |
kairat | i remember there was a patch | 13:42 |
kairat | but have no time to find it | 13:42 |
mvenesio | kairat: no problem if you can do it later or tomorrow i'll be around here for a couple of days | 13:43 |
mvenesio | kairat: thanks for your help | 13:43 |
*** e0ne has quit IRC | 13:43 | |
jokke_ | mvenesio: yeah, makes sense ... the scrubber documentation is not really that helpful | 13:46 |
bhagyashris | Hi all, can anyone help me define property protection roles for a service user (like nova or cinder) in property-protection-roles.conf, for a specific property? | 13:47 |
jokke_ | mvenesio: the scrubber is currently under quite a bit of refactoring ... not sure if the change merged yet | 13:47 |
bhagyashris | I mean I want to give create, update and delete access to, for example, an xyz property for the admin user and the service user | 13:48 |
rosmaita | bhagyashris: does the service user have any particular roles in keystone? you could pick one of those | 13:49 |
rosmaita | or if it doesn't you could create one and assign it to the service user | 13:49 |
jokke_ | bhagyashris: the service user awareness has not been implemented into glance yet so unless oslo_policy can do it without us intervening/providing any extra info I think you can't | 13:50 |
*** arcolife has quit IRC | 13:52 | |
rosmaita | jokke_: i added a note about what's going on with python-glanceclient at the bottom of https://etherpad.openstack.org/p/glance-queens-Q2 | 13:53 |
jokke_ | rosmaita: oh damn | 13:59 |
rosmaita | no kidding | 14:00 |
rosmaita | i'm working on fixing the "legacy" tests now | 14:00 |
rosmaita | but i wasted a lot of time yesterday with the upstream json patch stuff | 14:01 |
jokke_ | and because the legacy is failing we can't even block that from the requirements | 14:02 |
bhagyashris | rosmaita: ok. I have just checked by creating a snapshot/image of an instance from nova, and the req.context coming from nova in create image shows req.context.roles as [u'anotherrole', u'member'] | 14:02 |
bhagyashris | rosmaita: so does that mean 'anotherrole' and 'member' are service user roles? | 14:03 |
rosmaita | bhagyashris no, those are sample roles created in devstack | 14:05 |
rosmaita | i think all the default users have them | 14:05 |
rosmaita | as a short term thing to do your testing, you could create a 'service_user' role in keystone and only assign it to the users you want | 14:06 |
rosmaita | or, you could ask in keystone channel what is the best way to recognize a service user | 14:06 |
bhagyashris | rosmaita: ohh ok. | 14:06 |
bhagyashris | rosmaita, jokke_: thank you :) | 14:07 |
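A sketch of what rosmaita's short-term suggestion would look like in the property protections config, assuming a hypothetical 'service_role' role has been created in keystone and assigned to the nova/cinder service users; the section header is a regex matching the property name.

```ini
# Hypothetical property-protections snippet (roles-based):
# 'service_role' is a made-up role name used for illustration.
[^xyz$]
create = admin,service_role
read = admin,service_role
update = admin,service_role
delete = admin,service_role
```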
jokke_ | rosmaita: do you have moment? trying to get some sense of this fecking mess again :P | 14:08 |
rosmaita | which mess would that be? :) | 14:08 |
rosmaita | (we have several) | 14:09 |
jokke_ | rosmaita: our favorite, onion layers | 14:09 |
jokke_ | so that's related to Abhishek's patch 523366 | 14:11 |
jokke_ | I think that ImageProxy which our quota implements is just fundamentally broken | 14:14 |
rosmaita | ok, give me a minute to pull up the patch | 14:15 |
*** abhishekk has joined #openstack-glance | 14:15 | |
rosmaita | found it, give me a few min to look it over before hearing your concerns | 14:18 |
rosmaita | jokke_ : ready to discuss | 14:24 |
jokke_ | so this is fecking horrible no matter how you look at it | 14:26 |
jokke_ | in our location code we implement one version of that set_data, that checks that our image data does not exceed the max configured image size | 14:27 |
jokke_ | with LimitingReader | 14:28 |
jokke_ | we do that same in our quota code to check that we don't exceed the data quota set | 14:28 |
jokke_ | these pass exactly the same ImageSizeLimitExceeded exception up the layers | 14:29 |
jokke_ | so if we go through that quota layer, currently we just get a StorageQuotaFull exception which might be either the quota check or the ImageSizeCap; with Abhishek's patch we will get ImageSizeLimitExceeded up to the upload call in image_data, but we still don't know which one it was that failed | 14:31 |
rosmaita | i think abhishekk 's patch will give us the correct exception though? is that right? | 14:33 |
jokke_ | nope | 14:33 |
rosmaita | oh | 14:33 |
jokke_ | that's my point | 14:33 |
rosmaita | give me a minute to look again | 14:33 |
jokke_ | if we go through the quota layer with that patch we will never get the quota full exception | 14:33 |
openstackgerrit | Merged openstack/glance_store master: Updated from global requirements https://review.openstack.org/525003 | 14:34 |
jokke_ | we will just get the image size limit exceeded exception but we still don't know if it came from quota or from the image size cap limiter | 14:34 |
abhishekk | jokke_: if you check line #336 you will get QuotaFullException from there | 14:34 |
*** alexchadin has quit IRC | 14:35 | |
jokke_ | abhishekk: that gets raised only if that limiting reader set on line 302 doesn't barf first | 14:36 |
jokke_ | as in if we don't know the size of the data coming in | 14:36 |
abhishekk | why are we catching ImageSizeLimitExceeded and throwing StorageQuotaFull? that is quite confusing | 14:38 |
jokke_ | abhishekk: because that is setting the limit to the quota left | 14:38 |
jokke_ | on the LimitingReader | 14:38 |
*** pdeore has quit IRC | 14:39 | |
jokke_ | now the problem is that the LimitingReader is used twice: once in that quota implementation and a second time in our location code that checks the image size cap from configuration. Now, regardless of whether we use the current or the proposed code, we have no idea which one actually raised the exception | 14:40 |
*** udesale has quit IRC | 14:40 | |
abhishekk | ohh | 14:41 |
jokke_ | because it's nesting the same function inside itself with two different limit values | 14:41 |
jokke_ | like said ... the shit is fundamentally broken there | 14:42 |
abhishekk | sorry, need to rush, please add your comments on the patch, I will have a look once reached home | 14:42 |
jokke_ | abhishekk: sure | 14:42 |
abhishekk | jokke_: thank you | 14:42 |
*** abhishekk has quit IRC | 14:43 | |
jokke_ | rosmaita: are you on board on that analysis? | 14:43 |
rosmaita | i'm on board with the "shit is fundamentally broken" analysis | 14:43 |
rosmaita | yeah, the limiting reader is designed to detect image size limit exceeded, but it's being used to detect storage quota full | 14:44 |
rosmaita | jeez | 14:45 |
rosmaita | someone did a bad hack there, i hope it wasn't me | 14:45 |
jokke_ | rosmaita: yup ... you see, great java coding ... lets reuse and reimplement everything everywhere and not care about the consequences | 14:45 |
rosmaita | so you are saying we should wrap the LimitingReader inside a QuotaLimitingReader to fix this? | 14:47 |
rosmaita | :) | 14:47 |
jokke_ | rosmaita: so one way to fix this is that we pass the exception class to LimitingReader (and make sure those exceptions are initialized same way) so in the quota code we would do something like "data = utils.LimitingReader(data, remaining, exc=exception.StorageQuotaFull)" | 14:49 |
jokke_ | and the limiting reader would initialize and raise exc instead of hardcoded ImageSizeLimitExceeded | 14:49 |
*** udesale has joined #openstack-glance | 14:50 | |
jokke_ | that way if we utilize it somewhere else we could define what we want it to throw at us when it barfs | 14:50 |
jokke_ | but yeah, multiple nested layers of the same object rewritten, and no-one bothered to write functional tests for them to see that we ever get the exception out we expect | 14:52 |
jokke_ | fecking proxyclasses | 14:52 |
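jokke_'s suggested fix can be sketched like this. It is a simplified stand-in for glance's LimitingReader, not the real implementation: the reader takes the exception class to raise, so the quota layer can wrap data with StorageQuotaFull while the size-cap layer keeps ImageSizeLimitExceeded, and the caller can finally tell the two failures apart.

```python
class ImageSizeLimitExceeded(Exception):
    """The configured image_size_cap was exceeded."""


class StorageQuotaFull(Exception):
    """The per-tenant storage quota was exceeded."""


class LimitingReader:
    """Wrap an iterable of byte chunks; raise once `limit` bytes are read.

    The exception class is injected by the caller, so nested uses of the
    reader (quota layer vs. image-size-cap layer) surface distinct errors
    instead of one hard-coded ImageSizeLimitExceeded.
    """

    def __init__(self, data, limit, exc_class=ImageSizeLimitExceeded):
        self.data = data
        self.limit = limit
        self.exc_class = exc_class
        self.bytes_read = 0

    def __iter__(self):
        for chunk in self.data:
            self.bytes_read += len(chunk)
            if self.bytes_read > self.limit:
                raise self.exc_class()
            yield chunk
```

The quota code would then do something like `data = LimitingReader(data, remaining, exc_class=StorageQuotaFull)`, exactly as jokke_ describes, while the location code keeps the default.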
*** gcb has quit IRC | 14:53 | |
rosmaita | ok, we need to think about this some more | 14:53 |
rosmaita | but yeah, there is bad mojo going on there | 14:53 |
*** gcb has joined #openstack-glance | 14:54 | |
*** udesale has quit IRC | 14:59 | |
jokke_ | rosmaita: there is couple of Abhishek's fix patches waiting for +2A if you have time | 15:07 |
rosmaita | ok, will make that a priority for next few hours | 15:07 |
*** btully has joined #openstack-glance | 15:11 | |
*** btully has quit IRC | 15:16 | |
openstackgerrit | Merged openstack/glance master: Updated from global requirements https://review.openstack.org/525374 | 15:16 |
jokke_ | afk for ~20min | 15:16 |
*** udesale has joined #openstack-glance | 15:36 | |
*** udesale has quit IRC | 15:41 | |
*** udesale has joined #openstack-glance | 15:42 | |
jokke_ | b | 15:47 |
*** openstackgerrit has quit IRC | 15:48 | |
*** belmoreira has quit IRC | 16:04 | |
*** udesale has quit IRC | 16:23 | |
*** e0ne_ has quit IRC | 16:33 | |
*** mvk has quit IRC | 16:42 | |
*** tshefi has quit IRC | 17:10 | |
*** mvk has joined #openstack-glance | 17:14 | |
*** pcaruana has joined #openstack-glance | 17:22 | |
*** pcaruana has quit IRC | 17:26 | |
*** pcaruana has joined #openstack-glance | 17:27 | |
*** pcaruana has quit IRC | 17:58 | |
*** jose-phillips has quit IRC | 18:02 | |
*** jose-phillips has joined #openstack-glance | 18:04 | |
*** mvenesio has quit IRC | 18:16 | |
*** mvenesio has joined #openstack-glance | 18:17 | |
*** tesseract has quit IRC | 18:21 | |
*** jose-phillips has quit IRC | 18:23 | |
*** btully has joined #openstack-glance | 18:49 | |
*** btully has quit IRC | 18:53 | |
*** mvenesio has quit IRC | 19:10 | |
*** jose-phillips has joined #openstack-glance | 19:20 | |
*** jose-phillips has quit IRC | 19:24 | |
*** jose-phillips has joined #openstack-glance | 19:27 | |
*** kuzko has quit IRC | 19:54 | |
*** kuzko has joined #openstack-glance | 19:56 | |
*** kuzko has quit IRC | 20:04 | |
*** kuzko has joined #openstack-glance | 20:14 | |
*** e0ne has joined #openstack-glance | 20:25 | |
*** gyee has joined #openstack-glance | 20:44 | |
*** nicolasbock has quit IRC | 21:10 | |
*** e0ne has quit IRC | 21:25 | |
*** threestrands has joined #openstack-glance | 22:05 | |
*** threestrands has quit IRC | 22:05 | |
*** threestrands has joined #openstack-glance | 22:05 | |
*** rcernin has joined #openstack-glance | 22:05 | |
*** McClymontS has joined #openstack-glance | 22:32 | |
*** McClymontS has quit IRC | 22:33 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!