b-prajakta | Hi, I am new to Launchpad | 07:18 |
---|---|---|
b-prajakta | I want to contribute to OpenStack by resolving a few bugs | 07:25 |
*** marlinc- is now known as marlinc | 07:26 | |
b-prajakta | But when I try to self-assign bugs, I am not able to do it, even if the Assignee is set to None | 07:26 |
b-prajakta | Please, can someone help me with this? | 07:26 |
b-prajakta | Is there any step that I am missing? | 07:26 |
b-prajakta | I am not able to see the "assign yourself" option when I right-click on Assignee - None | 07:26 |
frickler | b-prajakta: depending on the project you may need additional privileges in order to do that. you can try to talk to people in that project in their channel about it. or just write a comment in the bug report. or simply submit a review referencing the bug id | 07:46 |
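For readers following frickler's last suggestion: in the OpenStack Gerrit workflow, a review is tied to a Launchpad bug by a footer in the commit message. A minimal illustration (the subject line and bug number are placeholders):

```
Fix volume status race on retype

Closes-Bug: #1234567
Change-Id: I0123456789abcdef0123456789abcdef01234567
```

The Change-Id line is added automatically by Gerrit's commit-msg hook; `Partial-Bug:` and `Related-Bug:` are the other commonly used footers when a change does not fully close the bug.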
marlinc | What could be the cause of 'greenlet.error: cannot switch to a different thread' error messages? I could imagine seeing this in the Cinder API, especially when running under something like uWSGI, but this is happening when running cinder-volume and cinder-scheduler | 08:42 |
*** rlandy|out is now known as rlandy | 10:23 | |
marlinc | Looks like this might be the cause https://github.com/eventlet/eventlet/issues/432 | 10:45 |
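For context on the linked eventlet issue: this error typically appears when a greenlet ends up being resumed from a different OS thread than the one it was created in, for example when native threads already exist before eventlet monkey-patching happens. A minimal sketch of the usual import-ordering mitigation (illustrative only, not Cinder's actual entry point):

```python
# Minimal sketch: monkey-patch before anything else spawns native threads,
# so threading/socket/etc. are green from the start. Illustrative only --
# this is not Cinder's actual startup code.
import eventlet
eventlet.monkey_patch()

import threading  # after the patch, Thread is backed by a green thread


def worker():
    print("running in a green thread")


t = threading.Thread(target=worker)
t.start()
t.join()
```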
jamesbenson | jrwr: ping | 15:24 |
*** rlandy is now known as rlandy|ruck | 16:25 | |
*** rlandy|ruck is now known as rlandy|rover | 18:30 | |
muzicar | Hi. I've been reading around a bit and haven't yet come to a conclusion about swauth and keystone - I inherited a 24TB OpenStack Swift object storage cluster that has been neglected - still on Ubuntu 14.04 and on Swift 1.3.1. I have a few projects in the queue that would require keystone as auth instead of swauth, but there are quite a few projects still running in production that use swauth. My question is - can both keystone and swauth be used at the same time? | 19:23 |
*** timburke__ is now known as timburke | 19:24 | |
timburke | muzicar, yes, keystone and swauth can both be used in the same proxy pipeline. it's been a while since i did it, but i wouldn't be surprised if the pipeline order was significant | 19:29 |
timburke | probably want it like '... authtoken keystoneauth swauth ... proxy-server' but if you've got a dev or staging environment, it's worth verifying | 19:33 |
timburke | note that you'll definitely want delay_auth_decision=true for authtoken | 19:35 |
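A sketch of what that could look like in proxy-server.conf, following the ordering timburke suggests. This is untested; the surrounding middleware, hostnames, and credentials are placeholders, and as noted above the pipeline order should be verified in a staging environment:

```ini
[pipeline:main]
# keystone middleware first, swauth after it, per the suggestion above
pipeline = catch_errors proxy-logging cache authtoken keystoneauth swauth proxy-logging proxy-server

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://keystone.example.com:5000
auth_url = http://keystone.example.com:5000
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = CHANGEME
# let requests without a keystone token fall through to swauth
delay_auth_decision = true

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, member

[filter:swauth]
use = egg:swauth#swauth
super_admin_key = CHANGEME
```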
muzicar | Great, cheers for that. I currently don't have any test/dev env, but I could probably spin something up. I checked the Swift changelog and I didn't notice any clear "deprecation" of swauth - I know the project itself is deprecated - but from what I gather swauth works with the latest OpenStack Swift without any issues - am I on the right track here? | 19:37 |
muzicar | (the track being that I would also bump swift before adding keystone) | 19:38 |
timburke | as far as i'm aware -- but that's not saying much | 19:41 |
timburke | i *do* know that if you're looking to bump python versions, too, it may get hairy. swauth was retired in part because it hadn't been ported to py3 | 19:41 |
muzicar | ah good point. the latest swift is the last one that supports python 2.7 so that is indeed something to keep in mind. | 19:44 |
timburke | still, getting to 2.29.1 is going to be *way* better than being stuck on 1.3, even if you have to be there a while | 19:47 |
timburke | is there any plan to migrate the swauth users to keystone, once you've got both enabled? | 19:48 |
muzicar | would there be any benefits to that since all the swauth users would keep on using swauth for auth? | 19:50 |
timburke | muzicar, main thing i was thinking of was being able to some day get off py2. fwiw, i took a stab at adding py3 support to swauth at https://github.com/tipabu/swauth -- i haven't actually functionally tested it, though | 20:13 |
muzicar | that would be wise indeed. Is there a way to migrate swauth users to keystone? | 20:54 |
timburke | muzicar, hand out new keystone creds and move data between accounts. could be a little painful, though at 24T, it could be worse. if you create a "reseller admin" on the keystone side, that user will be able to do cross-account copies so you don't even need to stream the data through the client | 21:47 |
timburke | at 24T, copying all data down to a couple hard drives then wiping and rebuilding the cluster isn't off the table | 21:50 |
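To illustrate the cross-account copy timburke describes: with a reseller-admin token, Swift's COPY verb can move an object between accounts entirely server-side, so the data never streams through the client. A hypothetical sketch (endpoint, account, container, object names, and the token are all placeholders):

```python
# Hypothetical sketch of a server-side cross-account copy in Swift.
# The COPY request runs inside the cluster; nothing passes through the client.
# Endpoint, account names, paths, and token are placeholders.
import requests

SWIFT_ENDPOINT = "https://swift.example.com/v1"
SOURCE = f"{SWIFT_ENDPOINT}/AUTH_old_swauth_account/old-container/object.dat"

resp = requests.request(
    "COPY",
    SOURCE,
    headers={
        "X-Auth-Token": "RESELLER_ADMIN_TOKEN",          # reseller-admin credentials
        "Destination": "/new-container/object.dat",      # /container/object in the target account
        "Destination-Account": "AUTH_keystone_project",  # target (keystone) account
    },
)
resp.raise_for_status()
print(resp.status_code)
```

The same operation can also be expressed as a PUT to the destination with X-Copy-From and X-Copy-From-Account headers.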
muzicar | Didn't even think about rebuilding tbh, but I suppose that would also be viable. I guess I'll have a better view of things once I get a full picture of exactly which projects have this, and which can either be retired or whose owners would be willing to switch to keystone. | 22:04 |
muzicar | ah sorry... I forgot the 0 at the end :) it's a 240TB cluster, not 24TB, and it's at about 23% | 22:06 |
timburke | that makes more sense :-) i was going to say, 24T's not so far off from what i've got in my garage | 22:12 |
timburke | 23% full is a great spot to be, though -- you've got the option to do a complete copy without deleting as you go | 22:13 |
*** rlandy|rover is now known as rlandy|rover|bbl | 22:29 | |
muzicar | Valid point. Cheers for all the info. I might swing by here again once I get a bit deeper into this, but I definitely have a better picture as to what can be done :) | 22:35 |
timburke | sounds good. also feel free to drop in #openstack-swift -- you'll probably catch some other swift folks that might not be in here | 23:42 |