*** lajoskatona_ is now known as lajoskatona | 00:31 | |
opendevreview | James E. Blair proposed zuul/zuul-jobs master: Add upload-logs-ibm role https://review.opendev.org/c/zuul/zuul-jobs/+/826158 | 00:40 |
*** luigit is now known as luigi | 07:59 | |
*** amoralej|off is now known as amoralej | 08:05 | |
opendevreview | Merged opendev/infra-manual master: Update recommended ACL for createSignedTag keyword https://review.opendev.org/c/opendev/infra-manual/+/826358 | 08:29 |
*** anbanerj is now known as frenzyfriday | 08:35 | |
*** jpena|off is now known as jpena | 08:38 | |
*** rlandy|out is now known as rlandy|ruck | 11:12 | |
*** dviroel|afk is now known as dviroel | 11:21 | |
mnasiadka | Does anybody have an idea why DIB CI nodepool-functional jobs are timing out (all of them) in https://review.opendev.org/c/openstack/diskimage-builder/+/825957 ? | 11:39 |
*** odyssey4me is now known as Guest652 | 12:40 | |
*** sboyron_ is now known as sboyron | 12:41 | |
lourot | o/ not sure if you have noticed https://review.opendev.org/c/openstack/project-config/+/825089 - thanks! | 12:50 |
*** amoralej is now known as amoralej|lunch | 13:09 | |
fungi | mnasiadka: if it's the one i'm thinking of, it needs updating to do zk over tls. i started trying to update it here: https://review.opendev.org/790826 | 13:10 |
*** amoralej|lunch is now known as amoralej | 13:59 | |
*** dviroel is now known as dviroel|lunch | 15:01 | |
clarkb | fungi: mnasiadka https://zuul.opendev.org/t/openstack/build/e60e7cd382834a679b22f419947a4c93/log/docker/nodepool_nodepool-launcher_1.txt#10470 it couldn't launch a node because of a python error. I strongly suspect this is an openstacksdk bug | 16:16 |
clarkb | because they apparently rewrote everything | 16:17 |
*** dviroel|lunch is now known as dviroel | 16:23 | |
fungi | artom: ^ maybe worth a look | 16:27 |
artom | fungi, I think you mean gtema? | 16:28 |
fungi | oh, yes sorry! gtema ^ | 16:28 |
*** artom is now known as artom_not_the_sdk_one | 16:28 | |
artom_not_the_sdk_one | ;) | 16:28 |
*** artom_not_the_sdk_one is now known as artom | 16:29 | |
fungi | d'oh ;) | 16:29 |
gtema | oki, looking. | 16:29 |
gtema | I expect I would need to make some patches to nodepool | 16:30 |
gtema | I guess it would anyway make sense to cap the sdk version for zuul, since ansible 2.9 will stop working properly with the new version (as announced some months ago) | 16:32 |
clarkb | maybe I'm crazy, but this sort of thing makes me think we need to fork shade back out of sdk | 16:33 |
clarkb | the whole point of shade was to avoid these problems | 16:33 |
clarkb | and now we've reintroduced them | 16:33 |
clarkb | note that nodepool and ansible don't interact. It is zuul's ansible and the sdk installed for it that would need to be pinned for those issues | 16:34 |
clarkb | But again, shade existed because these problems in the tools were pervasive and users (us) found it extremely frustrating to use the tools | 16:34 |
gtema | but nobody is going to keep maintaining shade - it is a nightmare given the evolution of all the services | 16:35 |
clarkb | well I think the real nightmare is breaking users constantly | 16:36 |
clarkb | because it doesn't matter if no one wants to use your software | 16:36 |
gtema | what you are saying is more or less "never touch a running system" (no offence) | 16:36 |
clarkb | no that's not true | 16:37 |
clarkb | we made this work for many years with shade | 16:37 |
clarkb | papering over all the warts (which absolutely do exist) | 16:37 |
clarkb | it was work and it wasn't always fun (glance image upload tasks ugh) | 16:37 |
clarkb | But it was also important that end users could use cloud apis in a portable manner without thinking too hard or needing various pins. I mean what are we supposed to do if one version of sdk supports cloud foo and another supports cloud bar | 16:38 |
fungi | in actuality it was code inside nodepool. we extracted it from nodepool and named it shade | 16:38 |
gtema | and that is exactly what I am trying to improve. When there are 20000 different interfaces (mostly for doing the same thing) that cannot all be maintained, you reach the point of not having fun with it | 16:39 |
fungi | we could in theory just re-absorb it into nodepool | 16:39 |
clarkb | gtema: I don't think shade had 2000 different interfaces though. It had "upload image" "boot instance" and so on. It was very high level and intentionally so with an attempt to try and paper over this stuff | 16:39 |
gtema | fungi - it will not help for ansible. | 16:39 |
clarkb | and it was my understanding that the sdk intended to keep that functionality otherwise I would never have been on board with the two merging | 16:39 |
gtema | clarkb: shade - no, but once it was spun off it absorbed tons of other cases for ansible | 16:40 |
fungi | gtema: that wasn't ansible where we errored | 16:41 |
gtema | and since we are not able to update modules in ansible 2.9 any more we also can not make any changes in sdk | 16:41 |
clarkb | it is also possible nodepool grew more non shade uses of the api | 16:41 |
gtema | fungi - this particular case sure. I am talking generally | 16:41 |
clarkb | But I do think it is worth thinking about the impact to end users here if they need different sdk versions to talk to two different clouds | 16:41 |
clarkb | Specifically if we are breaking interfaces then that becomes much more difficult for end users to deal with | 16:41 |
gtema | I am not capable of maintaining multiple different interfaces in sdk for the same thing while also extending coverage | 16:42 |
fungi | the reason that code came into existence at all is that we are operating a multi-cloud service which talks to probably a dozen different versions of openstack simultaneously. we thought it might be nice if there was an external api others could use to do the same thing, but if it's not really externally maintainable then we should reconsider that choice and possibly put it back inside nodepool | 16:42 |
fungi | where it came from | 16:42 |
clarkb | ya I guess that's reasonable. Basically move the two different versions of sdk concerns back into a porcelain library | 16:42 |
gtema | correct, but once there are a few ways to achieve the same thing you cannot maintain it reasonably anymore | 16:43 |
gtema | I really try to keep it as backward compatible as possible, but there are many dead bodies that are not funny | 16:43 |
clarkb | gtema: I agree it is difficult. But at the same time if no one wants to use the tools because they break often or don't support the clouds they need to talk to, we aren't really accomplishing much either. It is definitely frustrating that despite having common APIs clouds are able to differ in common tasks so much (image upload is an excellent example) | 16:44 |
gtema | "often" - what exactly you want to say here? | 16:44 |
clarkb | gtema: interfaces change so my code no longer works (what happened to nodepool), and a new cloud has new APIs, but to support that I need to update sdk, and if I do that I can no longer talk to my existing old cloud | 16:45 |
gtema | and going back to the image upload case: It is not sustainable to maintain image upload for nodepool, image upload for osc, image upload for ansible | 16:45 |
clarkb | The first happens due to changes in the library interfaces. The second happens when APIs/functionality for older installations is removed | 16:45 |
clarkb | gtema: why do you need an image upload for each of them? | 16:45 |
clarkb | gtema: there should be one high level interface that works for the 80% case | 16:46 |
clarkb | (this is what shade did) | 16:46 |
gtema | because it is currently similar to that: there is the "cloud" layer interface, there is the "proxy" interface, and then there is still the possibility to do direct operations bypassing all resources | 16:46 |
clarkb | it made reasonable decisions based on the cloud you were talking to. It didn't need to do everything | 16:46 |
gtema | and nodepool/ansible/osc are currently all using different things | 16:46 |
clarkb | gtema: so are we removing everything but the low level primitives then? | 16:47 |
gtema | I decided to normalise this by switching all those usages to proxy | 16:47 |
gtema | to have real support for microversions | 16:47 |
clarkb | ok so shade really is being removed (that was the cloud layer) | 16:48 |
gtema | no, we are not removing it, but inside of the call I rewrote it to use the proxy layer (which was not done consistently before) | 16:48 |
clarkb | ah ok | 16:48 |
gtema | the limits example is a separate matter - every service implements limits differently | 16:48 |
fungi | so for that specific error, is max_total_instances no longer an available attribute? was it deprecated previously and we missed it? or not supposed to be part of the public api? | 16:49 |
gtema | also here I worked on introducing a common class for that so that every user can "trust" the interface of limit/quota/etc for any service | 16:49 |
gtema | it is there, but most likely needs to be fetched from a different place | 16:49 |
gtema | lemme check pls | 16:49 |
clarkb | fwiw I'm more worried about how changing interfaces causes me to have to choose between different clouds if I want to use sdk than about fixing specific updates for an interface | 16:50 |
fungi | so just a simple case of missing a backward-compatible alias for that attribute? | 16:50 |
clarkb | they are related problems though | 16:50 |
gtema | fungi: that's possible, but not necessarily | 16:51 |
clarkb | also this isn't a theoretical concern. To talk to the rackspace volume api we need cinder v1 which isn't in openstacksdk anymore aiui | 16:53 |
clarkb | this means that any tools we build to do volume management (like our cloud launcher script) will need to run with different versions of sdk potentially | 16:54 |
clarkb | and either have different implementations or some sort of shade-like layer to accommodate differences | 16:54 |
gtema | volumev1 was actually never in sdk | 16:54 |
clarkb | hrm are you sure? our cloud launcher uses sdk and definitely makes volumes against rax | 16:55 |
gtema | I have never seen it | 16:55 |
clarkb | oh it might just rely on the nova api for boot from volume though | 16:55 |
clarkb | since nova can request a volume and delete it when the instance is deleted | 16:56 |
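A minimal sketch of the boot-from-volume pattern clarkb describes, using the sdk cloud layer; the cloud name, image, flavor, and sizes are placeholders, and the keyword arguments are from memory so treat them as an assumption rather than a definitive signature:

```python
import openstack

# Connect using a clouds.yaml entry; "rax" is a hypothetical cloud name.
conn = openstack.connect(cloud="rax")

# Ask nova to create the volume at boot time and delete it with the server,
# so the client never has to call the volume (cinder) API directly.
server = conn.create_server(
    name="example-server",
    image="example-image",
    flavor="example-flavor",
    boot_from_volume=True,   # boot from a newly created volume
    volume_size=80,          # assumed size in GB for that volume
    terminate_volume=True,   # delete the volume when the server is deleted
    wait=True,
)
print(server.id, server.status)
```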
gtema | and the issue on that side is that we are not even able to test things because services dropped them completely | 16:56 |
gtema | nova support for networking is still in | 16:56 |
fungi | yes, the bigger pain is getting the openstack cli to work with volume subcommands for rackspace's cinder v1 api | 16:56 |
fungi | i've resorted to having multiple versions of the cli installed for talking to different clouds | 16:57 |
gtema | so, max_total_instances: in reality the resource has been changed (to accommodate service evolution) - it now has absoluteLimits and rateLimits | 16:57 |
fungi | with no backward-compatibility alias for the old attribute, i guess | 16:58 |
gtema | I can make a "special" conversion for the nodepool case and mark it "red" in the code to keep it working | 16:58 |
fungi | that probably should have followed deprecation guidelines if it was intentional | 16:58 |
fungi | nodepool may not be the only consumer of that attribute | 16:59 |
gtema | fungi - deprecation when we have never reached a major release is also something complex | 16:59 |
gtema | the work is exactly to finally normalize interfaces and make r1 | 16:59 |
gtema | nodepool is easy to fix, I have more worries on the zuul (ansible) side | 17:00 |
fungi | ansible... testing side? zuul doesn't use openstacksdk | 17:01 |
gtema | the modules were not written quite correctly (or basically ansible makes some assumptions about objects) - it tries to modify them | 17:01 |
gtema | and we may have attributes that are not allowed to be modified (read only) | 17:01 |
gtema | fungi - yes, ansible on the jobs side | 17:02 |
fungi | got it | 17:02 |
clarkb | fwiw I don't think many jobs use ansible + openstacksdk without nested ansible | 17:02 |
clarkb | but I could be wrong about that | 17:02 |
clarkb | It's definitely a concern for ansible + sdk wherever that may be though | 17:03 |
gtema | not in opendev afaik, but users might do this | 17:03 |
fungi | gtema: anyway, i'm happy to propose a patch to nodepool for the new openstacksdk interfaces if you have a list of which old attributes were removed/replaced | 17:03 |
fungi | in the meantime we likely do need to pin openstacksdk to the previous release | 17:03 |
gtema | fungi - I have EOB here, will double-check overnight/early tomorrow | 17:04 |
clarkb | well I think those jobs intentionally deploy sdk from source | 17:04 |
clarkb | so pinning might be the wrong choice. Not sure | 17:04 |
fungi | oh, got it if that's not yet in 0.61.0 then we have some time | 17:04 |
gtema | that's in master currently | 17:05 |
fungi | we can probably make nodepool flexibly use the old and new versions with a try/except or something so that it supports multiple sdk versions | 17:05 |
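A minimal sketch of the kind of try/except compatibility shim fungi suggests; the attribute names for the new structure are assumptions based on gtema's description above, not the final sdk interface:

```python
def max_instances(compute_limits):
    """Return the instance quota from either the old or new limits object.

    Older openstacksdk exposed a flat max_total_instances attribute; the
    discussion above suggests the new structure nests it under an
    absolute-limits attribute.  Both names here are illustrative only.
    """
    try:
        # Old interface: flat attribute on the limits object.
        return compute_limits.max_total_instances
    except AttributeError:
        # Assumed new interface: nested under the absolute limits.
        return compute_limits.absolute.max_total_instances
```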
gtema | and I wanted to have some time exactly to catch issues like that before making rc | 17:05 |
clarkb | right the jobs are testing sdk from source so that we find these issues early and call them out | 17:05 |
clarkb | since the idea is/was nodepool is the canary for shade | 17:06 |
gtema | sadly some time ago we disabled the nodepool jobs because they were permanently failing and therefore this was not noticed on our side | 17:06 |
clarkb | gtema: they run reliably for nodepool and dib | 17:06 |
clarkb | and I think glean | 17:06 |
fungi | but yes, if you want users to have fewer surprises, an audit of the interfaces which have gone missing since the last release and then adding compat aliases for them for at least the next cycle while calling them out as deprecated would be great | 17:06 |
gtema | now yes, but 5-6 months ago they were not | 17:06 |
gtema | as I said, I will think about it and maybe bring this limits stuff back (in the way nodepool used it, which was itself really specific). Or I will propose a fix to nodepool to address the new structure | 17:08 |
clarkb | I think we should consider an sdk pin to the previous version now since we know some things are going to change. Then going forward try and keep up with the updates | 17:09 |
gtema | yes, this sounds great. Thanks | 17:09 |
gtema | one more time - please do not think I like to break things. I am only trying to make this (by now) beast maintainable | 17:10 |
clarkb | gtema: yup I understand. It's mostly that we've spent years with openstack repeatedly breaking users on these api interfaces and wondering why we get frustrated | 17:11 |
gtema | I know, and this is also one of the things that I want to make more reliable through the single interface | 17:12 |
clarkb | I think it would be good to think about who the end user is for the sdk in particular when making changes and designing things. Another complaint I've long had with many of the library tools (python-*client, sdk etc) is they use a form of method lookup that makes it almost impossible to know what a function I call in my code ends up doing. Maybe this refactoring will help that as it | 17:12 |
clarkb | will be more consistent | 17:12 |
clarkb | I find it really frustrating when an SDK for end users isn't greppable for the function names they call | 17:12 |
clarkb | I think that should be a requirement for any end user facing tool | 17:12 |
fungi | at least open source ones ;) | 17:12 |
clarkb | if I call getAttribute() and 'def getAttribute' doesn't exist in the code base I have a sad | 17:13 |
gtema | correct - that is exactly what I try to address: make sure nodepool/ansible/osc/etc are all landing in the same piece of the code | 17:13 |
clarkb | I think it is fine for code bases that are largely internal facing to have their fancy magic, but anything that random people on the Internet (me!) are expected to look at, use and occasionally debug should be greppable | 17:14 |
gtema | that's why the cloud layer (a.k.a. old shade) is not maintainable anymore - it gives you 200-300 functions back, filling the whole screen | 17:15 |
gtema | this isn't working anymore | 17:15 |
clarkb | gtema: well those functions are all greppable though right? it is the proxy layer that does magic based on the name of the api you are interfacing with and composes function names dynamically before invoking them? I admit I get lost in all that and it may be a different layer that does that | 17:17 |
gtema | that is what happens mostly in proxy layer | 17:17 |
opendevreview | Marcin Juszkiewicz proposed opendev/system-config master: reprepro: mirror Ubuntu UCA Yoga https://review.opendev.org/c/opendev/system-config/+/826500 | 17:18 |
clarkb | I've always felt that was a bug as it meant people using the SDK had to become experts in how the SDK is built in order to understand the few methods that they are using | 17:18 |
gtema | conn=openstack.connect() | 17:18 |
gtema | dir(conn) | 17:18 |
gtema | this gives you a lot back | 17:19 |
gtema | and `dir(conn.compute)` now gives you only compute methods back | 17:19 |
gtema | the issue was that those (i.e. create_server) were/are not doing the same thing | 17:19 |
clarkb | gtema: no but if I grep create_server I get the function definition | 17:20 |
clarkb | and I think that is a basic fundamental requirement of any external user facing interface | 17:20 |
gtema | so depending on whether you call `conn.create_server` or `conn.compute.create_server` you are doing different things | 17:20 |
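For illustration, the two call styles gtema is contrasting; exact signatures vary by sdk version, and the cloud name, image, and flavor names here are placeholders:

```python
import openstack

conn = openstack.connect(cloud="example")  # hypothetical clouds.yaml entry

# "cloud" layer (the old shade-style porcelain): high level, resolves names
# and does orchestration for you.
server = conn.create_server(name="demo", image="example-image",
                            flavor="example-flavor", wait=True)

# "proxy" layer: thinner wrapper over the compute API; expects ids/objects
# and does less orchestration on your behalf.
image = conn.image.find_image("example-image")
flavor = conn.compute.find_flavor("example-flavor")
server = conn.compute.create_server(
    name="demo", image_id=image.id, flavor_id=flavor.id)
server = conn.compute.wait_for_server(server)
```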
clarkb | It sounds like these improvements won't help with grep but basic lookup of functionality will at least be consistent, so once you learn how to do that it becomes easier | 17:22 |
* gtema sent a code block: https://matrix.org/_matrix/media/r0/download/matrix.org/SYmhrGlDRokrrjEGGmprkNwS | 17:22 | |
gtema | and ```... (full message at https://matrix.org/_matrix/media/r0/download/matrix.org/QKSymMJMfnDPSymfAwHDnfdt) | 17:22 |
gtema | so yes - inspect now works | 17:22 |
clarkb | right but I'm not going to know to use these tools if I've never written python before (and in many cases even if I have) and am just trying to write a tool to talk to my openstack cloud | 17:24 |
clarkb | That is the target audience here | 17:24 |
fungi | i write python like it's a shell script ;) | 17:24 |
gtema | yeah, for those we have quite a lot of api ref and some examples in sdk | 17:26 |
clarkb | might be good to add use of the repl + inspect to find method defs to the docs. I would find that useful for the next time i'm trying to hunt down some weird behavior | 17:28 |
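A quick sketch of the repl workflow being suggested, using only the standard inspect module on top of an sdk connection; the cloud name is a placeholder:

```python
import inspect
import openstack

conn = openstack.connect(cloud="example")  # hypothetical cloud name

# See what the compute proxy exposes without reading the source tree.
print([name for name in dir(conn.compute) if "server" in name])

# Jump straight to where a method is actually defined, and read it.
print(inspect.getsourcefile(conn.compute.create_server))
print(inspect.getsource(conn.compute.create_server))
```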
fungi | so when you say you're getting rid of the cloud layer in the sdk, will the simple examples like https://docs.openstack.org/openstacksdk/latest/user/guides/intro.html still be as simple, or will doing simple things like authenticating and booting a server instance with external network access remain as simple as it has been? | 17:29 |
gtema | well, I have not got rid of it, but ensured that it will definitely land in the proxy layer | 17:29 |
gtema | https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/cloud/_compute.py#L355 | 17:30 |
gtema | that is exactly touching the limits case | 17:30 |
clarkb | ya sounds like the cloud layer is being updated to call the proxy layer for all operations so that its behaviors match the proxy layer | 17:30 |
gtema | which ensures that whichever changes nova implements and whatever new microversions are added, it will always work like "we expect" | 17:30 |
fungi | the main things we wanted from shade were to hide differences between different openstack environments, and abstract away a lot of the complex orchestration around things like attaching networks or floating ips | 17:30 |
clarkb | fungi: and image uploads | 17:31 |
fungi | yep | 17:31 |
clarkb | fungi: I suspect the cloud layer can continue to do that but at the cost of type changes since the proxy layer is returning the results now | 17:31 |
gtema | this is still there, but to make it possible you sometimes need to finally sacrifice things that block you | 17:31 |
fungi | be able to say "i have this local file i want to upload to $cloud, please do whatever is necessary to make it happen" | 17:31 |
fungi | and let the sdk figure out the rest | 17:32 |
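The sort of one-call usage fungi means, sketched against the cloud layer; the image name and file path are placeholders and the keyword arguments are illustrative rather than a definitive signature:

```python
import openstack

conn = openstack.connect(cloud="example")  # hypothetical clouds.yaml entry

# "I have this local file, please do whatever is necessary": the cloud
# layer historically chose the right upload mechanism per cloud (direct
# glance upload vs. swift-based import, etc.) behind this single call.
image = conn.create_image(
    name="example-image",
    filename="/tmp/example.qcow2",
    wait=True,
)
print(image.id, image.status)
```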
gtema | correct, and that way if you tell me something is wrong I can pin it to the place in the code where this is definitely happening | 17:32 |
gtema | because this "and which interface have you used" question is making me sick | 17:32 |
fungi | yeah, i can understand that it's hard to field bug reports if people don't show you exactly how they're calling the lib | 17:33 |
clarkb | ya so to summarize: the goals of the cloud layer remain the same, but the underlying implementation for the cloud layer is being updated to use the proxy layer exclusively. This will make it more consistent with the rest of the SDK but will also result in type changes in places | 17:33 |
gtema | right | 17:33 |
gtema | and with this we also guarantee that the response from different methods (proxy or cloud) is always of the same type | 17:34 |
gtema | this is exactly what previously was not the case | 17:34 |
fungi | so may require some very careful explaining in release nodes/docs, and maybe finally a major version increase ;) | 17:34 |
fungi | er, release notes | 17:34 |
gtema | that is exactly the target | 17:34 |
clarkb | We know this will break some people (since it already broke nodepool and is expected to break ansible). Ya I think the key then becomes communication not just in "this has changed" but how to fix/update the usage that has broken | 17:34 |
gtema | right - it is all on my desk | 17:35 |
clarkb | And if users run into problems where old sdk is needed for one cloud and new sdk is needed for another cloud. We're punting on that for now? | 17:35 |
fungi | for nodepool specifically, we ought to be able to support old and new sdk at least for some brief period to give people a chance to upgrade | 17:35 |
clarkb | This is still my biggest concern, just with the problems we've had with things like cinder v1 and rackspace. But maybe there isn't much to be done about that | 17:35 |
fungi | but yes, hopefully we can still find a way to be able to use the new sdk with old public cloud deployments | 17:36 |
gtema | wrt a "transition" period, it doesn't really feel possible with such a huge change. Because of that I plan to cut a major release to be able to have those "breaking" changes | 17:36 |
gtema | if required we can always make some interim release (from before the r1 merge state) | 17:37 |
fungi | openstack doesn't have much control over when major cloud providers do their upgrades, if ever. we can declare bankruptcy on some of them, and require consumers of the software to implement their own support for old apis, but that's also going to alienate users of some popular hosting providers and maybe the providers themselves | 17:37 |
clarkb | fungi: ya that's the tension here and there aren't many good answers | 17:38 |
clarkb | basically someone has to take the pain (user, sdk devs, or cloud) | 17:38 |
gtema | so far I haven't seen any good answer | 17:38 |
gtema | but again - sdk r1 will still work on older clouds, but it breaks users in a few corner cases | 17:39 |
*** jpena is now known as jpena|off | 17:44 | |
*** sshnaidm is now known as sshnaidm|afk | 17:49 | |
*** amoralej is now known as amoralej|off | 17:53 | |
corvus | i'm going to perform a rolling restart of zuul except the executors | 20:54 |
corvus | stopping zuul01 | 20:55 |
corvus | restarting it | 20:57 |
fungi | thanks! | 20:58 |
*** dviroel is now known as dviroel|afk | 21:06 | |
corvus | 01 is up, restarting 02 now. there will be a web outage. | 21:11 |
corvus | that's up. i'm going to restart the mergers now | 21:26 |
jrosser | i think that may have caused this https://zuul.opendev.org/t/openstack/build/d6643539f30e4dcb99bb67092dcf2688 | 21:26 |
jrosser | it likely doesn't matter but just an fyi | 21:27 |
corvus | yep. that can be made more robust by load-balancing to multiple zuul-web processes | 21:27 |
jrosser | again fyi but this is an odd failure probably related to the restart https://review.opendev.org/c/openstack/openstack-ansible/+/826381 | 21:34 |
jrosser | it suggests a rebase but as far as i can see the parent is already the most recent patch | 21:34 |
corvus | can happen with a merger restart; 'recheck' to double check | 21:36 |
ianw | on the gerrit avatars (https://review.opendev.org/c/opendev/system-config/+/826196) i somewhat agree with frickler on the concerns with gravatar | 23:07 |
ianw | i found https://phabricator.wikimedia.org/T191183 interesting; and i think hashar_ and co. went through just about everything i was thinking | 23:07 |
ianw | (have a job where people upload their own avatars to a repo, use a proxy, etc.) | 23:08 |
ianw | so far it actually seems to me that the grey circle should be disabled if you don't have an avatar plugin, so i'm not sure what's going on yet | 23:09 |
hashar_ | ianw: o/ | 23:09 |
hashar_ | oh yeah that was a bit of an epic discussion | 23:10 |
hashar_ | the gravatar-based backend is all fine and works out of the box | 23:10 |
hashar_ | the problem Wikimedia has is that we have a very tight privacy policy and we don't want to leak any browsing activities to third parties | 23:10 |
hashar_ | well it is not a problem, more of a constraint | 23:10 |
hashar_ | so we can't use the gravatar backend because that means leaking info to the company running it. Though we could have set up a local proxy or implemented our own backend serving the avatars people set in their profile on our Phabricator | 23:11 |
hashar_ | then we don't have the same account on Gerrit and Phabricator | 23:12 |
hashar_ | so essentially after a few rounds of discussion we went with "sure it is a great idea, but that requires a significant amount of work here and there for something that is not really critical to our mission" and it got declined | 23:13 |
clarkb | I personally prefer not to have avatars at all... | 23:13 |
hashar_ | that is the summary off the top of my head | 23:13 |
clarkb | ianw: ya if we can just disable it that might be best | 23:13 |
hashar_ | Google / Gerrit assume folks have avatars so there might be a ui glitch when your instance does not have avatars enabled | 23:13 |
hashar_ | I had a bug filed because the name was touching the left side of the gray chip with 0px padding which was a bit annoying | 23:14 |
ianw | yeah, i guess that comes from the google account | 23:14 |
hashar_ | probably yes | 23:14 |
hashar_ | the good news is that Gerrit is properly maintained at Google, they have a dedicated UX afaik and more than a few UI developers | 23:14 |
hashar_ | and I think they genuinely try to be as reactive as possible regarding non-google requests. I have had a really positive experience since I took over maintenance of our gerrit | 23:15 |
hashar_ | disabling avatars might be made a per-user option | 23:16 |
clarkb | yes, as I've gotten more involved I've been happy with working with upstream. Still trying to learn their expectations and processes though | 23:16 |
hashar_ | and if you have a source of avatars on your infra, I imagine it is not the end of the world to write the java adaptor in the plugin | 23:16 |
ianw | they do merge in a weird way :) but yeah, everyone has been helpful | 23:16 |
hashar_ | the "repo-discuss" list is the way to go it is quite great | 23:16 |
ianw | yeah, the source of avatars is the issue. we could have people upload to a specific repository (i noted that was an option wikimedia explored :) | 23:17 |
hashar_ | and a side channel on Slack https://gerritcodereview.slack.com/ (might require some invite, I can't remember how I joined it ) | 23:17 |
clarkb | I had to ask Luca but he got me on there | 23:17 |
ianw | but to not leak a list of usernames i guess we would have to have people hash their image, and then write something for the external-avatars plugin to hash the username in the url | 23:17 |
ianw | and also, we have to have someone to maintain that repo | 23:18 |
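A tiny sketch of the hashing idea ianw raises, independent of any particular Gerrit plugin; the URL layout and base path are purely hypothetical:

```python
import hashlib

def avatar_url(username: str, base: str = "https://example.opendev.org/avatars") -> str:
    # Hash the username so the served file names do not leak a list of
    # account names; contributors would name their uploaded image with
    # the same hash.  The scheme and base URL are assumptions.
    digest = hashlib.sha256(username.lower().encode("utf-8")).hexdigest()
    return f"{base}/{digest}.png"

print(avatar_url("example-user"))
```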
ianw | i think i'll try and get 3.5 into our infra, and we can look at the screenshots and see if the problem still occurs | 23:19 |
ianw | it does seem like it should be checking, e.g. https://gerrit.googlesource.com/gerrit/+/114690e49123898cedc07512cd3389a9c65b9b9b/polygerrit-ui/app/elements/shared/gr-account-label/gr-account-label.ts#193 | 23:19 |
hashar_ | feel free to reach out to tgr who filed the task ( https://phabricator.wikimedia.org/p/Tgr/ ). He is on libera.chat and might have some insights | 23:20 |
hashar_ | I am not sure in which timezone he is though | 23:20 |
*** hashar_ is now known as hashar | 23:21 | |
hashar | I am off it is past midnight here. Happy hacking! | 23:21 |
*** hashar is now known as Guest705 | 23:21 | |
Guest705 | clarkb: congratulations on the Gerrit 3.4 upgrade! | 23:22 |
clarkb | thanks! ianw did a lot of the work | 23:23 |
*** Guest705 is now known as hashar | 23:24 | |
hashar | well congratulations ianw for the Gerrit 3.4 upgrade | 23:25 |
hashar | I am still working on it and fixing some javascript here and there | 23:25 |
hashar | anyway, it is past midnight as I said. So bed time! | 23:26 |
fungi | i agree that leaking browsing activity and content to a third party like gravatar is not great for a privacy-conscious deployment like we strive to maintain | 23:31 |
fungi | i'm not sure if we'd need to hash the lookups since gerrit already puts e-mail addresses for everyone in tooltips anyway, but it might be nice if we ended up using it for other services | 23:32 |
clarkb | fwiw I think gerrit has had a lookup for avatars for a long time, it just 404s because our server has never been set up to return them. I think the difference now is you get the grey orb | 23:32 |
clarkb | rather than just not rendering at all | 23:33 |
ianw | i feel like that only happens though if you have the avatars plugin installed, which is either gravatar or the "lookup a url" external one | 23:37 |
clarkb | ya I don't have any 404s in my request log. Old gerrit would 404 | 23:37 |
clarkb | I guess the grey orb is expecting a request and a response and then gets filled over like a template? | 23:38 |
clarkb | so ya a UI bug then | 23:38 |
*** rlandy|ruck is now known as rlandy|out | 23:41 | |
clarkb | there is a hideAvatar flag in the hard to read js for the names | 23:43 |
ianw | yeah, i think that comes from | 23:44 |
ianw | const hasAvatars = !!_config?.plugin?.has_avatars; | 23:44 |
ianw | this.cancelLeftPadding = !this.hideAvatar && !hasAttention && hasAvatars; | 23:44 |
ianw | ?. is a new one for my javascript box | 23:45 |
clarkb | it does default to False | 23:45 |
ianw | yeah, which suggests to me config.plugin.has_avatars is somehow true ... but when i look at | 23:46 |
ianw | (can't find the link, you can get a config json) | 23:48 |
ianw | curl https://review.opendev.org/config/server/info | 23:49 |
clarkb | ya that doesn't have it and if not set it is false according to the docs | 23:50 |
clarkb | I suspect that this is a bug in handling that somewhere then | 23:50 |