fzzf | clarkb: thank you. I have solved it by replacing the pip source; it may have been caused by the pip source. I'm using pip 21.1.2 | 01:23 |
*** bhagyashris_ is now known as bhagyashris|ruck | 05:25 | |
*** pojadhav- is now known as pojadhav | 05:50 | |
opendevreview | Martin Kopec proposed openstack/patrole master: Skip test_force_detach_volume_from_instance in 2 jobs https://review.opendev.org/c/openstack/patrole/+/800594 | 06:30 |
*** rpittau|afk is now known as rpittau | 07:39 | |
opendevreview | Martin Kopec proposed openstack/patrole master: Fix updating network provider attributes https://review.opendev.org/c/openstack/patrole/+/660120 | 07:49 |
kopecmartin | soniya29: rechecks in patrole won't help, the tests are constantly failing .. recheck just one of the patches if you wanna verify the gates are passing | 07:56 |
soniya29 | kopecmartin, the job failures vary; a job that fails in one recheck passes in another | 08:46 |
soniya29 | kopecmartin, that's why i have done multiple rechecks, but still no luck | 08:46 |
*** akekane_ is now known as abhishekk | 09:38 | |
opendevreview | Martin Kopec proposed openstack/patrole master: Fix updating network provider attributes https://review.opendev.org/c/openstack/patrole/+/660120 | 09:43 |
opendevreview | Luigi Toscano proposed openstack/tempest master: Fix tempest-slow-py3: use the correct inheritance chain https://review.opendev.org/c/openstack/tempest/+/800614 | 10:16 |
opendevreview | Luigi Toscano proposed openstack/tempest master: Fix tempest-slow-py3: use the correct inheritance chain https://review.opendev.org/c/openstack/tempest/+/800614 | 10:21 |
tosky | uhm, what am I doing wrong? | 10:33 |
opendevreview | Luigi Toscano proposed openstack/tempest master: Fix tempest-slow-py3: use the correct inheritance chain https://review.opendev.org/c/openstack/tempest/+/800614 | 10:37 |
opendevreview | Luigi Toscano proposed openstack/tempest master: Fix tempest-slow-py3: use the correct inheritance chain https://review.opendev.org/c/openstack/tempest/+/800614 | 10:44 |
opendevreview | Martin Kopec proposed openstack/patrole master: Fix updating network provider attributes https://review.opendev.org/c/openstack/patrole/+/660120 | 10:55 |
*** dviroel|out is now known as dviroel | 11:20 | |
*** ricolin_ is now known as ricolin | 12:48 | |
opendevreview | Martin Kopec proposed openstack/patrole master: Fix updating network provider attributes https://review.opendev.org/c/openstack/patrole/+/660120 | 13:57 |
kopecmartin | #startmeeting qa | 14:00 |
opendevmeet | Meeting started Tue Jul 13 14:00:21 2021 UTC and is due to finish in 60 minutes. The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:00 |
opendevmeet | The meeting name has been set to 'qa' | 14:00 |
kopecmartin | #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours | 14:00 |
kopecmartin | agenda ^^^ | 14:00 |
gmann | o/ | 14:01 |
kopecmartin | o/ | 14:01 |
jparoly | o/ | 14:01 |
kopecmartin | let's start | 14:02 |
kopecmartin | #topic Announcement and Action Item | 14:02 |
kopecmartin | no announcements apart from PTG but that will be covered in a minute | 14:02 |
kopecmartin | #topic Xena Priority Items progress | 14:02 |
kopecmartin | any updates regarding the priority items? | 14:02 |
gmann | nothing from my side | 14:03 |
kopecmartin | none from my side unfortunately | 14:03 |
kopecmartin | #topic OpenStack Events Updates and Planning | 14:04 |
kopecmartin | we can start planning for the next PTG happening in October | 14:04 |
kopecmartin | I created a doodle to find a good time for the sessions | 14:05 |
kopecmartin | #link https://doodle.com/poll/a8gakx4ca6zvtuem?utm_source=poll&utm_medium=link | 14:05 |
kopecmartin | please fill in your availability by 20th July | 14:05 |
gmann | +1 | 14:05 |
gmann | will do | 14:05 |
kopecmartin | I've created an etherpad as well for topics to discuss during the sessions | 14:05 |
kopecmartin | #link https://etherpad.opendev.org/p/qa-yoga-ptg | 14:05 |
paras333 | ++ | 14:05 |
kopecmartin | so if anyone has anything to discuss, please write it there | 14:05 |
kopecmartin | we need to fill in a team survey .. i'll do that, however i need help with 2 questions | 14:06 |
kopecmartin | 1. Are there other groups we need to avoid scheduling conflicts with? | 14:06 |
kopecmartin | 2. Zoom, Meetpad or other platform? | 14:06 |
kopecmartin | any insight, opinions? | 14:06 |
kopecmartin | I personally liked meetpad more, but I don't mind zoom (if it works in the browser) | 14:08 |
jparoly | no preference for me | 14:09 |
gmann | kopecmartin: TC, Nova, policy for me | 14:09 |
kopecmartin | ack | 14:09 |
kopecmartin | for me it's refstack only i think | 14:10 |
gmann | but keeping it in first 2 days should work, though need to check the time of PTG | 14:10 |
gmann | refstack does not have PTG right? | 14:10 |
kopecmartin | we had last time | 14:10 |
gmann | ohk | 14:10 |
kopecmartin | we had to sync with the teams | 14:10 |
gmann | oh you mean interop | 14:10 |
kopecmartin | a lot of changes were needed from the guidelines perspective .. oh, yeah, interop, sorry :) | 14:11 |
gmann | i see | 14:11 |
kopecmartin | i use them as synonyms :D | 14:11 |
kopecmartin | based on the doodle, i'll try to book slots in first 2 days | 14:11 |
jparoly | btw, once I saved that doodle poll, the times changed | 14:11 |
kopecmartin | maybe got calculated to a different time zone? | 14:12 |
kopecmartin | #topic Gate Status Checks | 14:14 |
gmann | kopecmartin: maybe we can collect the topics first and then decide how many slots we need | 14:14 |
kopecmartin | there was an issue with cinder gates as tosky pointed out earlier | 14:14 |
jparoly | yeah i'll have to check time zone, thanks | 14:14 |
gmann | tempest-full-py3 ? | 14:14 |
kopecmartin | however i see his fix is already being merged | 14:14 |
kopecmartin | #link https://review.opendev.org/c/openstack/tempest/+/800614 | 14:14 |
kopecmartin | gmann: yes | 14:14 |
kopecmartin | jparoly: np :) | 14:14 |
gmann | yeah, the base jobs got messed up there. | 14:14 |
gmann | my bad | 14:15 |
tosky | thanks for merging | 14:15 |
tosky | I wasn't sure the fix was working though | 14:15 |
tosky | I've tested it but the change was not taken into account and I was puzzled | 14:15 |
tosky | let me find the review | 14:15 |
kopecmartin | gmann: true, we are supposed to book some slots by July 22nd, however i'm not sure if it's just a day preference or actual time slots | 14:15 |
gmann | tosky: ok, do you want to hold that until we test? i can push the testing patch | 14:15 |
tosky | here: https://review.opendev.org/c/openstack/cinder/+/800618 | 14:15 |
tosky | the problem is that, when I've checked in the zuul queue, the dependency on tempest wasn't visible | 14:16 |
tosky | you usually see a chain of dependencies in zuul, but I could only see the dependency on https://review.opendev.org/c/openstack/cinder/+/800229/ and not on https://review.opendev.org/c/openstack/tempest/+/800614 | 14:16 |
tosky | and the job obviously failed as before | 14:16 |
gmann | USE_PYTHON3 is false there, let me check after meeting | 14:16 |
tosky | not sure what happened there | 14:17 |
gmann | humm | 14:17 |
tosky | maybe some issue with the way zuul applies the patches (and for some reason that one wasn't considered) | 14:17 |
tosky | I've run recheck on 800618, check the zuul queue (we can discuss it later) | 14:18 |
gmann | oh, i also just did | 14:18 |
gmann | let's see if it takes | 14:18 |
gmann | I am holding tempest fix until then | 14:19 |
dansmith | gmann: friendly reminder to look at these: https://review.opendev.org/q/topic:%2522bp/glance-unified-quotas%2522+status:open | 14:19 |
gmann | dansmith: yeah, will do today. | 14:19 |
dansmith | thanks | 14:19 |
kopecmartin | #topic Periodic jobs Status Checks | 14:20 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=tempest-full-victoria-py3&job_name=tempest-full-ussuri-py3&job_name=tempest-full-train-py3&pipeline=periodic-stable | 14:20 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=tempest-all&job_name=tempest-full-oslo-master&pipeline=periodic | 14:20 |
kopecmartin | this seems all good | 14:20 |
kopecmartin | #topic Sub Teams highlights | 14:20 |
kopecmartin | #link https://review.openstack.org/#/q/project:openstack/tempest+status:open | 14:21 |
kopecmartin | #link https://review.openstack.org/#/q/project:openstack/patrole+status:open | 14:21 |
kopecmartin | #link https://review.openstack.org/#/q/project:openstack/devstack+status:open | 14:21 |
kopecmartin | #link https://review.openstack.org/#/q/project:openstack/grenade+status:open | 14:21 |
kopecmartin | #link https://review.opendev.org/#/q/project:openstack/hacking+status:open | 14:21 |
kopecmartin | anything from any subteam? | 14:21 |
kopecmartin | I started putting failing tests to an exclude list in patrole gates so that we can unblock them | 14:21 |
kopecmartin | #link https://review.opendev.org/c/openstack/patrole/+/660120 | 14:21 |
kopecmartin | some of them are flaky, some are failing constantly .. in the end the jobs are very unstable | 14:22 |
kopecmartin | and the queue of open patches is growing | 14:22 |
gmann | kopecmartin: but we should fix them instead of skipping | 14:22 |
kopecmartin | i would if i knew how, no idea what's going on there | 14:22 |
gmann | otherwise they are left like that | 14:23 |
gmann | kopecmartin: let me check this week. | 14:23 |
gmann | patrole is not urgent for any project, so I think we should fix the failures instead of skipping them | 14:23 |
kopecmartin | yeah, that makes sense | 14:24 |
gmann | will check this week | 14:24 |
kopecmartin | gmann: thanks | 14:24 |
kopecmartin | cool, moving on, i don't see any patches with priority +1 nor +2 | 14:24 |
kopecmartin | #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade+OR+project:openstack/hacking) | 14:24 |
kopecmartin | #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade+OR+project:openstack/hacking) | 14:25 |
kopecmartin | #topic Open Discussion | 14:25 |
kopecmartin | anything from anyone? | 14:25 |
kopecmartin | seems not | 14:27 |
kopecmartin | #topic Bug Triage | 14:27 |
kopecmartin | #link https://etherpad.opendev.org/p/qa-bug-triage-xena | 14:27 |
kopecmartin | numbers are recorded as always | 14:27 |
kopecmartin | a small increase in patrole bugs (i created 2) | 14:27 |
kopecmartin | otherwise the numbers are quite steady | 14:28 |
kopecmartin | i don't have any specific bug to bring up | 14:29 |
kopecmartin | so unless someone else wants to bring something up, i think we can close this office hour | 14:29 |
jparoly | thank you kopecmartin | 14:29 |
kopecmartin | thanks for the attendance | 14:30 |
kopecmartin | #endmeeting | 14:30 |
opendevmeet | Meeting ended Tue Jul 13 14:30:28 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 14:30 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/qa/2021/qa.2021-07-13-14.00.html | 14:30 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/qa/2021/qa.2021-07-13-14.00.txt | 14:30 |
opendevmeet | Log: https://meetings.opendev.org/meetings/qa/2021/qa.2021-07-13-14.00.log.html | 14:30 |
dansmith | gmann: kopecmartin apologies for not realizing the meeting was in progress | 14:30 |
gmann | kopecmartin: thanks | 14:30 |
gmann | dansmith: np! | 14:30 |
kopecmartin | dansmith: np, it was the right time to bring up reviews :) | 14:31 |
dansmith | was it? I thought it was in the events section | 14:31 |
dansmith | well, anyway, even if lucky, I didn't realize, so still bad :/ | 14:32 |
gmann | dansmith: if i remember correctly, for unified limits the flow is: 1. consider the overridden limit, if any; otherwise 2. the registered limit | 14:41 |
dansmith | gmann: yep | 14:42 |
gmann | there are no conf quotas/limits considered, right? | 14:42 |
dansmith | on the project side, correct | 14:42 |
dansmith | project provides usage, compares against limit found in keystone | 14:43 |
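The resolution flow described here (the overridden limit if one exists, otherwise the registered limit, with usage supplied by the project) can be sketched roughly as follows; the names and data structures are illustrative, not keystone's or oslo.limit's actual API:

```python
# Illustrative sketch of unified-limit resolution: a per-project override
# wins; otherwise the service-wide registered limit applies. Keystone only
# stores the limits; each project reports its own usage and compares.

registered_limits = {"image_size_total": 1000}                # service-wide defaults
project_overrides = {("project-a", "image_size_total"): 500}  # per-project overrides

def effective_limit(project_id: str, resource: str) -> int:
    override = project_overrides.get((project_id, resource))
    return override if override is not None else registered_limits[resource]

def over_limit(project_id: str, resource: str, usage: int) -> bool:
    # The project supplies its own usage; keystone knows nothing about it.
    return usage > effective_limit(project_id, resource)

print(effective_limit("project-a", "image_size_total"))  # -> 500 (override)
print(effective_limit("project-b", "image_size_total"))  # -> 1000 (registered)
```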
gmann | i remember in the proposed nova patch, john proposed treating a few resource limits as static from conf. is there such a case in glance? | 14:43 |
dansmith | not really.. they had some global limits that apply to everyone (not per-tenant) and those are mostly still honored, but they're different | 14:43 |
gmann | ok | 14:44 |
dansmith | there is one limit that is disabled if unified limits are enabled in glance, but only because enforcing the two together makes no sense | 14:44 |
dansmith | there's no compatibility with existing things like nova though, because there really isn't anything in glance already | 14:44 |
gmann | i see | 14:44 |
dansmith | the current things are more "prevent runaway script from allocating everything" and not "make sure tenant X only uses N things" | 14:44 |
gmann | you mean this for the global limit, right? the *not "make sure tenant X only uses N things"* part. | 14:47 |
gmann | dansmith: and if i remember there was no quota API in glance. | 14:51 |
dansmith | gmann: correct, I have a very rough proposed WIP patch to add a usage api so people can tell how much they're using (which will show their limits) but that's a ways off | 14:51 |
dansmith | but nothing currently | 14:51 |
gmann | dansmith: ok, but users are supposed to use the keystone API for that, right? | 14:52 |
gmann | or are we going to add it on the service side? | 14:52 |
dansmith | gmann: apparently not | 14:52 |
dansmith | gmann: I think by default keystone doesn't let users see limits because "limits are service-level things and users don't have service-level access" | 14:53 |
dansmith | gmann: but even still, we need a usage api so they can see what they're using, and it makes sense to expose their limit there as well, like nova's quota api does | 14:53 |
gmann | dansmith: ok, so there is no keystone API to get usage, at least not yet https://docs.openstack.org/api-ref/identity/v3/index.html?expanded=get-enforcement-model-detail#unified-limits | 14:55 |
dansmith | gmann: there never will be | 14:55 |
gmann | humm | 14:55 |
dansmith | gmann: keystone doesn't know anything about your usage, that's up to the projects | 14:55 |
gmann | dansmith: yeah, I was thinking of a common place to get limit and usage, but for usage it would be a proxy from the service | 14:56 |
dansmith | keystone just holds the limit, and the model for determining what your limit is (i.e. flat or hierarchical) | 14:56 |
gmann | yeah | 14:56 |
dansmith | gmann: right, the common place will be the project/service | 14:56 |
gmann | make sense | 14:56 |
gmann | I think we have not covered that in the nova spec, if i remember. at least not whether we can use the existing quota API for that, which we thought to deprecate. | 14:57 |
dansmith | AFAIK, the code doesn't plan to deprecate the API in nova, | 14:58 |
dansmith | just make it work with keystone as the limit source | 14:58 |
dansmith | but I might have missed something | 14:58 |
dansmith | also, it's possible that this confusion (which is legit) is held by other people | 14:58 |
gmann | dansmith: at some point I think we said in the spec to deprecate it, once people move to the new limit API? | 14:58 |
dansmith | gmann: it's possible that we said we would deprecate *setting* the limit through nova, which makes sense for sure | 14:59 |
dansmith | but not users seeing their limits, AFAIK | 14:59 |
dansmith | but it's also possible that people don't realize that users can't hit the limits api in keystone | 14:59 |
dansmith | even lance didn't seem to know that, if I recall my first convo with him | 14:59 |
dansmith | because it took a little while for me to get the devstack patch to even be able to make glance *see* the limits | 14:59 |
gmann | ohk | 15:00 |
gmann | dansmith: as it is default to system-reader or project user https://github.com/openstack/keystone/blob/10057702ac361213e74472ec1d0d4e4c4a041f09/keystone/common/policies/limit.py#L24 | 15:04 |
gmann | no issue for project user to get limit | 15:04 |
gmann | or are there more hidden restrictions, like getting the project-id or so? | 15:05 |
dansmith | gmann: ack, but just telling you our experience :) | 15:05 |
gmann | dansmith: ok. I think clearer docs on the project side would help on what all is there and how to get it. | 15:06 |
dansmith | gmann: maybe I'm confusing the ability to see registered limits.. so maybe project users can see their own calculated limit in isolation, but the glance user couldn't see everyone's limit or something like that | 15:06 |
dansmith | but even still, we have to have usage apis in the projects, it seems silly to make users hand-correlate each usage element to a separate display of limits | 15:06 |
gmann | dansmith: yeah, project-id is from limit so it should be 'can see only your own limit' | 15:07 |
gmann | dansmith: make sense on usage API. | 15:07 |
dansmith | gmann: I wonder if that shows you your limit even if one is not set (i.e. inheriting the registered limit), or only if you have one set for you specifically | 15:07 |
gmann | registered limit seems to open for everyone https://github.com/openstack/keystone/blob/10057702ac361213e74472ec1d0d4e4c4a041f09/keystone/common/policies/registered_limit.py#L20-L29 | 15:09 |
gmann | but i think GET limit should not fall back to the registered one, though I'm not sure | 15:10 |
dansmith | ack | 15:12 |
tosky | gmann: (sorry for jumping in) about that tempest change, should we ask the zuul people why the depends-on is not considered? | 15:16 |
tosky | I didn't spot any syntax error in the usage of the Depends-On keyword itself | 15:17 |
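For reference, a Depends-On footer is just a trailer line in the Gerrit commit message. A simplified sketch of how such a footer can be picked out of a message (this is not Zuul's actual parser, only an illustration of the syntax in question):

```python
import re

# A commit message carrying a Depends-On footer, as in the change above.
commit_message = """Fix tempest-slow-py3: use the correct inheritance chain

Depends-On: https://review.opendev.org/c/openstack/tempest/+/800614
"""

# Simplified extraction of the dependency URLs (illustrative, not Zuul's code):
# match "Depends-On:" at the start of a line and capture the URL after it.
deps = re.findall(r"^Depends-On:\s*(\S+)", commit_message, re.MULTILINE)
print(deps)  # -> ['https://review.opendev.org/c/openstack/tempest/+/800614']
```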
gmann | tosky: yeah it should; let's wait and see if it still doesn't pick it up. | 15:17 |
gmann | tosky: ah, got it | 15:18 |
gmann | tosky: sorry, I thought ussuri does not use master tempest. | 15:19 |
gmann | but it uses master, so it should pick up the depends-on | 15:19 |
tosky | the poor ussuri is still fully supported (and train too, I think the train-last tag hasn't been created yet) | 15:19 |
gmann | tosky: yeah, train still has not moved to EM | 15:22 |
gmann | oh, it has; will create train-last after the tempest-full-py3 fix is merged | 15:22 |
tosky | so, nag people on #zuul ? | 15:26 |
gmann | sure | 15:26 |
*** rpittau is now known as rpittau|afk | 16:23 | |
gmann | dansmith: reviewed both tempest patches, a few comments. it seems depends-on did not work in the current gate result, which is an issue on the zuul side. once that is fixed we can recheck to see the results | 16:57 |
dansmith | thanks will look in a bit | 16:57 |
dansmith | it worked earlier, but that was ... a while ago :/ | 16:57 |
gmann | sure | 16:57 |
opendevreview | Vishal Manchanda proposed openstack/devstack-gate master: Retire django-openstack-auth https://review.opendev.org/c/openstack/devstack-gate/+/800693 | 17:25 |
-opendevstatus- NOTICE: Depends-On using https://review.opendev.org URLs are currently not working. This was due to a config change in Zuul that we are reverting and will be restarting Zuul to pick up. | 17:42 | |
gmann | dansmith: for this https://review.opendev.org/c/openstack/tempest/+/788346/10/tempest/api/image/v2/test_images.py#856 | 18:54 |
gmann | dansmith: at quota it should fail, right? like the image create quota | 18:55 |
gmann | this is a little confusing to me | 18:55 |
dansmith | gmann: no, because we call oslo.limit.Enforce with delta=0, | 18:56 |
dansmith | because we don't know the image size, so zero is the only thing we can do, and it won't fail *at* limit because you're not over it yet | 18:56 |
dansmith | gmann: the image-counting ones can fail *at* quota, or before you go over, because you're trying to allocate an image (or a staging file), which is always n=1 | 18:58 |
gmann | dansmith: i see. though it is a little confusing, as I can upload a 2 (or higher, say 5) MiB image on a 1 MiB quota | 19:05 |
gmann | so at quota, an image of any size will pass. | 19:06 |
gmann | dansmith: or the first image of 5 MiB will also pass with a 1 MiB quota, right? | 19:07 |
dansmith | gmann: right, you can blow way past your quota if you time it right | 20:01 |
dansmith | gmann: we could be checking in the data pipeline, but it would involve calling to keystone *while* processing data, which would be terrible | 20:02 |
dansmith | gmann: unfortunately, glance decided long ago to allow unbounded uploads, so this is kinda what we have to deal with, unless we want an api change | 20:02 |
dansmith | and so far, it seems unpopular | 20:02 |
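The delta=0 behavior dansmith describes can be sketched minimally; `enforce` here is a stand-in for the real oslo.limit enforcement, not its actual signature:

```python
# Minimal sketch (assumed names, not oslo.limit's real API): enforcement
# raises only when usage plus the requested delta would exceed the limit.
# With delta=0 (image size unknown before upload), a check made exactly at
# quota still passes, which is why an upload can blow past the byte quota.

def enforce(usage: int, limit: int, delta: int) -> None:
    if usage + delta > limit:
        raise RuntimeError("over limit")

# At quota (usage == limit), a delta=0 check passes: the upload may start.
enforce(usage=1, limit=1, delta=0)

# Counted resources (creating an image is always delta=1) do fail at quota.
try:
    enforce(usage=1, limit=1, delta=1)
except RuntimeError:
    print("image-count enforcement fails at quota")
```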
*** dviroel is now known as dviroel|out | 21:41 | |
gmann | dansmith: ack. | 22:09 |
opendevreview | Ghanshyam proposed openstack/tempest master: DNM: testing https://review.opendev.org/c/openstack/tempest/+/793632 | 23:09 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!