Wednesday, 2024-05-29

@dfajfer:fsfe.org I'm clueless as to why the percona operator gives me such errors (doesn't create any pdbs): 09:39
```
{"level":"info","ts":1716973117.6496503,"logger":"setup","msg":"Runs on","platform":"kubernetes","version":"v1.27.12-gke.1115000"}
{"level":"info","ts":1716973117.6497424,"logger":"setup","msg":"Git commit: 46b71b736a7dcfde5e401f355a39eb8b8c65e449 Git branch: release-1-11-0 Build time: 2022-06-01T18:24:03Z"}
```
so, just upstream operator
```json
{
"level": "error",
"ts": 1716971616.5668037,
"logger": "controller.perconaxtradbcluster-controller",
"msg": "Reconciler error",
"name": "db-cluster",
"namespace": "zuul",
"error": "PodDisruptionBudget for db-cluster-pxc: reconcile pdb: get object: no matches for kind \"PodDisruptionBudget\" in version \"policy/v1beta1\"",
"errorVerbose": "no matches for kind \"PodDisruptionBudget\" in version \"policy/v1beta1\"\nget object\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).createOrUpdate\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:1304\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).reconcilePDB\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:1011\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).deploy\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:662\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:307\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1571\nreconcile 
pdb\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).reconcilePDB\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:1011\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).deploy\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:662\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:307\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1571",
"stacktrace": "sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227"
}
```
@dfajfer:fsfe.org it's super weird that it expects an old version, I've checked the commit SHA and the pdbs are on v1 09:46
@dfajfer:fsfe.org it's a fresh cluster, so I've never had an old operator there 09:47
@dfajfer:fsfe.org https://forums.percona.com/t/i-cant-install-percona-xtradb-cluster-on-minikube/17774/4 10:27
```
In k8s 1.25 PodDisruptionBudget beta API was removed (deprecated in 1.21).
PXC Operator 1.11 was released before k8s 1.25 and we have not changed the API for PDBs. As a result you see this error in the logs
As a workaround please use k8s < 1.25 for now.
```
@dfajfer:fsfe.org at least I'm not mad 10:27
@dfajfer:fsfe.org I think it's really safe to say no one uses it except for myself :p 10:27
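For reference, the policy/v1 form of a PodDisruptionBudget that Kubernetes >= 1.25 expects looks roughly like this; the name matches the error above, but the selector labels are illustrative and not necessarily what the operator generates:
```yaml
apiVersion: policy/v1   # policy/v1beta1 was removed in Kubernetes 1.25
kind: PodDisruptionBudget
metadata:
  name: db-cluster-pxc
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: db-cluster   # example label, not verified
```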
@dfajfer:fsfe.org > <@dfajfer:fsfe.org> I'm clueless as to why the percona operator gives me such errors (doesn't create any pdbs): 11:44
simple update to 1.14 helped. Really did nothing except that and it worked like a charm
-@gerrit:opendev.org- James E. Blair https://matrix.to/#/@jim:acmegating.com proposed: [zuul/zuul] 920690: wip: Pre-filter gerrit events based on triggers https://review.opendev.org/c/zuul/zuul/+/920690 13:59
@agireesh:matrix.org Hi all, our zuul CI/CD is failing while executing the "prepare-workspace-git" role. Before running the job, we are running the below ansible yaml file: 16:55
```yaml
---
- hosts: localhost
  roles:
    - emit-job-header
    - log-inventory
- hosts: all
  roles:
    - add-build-sshkey
    - start-zuul-console
    - ensure-output-dirs
    - validate-host
    - prepare-workspace-git
...
```
But the job is failing with the below error while executing the "prepare-workspace-git" role.
ERROR
2024-05-22 08:55:47.862626 | controller | {
2024-05-22 08:55:47.862706 | controller | "msg": "The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'src_dir'\n\nThe error appears to be in '/var/lib/zuul/builds/af69b18406584a18bc4f0f7d1e0f54d7/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/prepare-workspace-git/tasks/main.yaml': line 21, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# task startup time we incur.\n- name: Set initial repo states in workspace\n ^ here\n"
2024-05-22 08:55:47.862781 | controller | }
@agireesh:matrix.org any idea why this is failing? 16:56
@agireesh:matrix.org Please refer to https://netappopenstacklogserver.s3-us-west-1.amazonaws.com/logs/97/918297/4/upstream-check/manila-tempest-plugin-ontap-dhss/af69b18/job-output.txt for the complete logs 16:56
@clarkb:matrix.org agireesh: it seems your zuul projects list has entries without src_dir attributes. Can you/do you collect your zuul inventory file for the jobs and log them? Those variables are included in that file and you can cross check the values there 16:59
@clarkb:matrix.org the log-inventory role is included in your paste, but I can't navigate your log dirs as I get invalid paths 17:00
@agireesh:matrix.org collect your zuul inventory file for the jobs, how do I get them? I am new to zuul, please help me 17:00
@sylvass:albinvass.se Clark: https://netappopenstacklogserver.s3-us-west-1.amazonaws.com/logs/97/918297/4/upstream-check/manila-tempest-plugin-ontap-dhss/af69b18/zuul-info/inventory.yaml 17:01
@clarkb:matrix.org looks like it is collected if you take a leap of faith 17:01
@clarkb:matrix.org Albin Vass: yup just found that too 17:01
@sylvass:albinvass.se there's no zuul var however 17:02
@clarkb:matrix.org ya that is interesting 17:03
@clarkb:matrix.org `TASK [prepare-workspace-git : Don't filter zuul projects if flag is false]` doesn't fail implying there is a zuul var in the running job but not in that inventory file for some reason? 17:04
@clarkb:matrix.org hrm that inventory file seems to be doctored as IPs and passwords are blobbed out 17:05
@clarkb:matrix.org so maybe whatever is doing the doctoring is removing the zuul var 17:05
@sylvass:albinvass.se yeah I also cannot pipe it to yq because of that 17:06
@clarkb:matrix.org there is an obfuscate-logs step in the job output 17:08
@clarkb:matrix.org agireesh: I suspect that you need to get the inventory.yaml contents before you obfuscate things then use that information to determine why the role is breaking. And/or fix the obfuscate logs step so that it doesn't remove important information like this that can be used for debugging 17:09
@clarkb:matrix.org (not saying you shouldn't blob out secrets, but the project info and the zuul var should be relatively safe and are often useful for debugging zuul behavior) 17:10
@sylvass:albinvass.se Are zuul vars usually set `!unsafe`? 17:11
```
The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object'
```
I know that job input variables are
@sylvass:albinvass.se they are 17:13
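For illustration, an !unsafe-tagged value in a generated inventory looks roughly like this (the variable name and value are invented); Ansible wraps such values in AnsibleUnsafeText and never re-templates them:
```yaml
some_job_var: !unsafe "{{ this }} is treated as literal text, not a template"
```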
@agireesh:matrix.org > <@clarkb:matrix.org> agireesh: I suspect that you need to get the inventory.yaml contents before you obfuscate things then use that information to determine why the role is breaking. And/or fix the obfuscate logs step so that it doesn't remove important information like this that can be used for debugging 17:18
These are the logs we are uploading to AWS. We have logs on our zuul server with the proper IPs and passwords, but before uploading the logs to the AWS bucket we are masking the IP addresses and passwords for security reasons
-@gerrit:opendev.org- James E. Blair https://matrix.to/#/@jim:acmegating.com proposed: [zuul/zuul] 920690: Pre-filter gerrit events based on triggers https://review.opendev.org/c/zuul/zuul/+/920690 17:19
@clarkb:matrix.org agireesh: yes I understand that. What I'm saying is that separate to the IPs and passwords there should be a zuul variable in that file which includes project information and src_dir info for each project. That information is what the task failed to operate properly on and it is completely missing from your upload so we can't debug further 17:19
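A sketch of roughly what those entries look like in an unredacted inventory.yaml; the project name and paths here are illustrative, not taken from this job:
```yaml
all:
  vars:
    zuul:
      projects:
        opendev.org/zuul/zuul-jobs:
          canonical_name: opendev.org/zuul/zuul-jobs
          checkout: master
          name: zuul/zuul-jobs
          short_name: zuul-jobs
          src_dir: src/opendev.org/zuul/zuul-jobs   # what prepare-workspace-git reads
          required: true
```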
@sylvass:albinvass.se in job-output.json you can see the items the set_fact loop worked with, and it builds the `_zuul_projects` structure correctly. However it's suspicious that the logs spit out `controller | skipping: Conditional result was False` multiple times. 17:23
@sylvass:albinvass.se Oh! no that's the task above that logs that 17:23
@agireesh:matrix.org all: 17:26
children:
tempest:
hosts:
controller: null
hosts:
controller:
ansible_connection: ssh
ansible_host: *****
ansible_port: 22
ansible_python_interpreter: auto
ansible_user: zuul
devstack_local_conf: &id001
post-config:
$MANILA_CONF:
DEFAULT:
server_migration_driver_continue_update_interval: 5
ontap3:
backend_availability_zone: manila-zone-1
$NEUTRON_CONF:
DEFAULT:
global_physnet_mtu: '{{ external_bridge_mtu }}'
test-config:
$TEMPEST_CONFIG:
compute:
min_compute_nodes: '{{ groups[''compute''] | default([''controller''])
| length }}'
compute-feature-enabled:
vnc_console: true
service_available:
manila: true
share:
backend_names: ontap,ontap3
backend_replication_type: dr
build_timeout: 300
capability_create_share_from_snapshot_support: true
capability_network_allocation_update_support: true
capability_share_server_multiple_subnet_support: true
capability_snapshot_support: true
default_share_type_name: default
enable_ip_rules_for_protocols: nfs
enable_protocols: nfs
image_password: manila
migration_enabled: false
migration_timeout: 300
multi_backend: true
multitenancy_enabled: true
run_create_share_from_snapshot_in_another_pool_or_az_tests: true
run_driver_assisted_migration_tests: false
run_extend_tests: true
run_host_assisted_migration_tests: false
run_ipv6_tests: false
run_manage_unmanage_snapshot_tests: true
run_manage_unmanage_tests: true
run_migration_with_preserve_snapshots_tests: false
run_mount_snapshot_tests: false
run_multiple_share_replicas_tests: true
run_network_allocation_update_tests: true
run_quota_tests: false
run_replication_tests: true
run_revert_to_snapshot_tests: true
run_share_group_tests: true
run_share_server_migration_tests: false
run_share_server_multiple_subnet_tests: true
run_shrink_tests: true
run_snapshot_tests: true
share_creation_retry_number: 2
share_server_migration_timeout: 1500
suppress_errors_in_cleanup: false
devstack_localrc: &id002
ADMIN_PASSWORD: secretadmin
DATABASE_PASSWORD: secretdatabase
DEBUG_LIBVIRT_COREDUMPS: true
ENABLE_TENANT_TUNNELS: false
ENABLE_TENANT_VLANS: true
ERROR_ON_CLONE: true
FIXED_RANGE: ****/20
FLOATING_RANGE: ******/24
HOST_IP: '{{ hostvars[''controller''][''nodepool''][''private_ipv4''] }}'
IPV4_ADDRS_SAFE_TO_USE: 10.1.0.0/20
LIBVIRT_TYPE: '{{ devstack_libvirt_type | default("kvm") }}'
LOGFILE: /opt/stack/logs/devstacklog.txt
LOG_COLOR: false
MANILA_ALLOW_NAS_SERVER_PORTS_ON_HOST: true
MANILA_CONFIGURE_DEFAULT_TYPES: true
MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS: snapshot_support=True create_share_from_snapshot_support=True
MANILA_ENABLED_BACKENDS: ontap,ontap2,ontap3
MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE: false
MANILA_OPTGROUP_DEFAULT_enabled_share_protocols: NFS
MANILA_OPTGROUP_ontap2_backend_availability_zone: null
MANILA_OPTGROUP_ontap2_driver_handles_share_servers: true
MANILA_OPTGROUP_ontap2_filter_function: 1
MANILA_OPTGROUP_ontap2_netapp_aggregate_name_search_pattern: null
MANILA_OPTGROUP_ontap2_netapp_api_trace_pattern: ^\(?!\(perf\)\).*$
MANILA_OPTGROUP_ontap2_netapp_login: null
MANILA_OPTGROUP_ontap2_netapp_password: null
MANILA_OPTGROUP_ontap2_netapp_root_volume_aggregate: null
MANILA_OPTGROUP_ontap2_netapp_server_hostname: null
MANILA_OPTGROUP_ontap2_netapp_trace_flags: api,method
MANILA_OPTGROUP_ontap2_netapp_vserver_name_template: null
MANILA_OPTGROUP_ontap2_replication_domain: null
MANILA_OPTGROUP_ontap2_reserved_share_percentage: 1
MANILA_OPTGROUP_ontap2_service_instance_password: manila
MANILA_OPTGROUP_ontap2_share_backend_name: ontap2
MANILA_OPTGROUP_ontap2_share_driver: manila.share.drivers.netapp.common.NetAppDriver
MANILA_OPTGROUP_ontap3_backend_availability_zone: null
MANILA_OPTGROUP_ontap3_driver_handles_share_servers: true
MANILA_OPTGROUP_ontap3_filter_function: 1
MANILA_OPTGROUP_ontap3_netapp_aggregate_name_search_pattern: null
MANILA_OPTGROUP_ontap3_netapp_api_trace_pattern: ^\(?!\(perf\)\).*$
MANILA_OPTGROUP_ontap3_netapp_login: null
MANILA_OPTGROUP_ontap3_netapp_password: null
MANILA_OPTGROUP_ontap3_netapp_root_volume_aggregate: null
MANILA_OPTGROUP_ontap3_netapp_server_hostname: null
MANILA_OPTGROUP_ontap3_netapp_trace_flags: api,method
MANILA_OPTGROUP_ontap3_netapp_vserver_name_template: null
MANILA_OPTGROUP_ontap3_replication_domain: null
MANILA_OPTGROUP_ontap3_reserved_share_percentage: 1
MANILA_OPTGROUP_ontap3_service_instance_password: manila
MANILA_OPTGROUP_ontap3_share_backend_name: ontap3
MANILA_OPTGROUP_ontap3_share_driver: manila.share.drivers.netapp.common.NetAppDriver
MANILA_OPTGROUP_ontap_backend_availability_zone: null
MANILA_OPTGROUP_ontap_driver_handles_share_servers: true
MANILA_OPTGROUP_ontap_filter_function: 1
MANILA_OPTGROUP_ontap_netapp_api_trace_pattern: ^\(?!\(perf\)\).*$
MANILA_OPTGROUP_ontap_netapp_login: null
MANILA_OPTGROUP_ontap_netapp_password: null
MANILA_OPTGROUP_ontap_netapp_root_volume_aggregate: null
MANILA_OPTGROUP_ontap_netapp_server_hostname: null
MANILA_OPTGROUP_ontap_netapp_trace_flags: api,method
MANILA_OPTGROUP_ontap_netapp_vserver_name_template: null
MANILA_OPTGROUP_ontap_replication_domain: null
MANILA_OPTGROUP_ontap_reserved_share_percentage: 1
MANILA_OPTGROUP_ontap_service_instance_password: manila
MANILA_OPTGROUP_ontap_share_backend_name: ontap
MANILA_OPTGROUP_ontap_share_driver: manila.share.drivers.netapp.common.NetAppDriver
MANILA_REPLICA_STATE_UPDATE_INTERVAL: 10
MANILA_SERVER_MIGRATION_PERIOD_TASK_INTERVAL: 10
MANILA_SHARE_MIGRATION_PERIOD_TASK_INTERVAL: 1
MANILA_USE_DOWNGRADE_MIGRATIONS: true
MANILA_USE_SCHEDULER_CREATING_SHARE_FROM_SNAPSHOT: true
ML2_VLAN_RANGES: null
NETWORK_GATEWAY: 10.1.0.1
NOVA_VNC_ENABLED: true
NOVNC_FROM_PACKAGE: true
OVN_DBS_LOG_LEVEL: dbg
PHYSICAL_NETWORK: physnet1
PUBLIC_BRIDGE_MTU: '{{ external_bridge_mtu }}'
PUBLIC_NETWORK_GATEWAY: *****
Q_ML2_TENANT_NETWORK_TYPE: vlan
Q_USE_PROVIDERNET_FOR_PUBLIC: false
RABBIT_PASSWORD: secretrabbit
SERVICE_HOST: '{{ hostvars[''controller''][''nodepool''][''private_ipv4'']
}}'
SERVICE_PASSWORD: secretservice
SWIFT_HASH: 1234123412341234
SWIFT_REPLICAS: 1
SWIFT_START_ALL_SERVICES: false
TEMPEST_USE_TEST_ACCOUNTS: true
USE_PYTHON3: true
VERBOSE: true
VERBOSE_NO_TIMESTAMP: true
devstack_plugins: &id003
manila: https://opendev.org/openstack/manila
devstack_services: &id004
base: false
c-api: true
c-bak: true
c-sch: true
c-vol: true
cinder: false
dstat: false
etcd3: true
g-api: true
horizon: false
key: true
memory_tracker: true
mysql: true
n-api: true
n-api-meta: true
n-cond: true
n-cpu: true
n-novnc: true
n-sch: true
ovn-controller: true
ovn-northd: true
ovs-vswitchd: true
ovsdb-server: true
placement-api: true
q-ovn-metadata-agent: true
q-svc: true
rabbit: true
s-account: false
s-container: false
s-object: false
s-proxy: false
tempest: true
tls-proxy: true
extensions_to_txt: &id005
auto: true
conf: true
localrc: true
log: true
stackenv: true
yaml: true
yml: true
nodepool:
az: nova
cloud: devstack-22
external_id: 8de1445d-91cb-4bfb-a4a6-c36e0dac2320
host_id: 4a3e6309873aa0022094b86e7a3173e19e83693fb92891997bec521e
interface_ip: ****
label: ubuntu-jammy-functional
private_ipv4: ******
private_ipv6: null
provider: devstack-22
public_ipv4: ******
public_ipv6: ''
region: null
tempest_concurrency: 6
tempest_plugins: &id006
- manila-tempest-plugin
tempest_test_regex: manila_tempest_tests.tests.api
test_results_stage_name: test_results
tox_envlist: all
zuul_copy_output: &id007
/etc/ceph: logs
/etc/glusterfs/glusterd.vol: logs
/etc/libvirt: logs
/etc/resolv.conf: logs
/etc/sudoers: logs
/etc/sudoers.d: logs
/var/log/ceph: logs
/var/log/glusterfs: logs
/var/log/libvirt: logs
/var/log/mysql: logs
/var/log/openvswitch: logs
/var/log/postgresql: logs
/var/log/rabbitmq: logs
/var/log/unbound.log: logs
'{{ devstack_base_dir }}/tempest/etc/accounts.yaml': logs
'{{ devstack_base_dir }}/tempest/etc/tempest.conf': logs
'{{ devstack_base_dir }}/tempest/tempest.log': logs
'{{ devstack_conf_dir }}/.localrc.auto': logs
'{{ devstack_conf_dir }}/.stackenv': logs
'{{ devstack_conf_dir }}/local.conf': logs
'{{ devstack_conf_dir }}/localrc': logs
'{{ devstack_full_log}}': logs
'{{ devstack_log_dir }}/devstacklog.txt': logs
'{{ devstack_log_dir }}/devstacklog.txt.summary': logs
'{{ devstack_log_dir }}/dstat-csv.log': logs
'{{ devstack_log_dir }}/tcpdump.pcap': logs
'{{ devstack_log_dir }}/worlddump-latest.txt': logs
'{{ stage_dir }}/apache': logs
'{{ stage_dir }}/apache_config': logs
'{{ stage_dir }}/audit.log': logs
'{{ stage_dir }}/core': logs
'{{ stage_dir }}/deprecations.log': logs
'{{ stage_dir }}/df.txt': logs
'{{ stage_dir }}/dpkg-l.txt': logs
'{{ stage_dir }}/etc': logs
'{{ stage_dir }}/iptables.txt': logs
'{{ stage_dir }}/listen53.txt': logs
'{{ stage_dir }}/performance.json': logs
'{{ stage_dir }}/pip2-freeze.txt': logs
'{{ stage_dir }}/pip3-freeze.txt': logs
'{{ stage_dir }}/rpm-qa.txt': logs
'{{ stage_dir }}/stackviz': logs
'{{ stage_dir }}/verify_tempest_conf.log': logs
'{{ stage_dir }}/{{ test_results_stage_name }}.html': logs
'{{ stage_dir }}/{{ test_results_stage_name }}.subunit': logs
vars:
devstack_local_conf: *id001
devstack_localrc: *id002
devstack_plugins: *id003
devstack_services: *id004
extensions_to_txt: *id005
tempest_concurrency: 6
tempest_plugins: *id006
tempest_test_regex: manila_tempest_tests.tests.api
test_results_stage_name: test_results
tox_envlist: all
zuul_copy_output: *id007
@agireesh:matrix.org this one I got from my zuul server logs 17:26
@clarkb:matrix.org agireesh: that is the same as the one being served and it has already been obfuscated. Maybe you are obfuscating in place so the file is updated and doesn't have the old info 17:29
@agireesh:matrix.org I just cat the "inventory.yaml" file and before pasting I just masked the IP addresses, I didn't make any other changes to this file 17:35
@clarkb:matrix.org ok there doesn't appear to be a zuul dict in that paste (though it's a bit hard to read because the indentation was removed) 17:40
@sylvass:albinvass.se agireesh: it looks like the obfuscation happens in the job you linked 17:44
@agireesh:matrix.org zuul-info]# cat inventory.yaml | grep -i src_dir 17:47
I am searching for the "src_dir" string in the inventory.yaml file on my zuul server and it is not giving any output
@clarkb:matrix.org agireesh: https://zuul.opendev.org/t/openstack/build/8f99ad4b2c43491b86161bbe02825a98/log/zuul-info/inventory.yaml#350-360 is an example from an opendev job 17:48
@agireesh:matrix.org that means this variable or string is not present in inventory.yaml 17:48
@clarkb:matrix.org agireesh: when we look at the rest of your log we can see tasks running successfully that depend on that information being present though. So it seems like it is getting lost somewhere 17:50
@clarkb:matrix.org agireesh: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/prepare-workspace-git/tasks/main.yaml#L13-L16 specifically appears to run 17:50
@agireesh:matrix.org one more point I wanted to mention here: zuul is deployed using software factory and I am using version 3.6 17:54
@clarkb:matrix.org one option may be to modify your base job to include a debug task that records the value of zuul.project and zuul.projects 17:54
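A minimal sketch of such a debug task, assuming it is added to a playbook you control (for example a base job pre-run); the play target and task names are illustrative:
```yaml
- hosts: localhost
  tasks:
    - name: Record zuul.project for debugging
      debug:
        var: zuul.project
    - name: Record zuul.projects for debugging
      debug:
        var: zuul.projects
```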
@sylvass:albinvass.se I'm still hung up on the AnsibleUnsafeText part, that gives me the feeling that something is setting zj_project by registering an output somewhere 17:55
@clarkb:matrix.org Albin Vass: yes I think the variable is being shadowed or overwritten 17:55
@clarkb:matrix.org note software factory 3.6 seems to map to zuul 4.6 which is also quite old 17:56
@sylvass:albinvass.se Ansible doesn't mention the priority of loop_var, I'd expect it to be pretty high up since it's task-local, but registered vars are very high so something may shadow it indeed: 17:59
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
@agireesh:matrix.org https://netappopenstacklogserver.s3.us-west-1.amazonaws.com/index.html#logs/09/911709/4/upstream-check/manila-tempest-plugin-ontap-no-dhss/83cf9b6/, also please refer to these logs from when the job was passing a few months back 18:00
@agireesh:matrix.org LOOP [prepare-workspace-git : Set initial repo states in workspace] 18:01
2024-03-19 11:34:00.421029 | controller | + [ -d /opt/git/opendev.org/openstack/cinder ]
2024-03-19 11:34:00.421410 | controller | + git clone --bare /opt/git/opendev.org/openstack/cinder /home/zuul/src/opendev.org/openstack/cinder/.git
2024-03-19 11:34:00.421505 | controller | Cloning into bare repository '/home/zuul/src/opendev.org/openstack/cinder/.git'...
2024-03-19 11:34:00.421591 | controller | done.
@clarkb:matrix.org based on the timing I wonder if https://review.opendev.org/c/zuul/zuul-jobs/+/887917 broke things somehow 18:03
@clarkb:matrix.org perhaps `with_items: "{{ zuul.projects.values() | list }}"` was required for older ansible versions? 18:04
@agireesh:matrix.org Thank you guys for looking into this issue, it is late here, I am going to bed now. Please let me know if you guys found the root cause and solution for this issue 18:04
@clarkb:matrix.org I think you can try to use a version of zuul-jobs that doesn't include that change above 18:05
@clarkb:matrix.org but otherwise debugging really does probably need you to find the values of those variables 18:05
@agireesh:matrix.org how to use the older version of zuul-jobs 18:05
@clarkb:matrix.org but ya I wonder if old ansible needed the explicit conversion to a list otherwise it was iterating over something else (like strings?) 18:06
@agireesh:matrix.org 18:06
```yaml
---
- hosts: localhost
  roles:
    - emit-job-header
    - log-inventory
- hosts: all
  roles:
    - add-build-sshkey
    - start-zuul-console
    - ensure-output-dirs
    - validate-host
    - prepare-workspace-git
...
```
@agireesh:matrix.org can I make the change in this file to use the older version? 18:06
@clarkb:matrix.org agireesh: you'll probably need to fork the repo locally and then point zuul at that version of the repo rather than the upstream repo (for opendev.org/zuul/zuul-jobs) 18:06
@agireesh:matrix.org jobs.yaml file? 18:08
@agireesh:matrix.org 18:08
```yaml
---
- job:
    name: base
    parent: null
    description: The base job for the NetApp OpenStack team's CI/CD system.
    abstract: true
    secrets:
      - aws
      - logserver
      - obfuscation
    nodeset: ubuntu-focal
    attempts: 1
    pre-run: playbooks/base/pre.yaml
    post-run: playbooks/base/post.yaml
    roles:
      - zuul: opendev.org/zuul/zuul-jobs
      - zuul: services.openstack.netapp.com/netapp-jobs
...
```
@agireesh:matrix.org zuul: opendev.org/zuul/zuul-jobs, this one right? 18:08
@clarkb:matrix.org yes you would need to point it at a replacement. Looks like you already have a netapp-jobs repo. You could make a copy of the role there and give it a different name and also revert that change 18:10
@clarkb:matrix.org the ansible loop docs do say that with_items does some amount of flattening. I wonder if old ansible flattened into a single string 18:10
@clarkb:matrix.org but new ansible doesn't and that explains the behavior difference 18:11
@agireesh:matrix.org sure, will try this tomorrow and let you know 18:12
@fungicide:matrix.org any idea how far back in ansible that starts breaking? just wondering if we broke things for a version of ansible zuul is still intending to support for now 18:13
@clarkb:matrix.org no clue, this is just a hunch. The current docs also say with_items flattens but they must've realized flattening into a single string instead of a list of strings is a problem and stopped, if this hunch is correct 18:14
@clarkb:matrix.org Quickly checking the code for LookupBase._flatten() in ansible, which appears to be what with_items calls, I don't see it changing recently. Maybe the .values() return changed instead from something to list 18:21
@clarkb:matrix.org if it does turn out that this is the issue then we can probably add the `| list` filter conversion back to that with_items line? 18:27
@sylvass:albinvass.se Clark: agireesh: yeah ansible 2.9 has different logic for with_items: 18:40
```yaml
- hosts: localhost
  gather_facts: false
  vars:
    zuul:
      a:
        src_dir: path/to/project/a
      b:
        src_dir: path/to/project/b
  tasks:
    - debug:
        msg: "{{ item|type_debug }}"
      with_items: "{{ zuul.values() }}"
    - debug:
        msg: "{{ (zuul.values())|type_debug }}"
```
First run is with ansible 2.9.27 while the second is using ansible 2.16.5
```
zuul-jobs on  master [?] via 🐍 v3.11.9 (venv)
❯ ansible-playbook test.yaml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'
PLAY [localhost] *************************************************************************************************
TASK [debug] *****************************************************************************************************
ok: [localhost] => (item=dict_values([{'src_dir': 'path/to/project/a'}, {'src_dir': 'path/to/project/b'}])) => {
"msg": "AnsibleUnsafeText"
}
TASK [debug] *****************************************************************************************************
ok: [localhost] => {
"msg": "dict_values"
}
PLAY RECAP *******************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
zuul-jobs on  master [?] via 🐍 v3.11.9 (venv)
❯ , ansible-playbook test.yaml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'
PLAY [localhost] *************************************************************************************************
TASK [debug] *****************************************************************************************************
ok: [localhost] => (item={'src_dir': 'path/to/project/a'}) => {
"msg": "dict"
}
ok: [localhost] => (item={'src_dir': 'path/to/project/b'}) => {
"msg": "dict"
}
TASK [debug] *****************************************************************************************************
ok: [localhost] => {
"msg": "dict_values"
}
PLAY RECAP *******************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
@clarkb:matrix.org Albin Vass: if you still have that test env set up can you add the |list and see if the behavior is consistent across them? If so that seems like a reasonable thing to put back in 18:42
@sylvass:albinvass.se Clark: just did and it works 18:42
@sylvass:albinvass.se pushing a fix in a bit 18:42
@clarkb:matrix.org thanks 18:42
@sylvass:albinvass.se ah the difference is that ansible 2.9 returns `dict_values([{'src_dir': 'path/to/project/a'}, {'src_dir': 'path/to/project/b'}])` for `values()` while more recent ansible returns a list. That's why the explicit |list is needed 18:46
@clarkb:matrix.org And flatten only special cases list and tuple. Dict_values get flattened further 18:47
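A sketch of the kind of fix being discussed, adding an explicit list conversion so both old and new Ansible iterate over the individual project dicts; the task name and loop variable here are illustrative, not a copy of the actual role task:
```yaml
- name: Iterate over Zuul projects
  debug:
    msg: "{{ zj_project.src_dir }}"
  # Without "| list", ansible 2.9's with_items flattens the dict_values object
  # into a single string, so the items no longer have a src_dir attribute.
  with_items: "{{ zuul.projects.values() | list }}"
  loop_control:
    loop_var: zj_project
```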
@jim:acmegating.com we don't support old versions of ansible in zuul-jobs, so if you're not planning on continuously updating zuul, you should definitely not continuously update zuul-jobs. ansible 2.9 hasn't been supported by zuul in a while. 18:52
@jim:acmegating.com we don't generally intentionally break stuff, but i don't want to give the impression that this is a regression that we would consider needs to be fixed. 18:53
@jim:acmegating.com pinning zuul-jobs until you can upgrade zuul and zuul-jobs would be a good course of action. 18:54
-@gerrit:opendev.org- Albin Vass proposed: [zuul/zuul-jobs] 920769: prepare-workspace-git: fix loop logic for older ansible versions https://review.opendev.org/c/zuul/zuul-jobs/+/920769 18:58
@sylvass:albinvass.se corvus: I agree 18:59
@clarkb:matrix.org I think my main concern would be that ansible's behavior here appears to be undefined 19:05
@clarkb:matrix.org and thus could revert or change again in the future. Being explicit about wanting a list is good belts and suspenders against future problems too 19:05
@jim:acmegating.com hrm, well if that's the case, we should probably take steps to avoid iterating over dict values anywhere? that sounds cumbersome. 19:09
@clarkb:matrix.org yes, I suspect that ansible might tell us we are doing it wrong since with_* is deprecated iirc and I think the inconsistent flattening behavior is one reason for that 19:09
@clarkb:matrix.org iirc they want you to use loops and lookup filters 19:10
@jim:acmegating.com i rather thought they reversed that 19:10
@clarkb:matrix.org https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_loops.html#migrating-from-with-x-to-loop ya the language there does seem less harsh than I remember it being in the past 19:11
@clarkb:matrix.org more of a recommendation rather than an eventual requirement 19:11
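For reference, the loop form those migration docs point at would look something like this for the same iteration; the explicit list conversion is still needed because loop requires a list (task name is illustrative):
```yaml
- name: Iterate over Zuul projects with loop instead of with_items
  debug:
    msg: "{{ item.src_dir }}"
  loop: "{{ zuul.projects.values() | list }}"
```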
@jim:acmegating.com i agree that the behavior could change, but also, i don't think we currently have an expectation that it will change, and ansible makes unexpected behavior changes from time to time anyway, so nothing is really future-proof. maybe it's a form of nihilistic optimism on my part, but it seems that the easiest thing to do is keep the status quo and not try to anticipate the future. :) 19:14
@clarkb:matrix.org should we add a note that we know the role isn't compatible with older ansible instead? The other reason to try and address it is to avoid questions in the first place. A hint about the behavior change would at least get us in the right direction quickly 19:15
@sylvass:albinvass.se another thing we could do is to tag zuul-jobs whenever we remove an ansible version from zuul 19:16
@sylvass:albinvass.se to make it easier to use opendev.org/zuul/zuul-jobs, but pin it if you're on an older version of zuul 19:17
@sylvass:albinvass.se of course that only applies to roles and not jobs I suppose, for jobs to work we'd need to make a branch instead 19:18
@jim:acmegating.com i'm happy to entertain docs updates to make it more clear that zuul-jobs only supports, well, the current version of zuul if that is unclear 19:19
@jim:acmegating.com i don't think we should tag or branch zuul-jobs; anyone running zuul by definition has the capability of hosting their own zuul-jobs if they want to pin it (and tagging or branching only really helps them identify what version to use in their local copy) 19:21
@clarkb:matrix.org ya I think the two concerns I have are avoiding the need to do debugging of difficult to debug issues like this (so a comment and/or docs would be good) and then I'm less concerned about changing to accommodate new ansible as tests should fail and with comments the issue should be clear if this particular values() return type changes again 19:42
@clarkb:matrix.org we've just debugged a new git clone permissions problem in prepare-workspace-git if using a git repo cache over in OpenDev land 22:36
@clarkb:matrix.org the gist of the problem is that if you do a local clone as user foo against a repo owned by user bar it complains about dubious permissions. We're currently working to address that in OpenDev by changing ownership of the git repo cache from root:root to zuul:zuul which should match the user that runs prepare-workspace-git. I thought I'd mention that here in case others start to see the problem and possibly have other good ideas for dealing with it 22:37
@clarkb:matrix.org we haven't confirmed our fix works yet as the new image is currently building but we should know soonish 22:37
@jim:acmegating.com we also discussed the idea of having prepare-workspace-git optionally mark the cache directories as safe; it's unclear how good of an idea that is at this point. 23:27
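In git terms, "mark the cache directories as safe" would mean adding safe.directory entries for the cached repos. A hedged sketch as an ad-hoc task; the cache path is only an example, and whether the role should do this at all is exactly the open question above:
```yaml
- name: Mark a cached repo as a safe git directory
  # Adds a git safe.directory entry so clones from a repo owned by another
  # user are not rejected with "dubious ownership" errors.
  command: git config --global --add safe.directory /opt/git/opendev.org/openstack/cinder
```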

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!