
HYPERFLEET-569 - feat: config loader#45

Merged
openshift-merge-bot[bot] merged 1 commit into openshift-hyperfleet:main from rh-amarin:restart-config-loading
Feb 12, 2026

Conversation

@rh-amarin rh-amarin (Contributor) commented Feb 11, 2026

https://issues.redhat.com/browse/HYPERFLEET-569

One deviation from the proposed adapter-business-logic-template-MVP.yaml:

  • The resources.transport.maestro.manifestWork is now a "real" ManifestWork, meaning that the spec.workload.manifests entries are k8s manifests directly.

    Previously:

    manifests:
      - name: "agentNamespace"
        manifest:
          apiVersion: v1
          kind: Namespace

    Now:

    manifests:
      - apiVersion: v1
        kind: Namespace
  • Adds Maestro as a transport backend alongside the existing Kubernetes transport, enabling the adapter
    to manage resources on spoke clusters via ManifestWork/CloudEvents

    • Extends the config loader to support per-resource transport configuration (transport.client: maestro)
      with ManifestWork templates, target cluster templating, and inline/ref manifest sources
    • Reorganizes charts/examples/ into kubernetes/ and maestro/ subdirectories, with guides and
      complete example configs for each transport mode
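
Putting those pieces together, a per-resource transport block might look roughly like this. This is a sketch assembled from the field names mentioned in this PR (transport.client, targetCluster, manifestWork.ref); the exact schema lives in the repository and may differ in detail:

```yaml
resources:
  - name: "agentNamespaceManifestWork"
    transport:
      client: "maestro"              # or "kubernetes" (the default)
      maestro:
        targetCluster: "{{ .placementClusterName }}"
        manifestWork:
          ref: "/etc/adapter/manifestwork.yaml"   # file ref; inline is also supported
```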

    Details

    Maestro transport integration (cmd/adapter/main.go, internal/executor/, internal/maestro_client/)

    • main.go now selects transport client at startup based on spec.clients.maestro presence: if
      configured, creates a Maestro gRPC/CloudEvents client; otherwise falls back to the direct Kubernetes
      client
    • resource_executor.go builds a maestro_client.TransportContext (consumer name + ManifestWork template)
      for Maestro-bound resources and passes nil Manifest when the ManifestWork template already contains
      inline workload manifests, so buildManifestWork uses them as-is
    • client.go adds nil-safety for r.Manifest in ApplyResources result building and removes the
      now-unnecessary normalizeTemplateManifests function
    • client_test.go adds comprehensive unit tests for buildManifestWork (explicit resources, nil
      manifest/template passthrough, empty resources, immutability, metadata preservation, mixed
      nil/explicit, multiple bare manifests) and removes all wrapper-format normalization tests
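
The startup selection described above can be sketched as follows. This is a minimal illustration of the "maestro if configured, otherwise Kubernetes" decision; every type and constructor name below is illustrative, not the repository's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical config shapes mirroring spec.clients from the PR description.
type MaestroClientConfig struct {
	ServerAddr string
}

type ClientsConfig struct {
	Maestro *MaestroClientConfig // presence of this block selects the Maestro transport
}

// TransportClient is a stand-in for the adapter's transport client interface.
type TransportClient interface {
	Name() string
}

type maestroClient struct{ addr string }

func (m *maestroClient) Name() string { return "maestro" }

type kubernetesClient struct{}

func (k *kubernetesClient) Name() string { return "kubernetes" }

// selectTransportClient mirrors main.go's startup decision: if
// spec.clients.maestro is configured, build a Maestro client; otherwise
// fall back to the direct Kubernetes client.
func selectTransportClient(clients ClientsConfig) (TransportClient, error) {
	if clients.Maestro != nil {
		if clients.Maestro.ServerAddr == "" {
			return nil, errors.New("maestro client configured without server address")
		}
		return &maestroClient{addr: clients.Maestro.ServerAddr}, nil
	}
	return &kubernetesClient{}, nil
}

func main() {
	c, _ := selectTransportClient(ClientsConfig{Maestro: &MaestroClientConfig{ServerAddr: "grpc://maestro:8090"}})
	fmt.Println(c.Name())
	c, _ = selectTransportClient(ClientsConfig{})
	fmt.Println(c.Name())
}
```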

    Config loader extensions (internal/config_loader/)

    • New types: MaestroClientConfig, MaestroTransportConfig, TransportConfig with ref support for
      ManifestWork templates
    • New accessors on Resource: GetTransportClient(), IsMaestroTransport(), HasManifestWorkRef(),
      GetManifestWorkRef()
    • New validation rules: maestro client fields, transport config per resource, manifestWork presence for
      maestro resources
    • New constants: TransportClientKubernetes, TransportClientMaestro
    • Extensive test coverage for all new validation paths and accessor methods
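
A rough shape for the new accessors is sketched below. Only the accessor and constant names come from the PR description; the struct layout and default-to-kubernetes behavior are assumptions for illustration:

```go
package main

import "fmt"

// Constants named in the PR description.
const (
	TransportClientKubernetes = "kubernetes"
	TransportClientMaestro    = "maestro"
)

// Hypothetical shapes approximating the new config_loader types.
type ManifestWorkConfig struct {
	Ref string // path to a ManifestWork template file
}

type MaestroTransportConfig struct {
	TargetCluster string
	ManifestWork  *ManifestWorkConfig
}

type TransportConfig struct {
	Client  string
	Maestro *MaestroTransportConfig
}

type Resource struct {
	Name      string
	Transport *TransportConfig
}

// GetTransportClient assumes kubernetes is the default when no transport block is set.
func (r *Resource) GetTransportClient() string {
	if r.Transport == nil || r.Transport.Client == "" {
		return TransportClientKubernetes
	}
	return r.Transport.Client
}

func (r *Resource) IsMaestroTransport() bool {
	return r.GetTransportClient() == TransportClientMaestro
}

func (r *Resource) HasManifestWorkRef() bool {
	return r.Transport != nil && r.Transport.Maestro != nil &&
		r.Transport.Maestro.ManifestWork != nil && r.Transport.Maestro.ManifestWork.Ref != ""
}

func (r *Resource) GetManifestWorkRef() string {
	if !r.HasManifestWorkRef() {
		return ""
	}
	return r.Transport.Maestro.ManifestWork.Ref
}

func main() {
	r := &Resource{
		Name: "agentNamespaceManifestWork",
		Transport: &TransportConfig{
			Client: TransportClientMaestro,
			Maestro: &MaestroTransportConfig{
				TargetCluster: "{{ .placementClusterName }}",
				ManifestWork:  &ManifestWorkConfig{Ref: "/etc/adapter/manifestwork.yaml"},
			},
		},
	}
	fmt.Println(r.IsMaestroTransport(), r.GetManifestWorkRef())
}
```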

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for Maestro as an alternative transport client option alongside Kubernetes.
    • Introduced per-resource transport configuration enabling flexible deployment strategies.
    • Added ManifestWork reference support for resource deployment.
  • Documentation

    • Added comprehensive example guides for Kubernetes and Maestro adapter deployments.
    • Included configuration templates and YAML examples for both platforms.

@coderabbitai coderabbitai bot commented Feb 11, 2026

Walkthrough

This pull request introduces Maestro transport support to the adapter framework. It adds new configuration types and validators to handle per-resource transport selection (Kubernetes or Maestro), with Maestro-specific support for ManifestWork references. The adapter initialization logic is updated to conditionally construct either a Maestro or Kubernetes transport client based on configuration. The resource executor is modified to handle Maestro transport paths during resource application, including manifest preparation and target cluster rendering. Supporting accessors and file-loading logic are added to the config loader. Two comprehensive documentation examples and configuration templates are provided for both Kubernetes and Maestro deployment scenarios.

Sequence Diagram(s)

sequenceDiagram
    participant App as Application
    participant Loader as Config Loader
    participant Validator as Validator
    participant Main as Main/Executor
    participant TransportClient as Transport Client<br/>(K8s or Maestro)
    participant ResExecutor as Resource Executor
    
    App->>Loader: Load AdapterTaskConfig
    Loader->>Loader: Parse YAML + Transport Config
    Loader->>Loader: Load ManifestWork refs from files
    Loader-->>Validator: Pass loaded config
    
    Validator->>Validator: Validate transport config
    Validator->>Validator: Check ManifestWork refs exist
    Validator-->>Main: Config validated
    
    Main->>Main: Check Maestro configured?
    alt Maestro Transport
        Main->>TransportClient: Create Maestro Client
    else Kubernetes Transport
        Main->>TransportClient: Create Kubernetes Client
    end
    
    Main->>ResExecutor: Apply Resources with client
    
    ResExecutor->>ResExecutor: Determine transport type
    alt Maestro Transport
        ResExecutor->>ResExecutor: Build ManifestWork from config
        ResExecutor->>ResExecutor: Render TargetCluster template
        ResExecutor->>TransportClient: ApplyResources(ManifestWork)
    else Kubernetes Transport
        ResExecutor->>ResExecutor: Extract manifest
        ResExecutor->>TransportClient: ApplyResources(Manifest)
    end
    
    TransportClient-->>ResExecutor: Result
    ResExecutor-->>Main: Status

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


Suggested labels

lgtm, approved

Suggested reviewers

  • yingzhanredhat
  • jsell-rh
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 41.94%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): check skipped; CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): the title clearly identifies the main feature addition (Maestro transport support and config loader extensions) with the associated issue ticket.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/executor/resource_executor.go (1)

110-121: ⚠️ Potential issue | 🟠 Major

Pass Maestro transport context into discovery calls.

GetResource/DiscoverResources are invoked with target=nil. Maestro operations require a ConsumerName for routing, so discovery can fail or hit the wrong target. Build the transport target (ConsumerName) before discovery and pass it through for both discovery and the by-name lookup path.

Suggested plumbing for Maestro discovery target
@@
-	// Discover existing resource
-	var existingResource *unstructured.Unstructured
-	var err error
-	if resource.Discovery != nil {
-		// Use Discovery config to find existing resource (e.g., by label selector)
-		re.log.Debugf(ctx, "Discovering existing resource using discovery config...")
-		existingResource, err = re.discoverExistingResource(ctx, gvk, resource.Discovery, execCtx, transportClient)
-	} else {
-		// No Discovery config - lookup by name from manifest
-		re.log.Debugf(ctx, "Looking up existing resource by name...")
-		existingResource, err = transportClient.GetResource(ctx, gvk, resourceManifest.GetNamespace(), resourceManifest.GetName(), nil)
-	}
+	// Build transport target early for maestro (ConsumerName needed for discovery)
+	var transportTarget transport_client.TransportContext
+	if resource.IsMaestroTransport() && resource.Transport.Maestro != nil {
+		targetCluster, tplErr := renderTemplate(resource.Transport.Maestro.TargetCluster, execCtx.Params)
+		if tplErr != nil {
+			result.Status = StatusFailed
+			result.Error = tplErr
+			return result, NewExecutorError(PhaseResources, resource.Name, "failed to render targetCluster template", tplErr)
+		}
+		transportTarget = &maestro_client.TransportContext{ConsumerName: targetCluster}
+	}
+
+	// Discover existing resource
+	var existingResource *unstructured.Unstructured
+	var err error
+	if resource.Discovery != nil {
+		re.log.Debugf(ctx, "Discovering existing resource using discovery config...")
+		existingResource, err = re.discoverExistingResource(ctx, gvk, resource.Discovery, execCtx, transportClient, transportTarget)
+	} else {
+		re.log.Debugf(ctx, "Looking up existing resource by name...")
+		existingResource, err = transportClient.GetResource(ctx, gvk, resourceManifest.GetNamespace(), resourceManifest.GetName(), transportTarget)
+	}
@@
-func (re *ResourceExecutor) discoverExistingResource(ctx context.Context, gvk schema.GroupVersionKind, discovery *config_loader.DiscoveryConfig, execCtx *ExecutionContext, client transport_client.TransportClient) (*unstructured.Unstructured, error) {
+func (re *ResourceExecutor) discoverExistingResource(ctx context.Context, gvk schema.GroupVersionKind, discovery *config_loader.DiscoveryConfig, execCtx *ExecutionContext, client transport_client.TransportClient, target transport_client.TransportContext) (*unstructured.Unstructured, error) {
@@
-		return client.GetResource(ctx, gvk, namespace, name, nil)
+		return client.GetResource(ctx, gvk, namespace, name, target)
@@
-		list, err := client.DiscoverResources(ctx, gvk, discoveryConfig, nil)
+		list, err := client.DiscoverResources(ctx, gvk, discoveryConfig, target)

Also applies to: 318-345

🤖 Fix all issues with AI agents
In `@charts/examples/kubernetes/README.md`:
- Line 146: Fix the typo in the Helm install example: replace the malformed flag
value for broker.googlepubsub.subscriptionId (currently written as "--set
broker.googlepubsub.subscriptionId=<gcp-subscription?") with a properly closed
placeholder, e.g. "--set broker.googlepubsub.subscriptionId=<gcp-subscription>",
removing the extraneous "?" and adding the missing ">" so the subscriptionId
placeholder is valid.

In `@charts/examples/maestro/adapter-task-config.yaml`:
- Around line 72-100: Remove the unsupported nested discovery under
transport.maestro (delete the transport.maestro.discovery block) and align
discovery with the ManifestWork: either switch the top-level discovery from
bySelectors to byName using the ManifestWork resource name (e.g., set
discovery.byName to the ManifestWork name used by manifestWork.ref for the
agentNamespaceManifestWork entry) or update the top-level discovery selectors to
match the ManifestWork labels produced by the manifestWork template; ensure
references to transport.maestro, manifestWork.ref, discovery.bySelectors and
discovery.byName are updated accordingly.
- Around line 115-170: The CEL paths reference non-existent nested fields (e.g.,
resources.?agentNamespaceManifestWork.agentNamespace and
resources.?clusterNamespace); update all expressions to use the actual resource
entry names (e.g., resources.?agentNamespaceManifestWork) or add distinct
resource entries for the Namespace/ConfigMap if you need their individual
statuses; specifically fix expressions in the "Applied", "Available",
"data.namespace.name/status", and any observed checks to reference
resources.?agentNamespaceManifestWork (or the newly defined resource keys) and
remove the .agentNamespace/.clusterNamespace suffixes so the CEL will evaluate
correctly.

In `@charts/examples/maestro/adapter-task-resource-manifestwork.yaml`:
- Around line 75-97: The ConfigMap's metadata.namespace currently uses "{{
.clusterId }}" which can differ from the Namespace's metadata.name "{{
.clusterId | lower }}"; update the ConfigMap so its metadata.namespace uses the
same normalization (use "{{ .clusterId | lower }}") to ensure it targets the
created Namespace; check the ConfigMap block (kind: ConfigMap,
metadata.namespace) and the Namespace block (kind: Namespace, metadata.name) and
make the namespace template identical to the Namespace name template.

In `@charts/examples/maestro/adapter-task-resource-namespace.yaml`:
- Line 9: The generation annotation uses inconsistent template variable names
across examples: one uses {{ .generationId }} while others use {{
.generationSpec }}; update the template in adapter-task-resource-namespace.yaml
(the hyperfleet.io/generation annotation) to match the canonical variable name
used in the Kubernetes examples (use {{ .generationSpec }} or choose one
canonical name) and update any example files or README references so all
examples consistently reference the same symbol (generationSpec or generationId)
derived from event.generation to avoid confusion.

In `@charts/examples/maestro/README.md`:
- Around line 141-148: Update the example helm install command: change the
values file reference to charts/examples/maestro/values.yaml (use -f
charts/examples/maestro/values.yaml) and remove the stray question mark from the
broker.googlepubsub.subscriptionId placeholder so it reads --set
broker.googlepubsub.subscriptionId=<gcp-subscription>; keep the rest of the
--set placeholders as-is and ensure the command still uses helm install <name>
./charts with the image.registry and Google Pub/Sub settings.

In `@cmd/adapter/main.go`:
- Around line 280-305: The ResourceExecutor is currently forced to use a single
global transport client set via execBuilder.WithTransportClient(...) in main
(created by createMaestroClient / createK8sClient), which ignores per-resource
transport overrides from resource.GetTransportClient(); either enforce a
validation rule that all resources must match the adapter-level client, or
update the executor to pick clients per resource: instantiate and retain both
transport clients (call createMaestroClient and createK8sClient when their
configs are present), pass a transport-client map or selector into
ResourceExecutor (or change execBuilder to accept a client factory), and update
ResourceExecutor to call resource.GetTransportClient() to select the correct
client instance when executing each resource.
- Around line 513-527: The createMaestroClient function currently only wires a
few fields; update it to populate all remaining MaestroClientConfig fields into
maestro_client.Config: parse maestroConfig.Timeout into config.HTTPTimeout
(handle string/duration parsing and return error on parse failure), inspect
maestroConfig.Auth.Type and if "token" set config.TokenFile (and avoid assuming
TLS), otherwise wire TLS fields (ClientCertFile, ClientKeyFile, CAFile) and set
config.HTTPCAFile when provided; also copy maestroConfig.RetryAttempts into
config.RetryAttempts and map keepalive settings (e.g., maestroConfig.Keepalive
-> config.Keepalive/KeepaliveTime/KeepaliveTimeout as applicable). Ensure
createMaestroClient returns parse errors when timeout parsing fails and uses the
unique symbols createMaestroClient, maestroConfig, maestroConfig.Auth.Type,
maestroConfig.Timeout, maestroConfig.TokenFile, maestroConfig.Auth.TLSConfig,
maestroConfig.HTTPCAFile, maestroConfig.RetryAttempts, and
maestroConfig.Keepalive to locate and update the wiring.

In `@internal/executor/resource_executor.go`:
- Around line 146-163: The conversion to workv1.ManifestWork incorrectly assumes
resourceManifest always contains a ManifestWork; update the Maestro transport
handling in build/execute code (where resource.IsMaestroTransport() is checked
and runtime.DefaultUnstructuredConverter.FromUnstructured is called) to first
check if resource.Transport.Maestro.ManifestWork is provided and, if so, render
and use that rendered ManifestWork payload (renderTemplate on
resource.Transport.Maestro.ManifestWork) to construct mw; otherwise, only
attempt FromUnstructured on resourceManifest after verifying it is a
ManifestWork-like object (or return a clear NewExecutorError indicating the
wrong manifest type). This ensures you use
resource.Transport.Maestro.ManifestWork when present instead of blindly
converting resourceManifest (and keeps error handling via
result.Status/result.Error and NewExecutorError).
🧹 Nitpick comments (1)
charts/examples/maestro/adapter-config.yaml (1)

70-70: Add trailing newline.

The file is missing a trailing newline at the end. Most style guides and linters expect files to end with a newline character.

--namespace <namespace> \
--set image.registry=quay.io/<developer-registry> \
--set broker.googlepubsub.projectId=<gcp-project> \
--set broker.googlepubsub.subscriptionId=<gcp-subscription? \

⚠️ Potential issue | 🟡 Minor

Typo in helm install example.

The --set broker.googlepubsub.subscriptionId=<gcp-subscription? has a typo: missing closing > and has an extraneous ?.

📝 Suggested fix
-  --set broker.googlepubsub.subscriptionId=<gcp-subscription? \
+  --set broker.googlepubsub.subscriptionId=<gcp-subscription> \

Comment on lines +72 to +100
  - name: "agentNamespaceManifestWork"
    transport:
      client: "maestro"
      maestro:
        targetCluster: "{{ .placementClusterName }}"
        # manifestWork supports both inline configuration and and ref approaches
        # ref is suggested as the file method is more readable and maintainable.
        manifestWork:
          ref: "/etc/adapter/manifestwork.yaml"
        discovery:
          # The "namespace" field within discovery is optional:
          # - For namespaced resources: set namespace to target the specific namespace
          # - For cluster-scoped resources (like Namespace, ClusterRole): omit or leave empty
          # Here we omit it since Namespace is cluster-scoped
          bySelectors:
            labelSelector:
              hyperfleet.io/resource-type: "namespace"
              hyperfleet.io/cluster-id: "{{ .clusterId }}"
              hyperfleet.io/managed-by: "{{ .metadata.name }}"
    discovery:
      # The "namespace" field within discovery is optional:
      # - For namespaced resources: set namespace to target the specific namespace
      # - For cluster-scoped resources (like Namespace, ClusterRole): omit or leave empty
      # Here we omit it since Namespace is cluster-scoped
      bySelectors:
        labelSelector:
          hyperfleet.io/resource-type: "namespace"
          hyperfleet.io/cluster-id: "{{ .clusterId }}"
          hyperfleet.io/managed-by: "{{ .metadata.name }}"

⚠️ Potential issue | 🟡 Minor

Remove unsupported transport.maestro.discovery and align discovery to the ManifestWork.

transport.maestro doesn’t define discovery, so this block is ignored; only the top-level discovery is used. Also, the current labelSelector matches Namespace labels while the manifest is a ManifestWork, so discovery won’t find existing ManifestWorks. Consider removing the nested block and switching discovery to byName (or matching the ManifestWork labels from the template).

Suggested update (discover ManifestWork by name)
     transport:
       client: "maestro"
       maestro:
         targetCluster: "{{ .placementClusterName }}"
         manifestWork:
           ref: "/etc/adapter/manifestwork.yaml"
-        discovery:
-          bySelectors:
-            labelSelector:
-              hyperfleet.io/resource-type: "namespace"
-              hyperfleet.io/cluster-id: "{{ .clusterId }}"
-              hyperfleet.io/managed-by: "{{ .metadata.name }}"
-    discovery:
-      bySelectors:
-        labelSelector:
-          hyperfleet.io/resource-type: "namespace"
-          hyperfleet.io/cluster-id: "{{ .clusterId }}"
-          hyperfleet.io/managed-by: "{{ .metadata.name }}"
+    discovery:
+      byName: "hyperfleet-cluster-setup-{{ .clusterId }}"

Comment on lines +115 to +170
  # Applied: Resources successfully created
  - type: "Applied"
    status:
      expression: |
        resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") == "Active" ? "True" : "False"
    reason:
      expression: |
        resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") == "Active"
          ? "NamespaceCreated"
          : "NamespacePending"
    message:
      expression: |
        resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") == "Active"
          ? "Namespace created successfully"
          : "Namespace creation in progress"

  # Available: Resources are active and ready
  - type: "Available"
    status:
      expression: |
        resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") == "Active" ? "True" : "False"
    reason:
      expression: |
        resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") == "Active" ? "NamespaceReady" : "NamespaceNotReady"
    message:
      expression: |
        resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") == "Active" ? "Namespace is active and ready" : "Namespace is not active and ready"

  # Health: Adapter execution status (runtime) Don't need to update this. This can be reused from the adapter config.
  - type: "Health"
    status:
      expression: |
        adapter.?executionStatus.orValue("") == "success" ? "True" : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
    reason:
      expression: |
        adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy"
    message:
      expression: |
        adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations completed successfully"

# Use CEL expression for numeric fields to preserve type (not Go template which outputs strings)
observed_generation:
  expression: "generationId"

# Use Go template with now and date functions for timestamps
observed_time: "{{ now | date \"2006-01-02T15:04:05Z07:00\" }}"

# Optional data field for adapter-specific metrics extracted from resources
data:
  namespace:
    name:
      expression: |
        resources.?clusterNamespace.?metadata.?name.orValue("")
    status:
      expression: |
        resources.?clusterNamespace.?status.?phase.orValue("")

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's check the file structure and total line count
wc -l charts/examples/maestro/adapter-task-config.yaml

# Then read the entire file to see all resource definitions
cat -n charts/examples/maestro/adapter-task-config.yaml

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 8656


🏁 Script executed:

# Search for other adapter config examples
fd -e yaml -e yml | grep -i adapter | head -20

# Look for manifestwork examples
fd -e yaml -e yml | xargs rg -l "manifestwork\|manifestWork" | head -10

# Check if there's documentation about resources structure
fd -e md | xargs rg -l "resources\." | head -10

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 1238


🏁 Script executed:

# Check the manifestwork.yaml resource definition
cat -n charts/examples/maestro/adapter-task-resource-manifestwork.yaml

# Check the namespace.yaml resource definition
cat -n charts/examples/maestro/adapter-task-resource-namespace.yaml

# Check the maestro adapter config
cat -n charts/examples/maestro/adapter-config.yaml

# Check the executor README
cat -n internal/executor/README.md

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 30486


🏁 Script executed:

# Search for test data showing resource structure after execution
find test -name "*.yaml" -o -name "*.go" | head -20

# Look for how resources are populated in executor
rg -A 10 "resources\[" internal/executor/ | head -50

# Check test data for execution results
cat test/integration/executor/testdata/test-adapter-config.yaml 2>/dev/null || echo "File not found"

# Search for how discoveryResult or resources are structured
rg "discoveryResult\|resources.*=" internal/executor/ -A 3 | head -80

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 4224


Correct the CEL resource paths to match actual resource definitions.

Only agentNamespaceManifestWork is defined as a resource entry; the conditions and data fields reference nested paths (resources.?agentNamespaceManifestWork.agentNamespace and resources.?clusterNamespace) that don't exist. These expressions will always resolve empty and conditions/data won't update. Either update the paths to reference the actual ManifestWork resource (resources.?agentNamespaceManifestWork), or define separate resource entries for the Namespace and ConfigMap objects if direct access to their status is needed.


Comment on lines +75 to +97
    - apiVersion: v1
      kind: Namespace
      metadata:
        name: "{{ .clusterId | lower }}"
        labels:
          hyperfleet.io/cluster-id: "{{ .clusterId }}"
          hyperfleet.io/managed-by: "{{ .metadata.name }}"
          hyperfleet.io/resource-type: "namespace"
        annotations:
          hyperfleet.io/created-by: "hyperfleet-adapter"
          hyperfleet.io/generation: "{{ .generationId }}"
    - apiVersion: v1
      kind: ConfigMap
      metadata:
        name: "cluster-config"
        namespace: "{{ .clusterId }}"
        labels:
          hyperfleet.io/cluster-id: "{{ .clusterId }}"
        annotations:
          hyperfleet.io/generation: "{{ .generationId }}"
      data:
        cluster_id: "{{ .clusterId }}"
        cluster_name: "{{ .clusterName }}"

⚠️ Potential issue | 🟡 Minor

Normalize ConfigMap namespace to match the Namespace name.

The Namespace uses {{ .clusterId | lower }} but the ConfigMap targets {{ .clusterId }}. If clusterId contains uppercase characters, the ConfigMap will target a different/invalid namespace. Use the same normalization.

Suggested fix
-        namespace: "{{ .clusterId }}"
+        namespace: "{{ .clusterId | lower }}"

    hyperfleet.io/cluster-id: "{{ .clusterId }}"
    hyperfleet.io/cluster-name: "{{ .clusterName }}"
  annotations:
    hyperfleet.io/generation: "{{ .generationId }}"

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check what generation variable names are used across example configs
echo "=== Generation variable usage in manifests ==="
rg -n 'generation' charts/examples/ --type yaml -C1

echo ""
echo "=== Listing all yaml files in charts/examples ==="
find charts/examples/ -type f -name "*.yaml" | sort

echo ""
echo "=== Check task config files for generation field definitions ==="
find charts/examples/ -type f -name "*task-config.yaml" -o -name "*task*.yaml" | xargs grep -l 'generation' 2>/dev/null

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 8802


Inconsistent template variable naming across examples.

Maestro examples use {{ .generationId }} while Kubernetes examples use {{ .generationSpec }} for the generation annotation, despite both capturing from the same source (event.generation). This inconsistency could confuse users referencing both examples.

Consider aligning the variable naming across both example sets.


Comment on lines +141 to +148
```bash
helm install <name> ./charts -f charts/examples/values.yaml \
  --namespace <namespace> \
  --set image.registry=quay.io/<developer-registry> \
  --set broker.googlepubsub.projectId=<gcp-project> \
  --set broker.googlepubsub.subscriptionId=<gcp-subscription? \
  --set broker.googlepubsub.topic=<gcp-topic> \
  --set broker.googlepubsub.deadLetterTopic=<gcp-dlq-topic>
```

⚠️ Potential issue | 🟡 Minor

Fix example command path and placeholder typos.

The example should reference the maestro values file and remove the stray ? in the placeholder.

Suggested fix
```diff
-helm install <name> ./charts -f charts/examples/values.yaml \
+helm install <name> ./charts -f charts/examples/maestro/values.yaml \
   --namespace <namespace> \
   --set image.registry=quay.io/<developer-registry> \
   --set broker.googlepubsub.projectId=<gcp-project> \
-  --set broker.googlepubsub.subscriptionId=<gcp-subscription? \
+  --set broker.googlepubsub.subscriptionId=<gcp-subscription> \
   --set broker.googlepubsub.topic=<gcp-topic> \
   --set broker.googlepubsub.deadLetterTopic=<gcp-dlq-topic>
```
🤖 Prompt for AI Agents
In `@charts/examples/maestro/README.md` around lines 141 - 148, Update the example
helm install command: change the values file reference to
charts/examples/maestro/values.yaml (use -f charts/examples/maestro/values.yaml)
and remove the stray question mark from the broker.googlepubsub.subscriptionId
placeholder so it reads --set
broker.googlepubsub.subscriptionId=<gcp-subscription>; keep the rest of the
--set placeholders as-is and ensure the command still uses helm install <name>
./charts with the image.registry and Google Pub/Sub settings.

Comment on lines +280 to +305
```go
// Create transport client - only one transport is supported per adapter instance
execBuilder := executor.NewBuilder().
    WithConfig(config).
    WithAPIClient(apiClient).
    WithLogger(log)

if config.Spec.Clients.Maestro != nil {
    log.Info(ctx, "Creating Maestro transport client...")
    maestroClient, err := createMaestroClient(ctx, config.Spec.Clients.Maestro, log)
    if err != nil {
        errCtx := logger.WithErrorField(ctx, err)
        log.Errorf(errCtx, "Failed to create Maestro client")
        return fmt.Errorf("failed to create Maestro client: %w", err)
    }
    execBuilder = execBuilder.WithTransportClient(maestroClient)
    log.Info(ctx, "Maestro transport client created successfully")
} else {
    log.Info(ctx, "Creating Kubernetes transport client...")
    k8sClient, err := createK8sClient(ctx, config.Spec.Clients.Kubernetes, log)
    if err != nil {
        errCtx := logger.WithErrorField(ctx, err)
        log.Errorf(errCtx, "Failed to create Kubernetes client")
        return fmt.Errorf("failed to create Kubernetes client: %w", err)
    }
    execBuilder = execBuilder.WithTransportClient(k8sClient)
    log.Info(ctx, "Kubernetes transport client created successfully")
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Verify validation rules prevent mixed transport clients
rg -n "transport" internal/config_loader/validator.go internal/config_loader/validator_test.go 2>/dev/null || echo "Files not found in specified paths"

# Search more broadly for validator and transport references
fd -t f "validator*.go" | head -20

# Search for transport-related configurations
rg -n "GetTransportClient|IsMaestroTransport|TransportClientMaestro|transport.*config|spec\.clients" --type go | head -40
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 7504


🏁 Script executed:

```bash
#!/bin/bash
# Check how executor uses the transport clients
rg -n "execBuilder\|WithTransportClient\|GetTransportClient" cmd/adapter/main.go

# Examine resource_executor.go to see if it respects per-resource transport
sed -n '100,160p' internal/executor/resource_executor.go
sed -n '170,250p' internal/executor/resource_executor.go

# Check executor interface
fd -t f "executor.go" | grep -v test
rg -n "type.*Executor|func.*Execute|WithTransportClient" internal/executor/ | head -30
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 8566


🏁 Script executed:

```bash
#!/bin/bash
# Check if per-resource transport configs are actually loaded/stored
rg -n "resource\.Transport\|Transport.*Client" internal/config_loader/loader.go | head -20

# Check the config structs to see if Resource has Transport field
fd -t f "types.go" | xargs grep -l "type Resource"
rg -n "type Resource struct" internal/config_loader/

# Check executor.go to see how WithTransportClient works
sed -n '330,350p' internal/executor/executor.go
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 840


🏁 Script executed:

```bash
#!/bin/bash
# Get the full Resource struct definition
sed -n '313,400p' internal/config_loader/types.go

# Also check if there's a Transport field in Resource
rg -A 30 "type Resource struct" internal/config_loader/types.go | head -50
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 4251


🏁 Script executed:

```bash
#!/bin/bash
# Check TransportConfig struct definition to understand per-resource transport
rg -A 20 "type TransportConfig struct" internal/config_loader/types.go

# Check how GetTransportClient() works - does it look at resource.Transport?
rg -A 10 "func.*GetTransportClient" internal/config_loader/accessors.go

# Check if executor should be using per-resource transport
rg -n "resource\.Transport\." internal/executor/resource_executor.go | head -10
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 1941


Fix transport client selection to respect per-resource configuration.

The code validates per-resource transport configurations (resource.Transport.Client), but ResourceExecutor ignores them and routes all resources through a single global transport client set in main.go (line 280–305). If a resource specifies a different transport client than the adapter-level setting, that configuration is silently ignored.

For example, if the adapter is configured with Maestro globally but a resource specifies transport.client: kubernetes, the resource will still be routed through the Maestro client with Kubernetes manifest structure, causing execution failures.

Either:

  1. Enforce consistent transport clients across all resources during validation, or
  2. Modify ResourceExecutor to select the transport client per resource based on resource.GetTransportClient() and maintain separate client instances.
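The second option above can be sketched as a small selector: the adapter instantiates every configured transport client once, and the executor picks one per resource. `TransportClient`, `selectClient`, and the string keys here are illustrative stand-ins, not the repository's actual types:

```go
package main

import "fmt"

// TransportClient is a stand-in for the adapter's transport interface;
// the real interface lives in the repository's transport_client package.
type TransportClient interface {
	Name() string
}

type maestroClient struct{}

func (maestroClient) Name() string { return "maestro" }

type k8sClient struct{}

func (k8sClient) Name() string { return "kubernetes" }

// selectClient picks the transport client declared by a resource
// (transport.client in the config), falling back to the adapter-level
// default when the resource does not specify one, and failing loudly
// instead of silently routing through the wrong client.
func selectClient(clients map[string]TransportClient, resourceTransport, defaultTransport string) (TransportClient, error) {
	name := resourceTransport
	if name == "" {
		name = defaultTransport
	}
	c, ok := clients[name]
	if !ok {
		return nil, fmt.Errorf("no transport client configured for %q", name)
	}
	return c, nil
}

func main() {
	clients := map[string]TransportClient{
		"maestro":    maestroClient{},
		"kubernetes": k8sClient{},
	}
	c, _ := selectClient(clients, "kubernetes", "maestro")
	fmt.Println(c.Name()) // prints "kubernetes"
}
```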
🤖 Prompt for AI Agents
In `@cmd/adapter/main.go` around lines 280 - 305, The ResourceExecutor is
currently forced to use a single global transport client set via
execBuilder.WithTransportClient(...) in main (created by createMaestroClient /
createK8sClient), which ignores per-resource transport overrides from
resource.GetTransportClient(); either enforce a validation rule that all
resources must match the adapter-level client, or update the executor to pick
clients per resource: instantiate and retain both transport clients (call
createMaestroClient and createK8sClient when their configs are present), pass a
transport-client map or selector into ResourceExecutor (or change execBuilder to
accept a client factory), and update ResourceExecutor to call
resource.GetTransportClient() to select the correct client instance when
executing each resource.

Comment on lines +513 to +527
```go
// createMaestroClient creates a Maestro client from the config
func createMaestroClient(ctx context.Context, maestroConfig *config_loader.MaestroClientConfig, log logger.Logger) (*maestro_client.Client, error) {
    config := &maestro_client.Config{
        MaestroServerAddr: maestroConfig.HTTPServerAddress,
        GRPCServerAddr:    maestroConfig.GRPCServerAddress,
        SourceID:          maestroConfig.SourceID,
        Insecure:          maestroConfig.Insecure,
    }

    // Set TLS config if present
    if maestroConfig.Auth.TLSConfig != nil {
        config.CAFile = maestroConfig.Auth.TLSConfig.CAFile
        config.ClientCertFile = maestroConfig.Auth.TLSConfig.CertFile
        config.ClientKeyFile = maestroConfig.Auth.TLSConfig.KeyFile
    }
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Find MaestroClientConfig type definition
rg -n "type MaestroClientConfig" -A 30
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 2707


🏁 Script executed:

```bash
#!/bin/bash
# Find maestro_client.Config type definition
rg -n "type Config struct" -A 25 | grep -A 25 "maestro_client"
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 3686


🏁 Script executed:

```bash
#!/bin/bash
# Get full maestro_client.Config definition
rg -n "type Config struct" -A 50 internal/maestro_client/client.go
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 2426


🏁 Script executed:

```bash
#!/bin/bash
# Check if Timeout is parsed anywhere or how Auth.Type is used
rg -n "Timeout|Auth\.Type" cmd/adapter/main.go
rg -n "Auth\.Type" internal/ --type go
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 972


🏁 Script executed:

```bash
#!/bin/bash
# Check the complete createMaestroClient function to see all usages
rg -n "func createMaestroClient" -A 30 cmd/adapter/main.go
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 852


Wire remaining MaestroClientConfig fields to maestro_client.Config.

The function maps only a subset of available config: it sets MaestroServerAddr, GRPCServerAddr, SourceID, Insecure, and TLS cert paths, but ignores Timeout (should parse and set HTTPTimeout), Auth.Type (to distinguish TLS vs token auth), TokenFile, HTTPCAFile, RetryAttempts, and Keepalive settings. Additionally, Auth.Type should be checked—if set to "token", TokenFile should be used instead of assuming TLS is always the auth method.

🤖 Prompt for AI Agents
In `@cmd/adapter/main.go` around lines 513 - 527, The createMaestroClient function
currently only wires a few fields; update it to populate all remaining
MaestroClientConfig fields into maestro_client.Config: parse
maestroConfig.Timeout into config.HTTPTimeout (handle string/duration parsing
and return error on parse failure), inspect maestroConfig.Auth.Type and if
"token" set config.TokenFile (and avoid assuming TLS), otherwise wire TLS fields
(ClientCertFile, ClientKeyFile, CAFile) and set config.HTTPCAFile when provided;
also copy maestroConfig.RetryAttempts into config.RetryAttempts and map
keepalive settings (e.g., maestroConfig.Keepalive ->
config.Keepalive/KeepaliveTime/KeepaliveTimeout as applicable). Ensure
createMaestroClient returns parse errors when timeout parsing fails and uses the
unique symbols createMaestroClient, maestroConfig, maestroConfig.Auth.Type,
maestroConfig.Timeout, maestroConfig.TokenFile, maestroConfig.Auth.TLSConfig,
maestroConfig.HTTPCAFile, maestroConfig.RetryAttempts, and
maestroConfig.Keepalive to locate and update the wiring.

Comment on lines +146 to +163
```go
// Build transport context for maestro transport
var transportTarget transport_client.TransportContext
if resource.IsMaestroTransport() && resource.Transport.Maestro != nil {
    // Render targetCluster template
    targetCluster, tplErr := renderTemplate(resource.Transport.Maestro.TargetCluster, execCtx.Params)
    if tplErr != nil {
        result.Status = StatusFailed
        result.Error = tplErr
        return result, NewExecutorError(PhaseResources, resource.Name, "failed to render targetCluster template", tplErr)
    }

    // Convert rendered manifest to *workv1.ManifestWork for the maestro transport context
    mw := &workv1.ManifestWork{}
    if err := runtime.DefaultUnstructuredConverter.FromUnstructured(resourceManifest.Object, mw); err != nil {
        result.Status = StatusFailed
        result.Error = err
        return result, NewExecutorError(PhaseResources, resource.Name, "failed to convert manifest to ManifestWork", err)
    }
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
find . -name "resource_executor.go" -type f
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 123


🏁 Script executed:

```bash
cat -n ./internal/executor/resource_executor.go | head -200
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 9272


🏁 Script executed:

```bash
cat -n ./internal/executor/resource_executor.go | sed -n '200,300p'
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 4353


🏁 Script executed:

```bash
find . -name "*.go" -path "*/config_loader/*" -type f | head -5
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 278


🏁 Script executed:

```bash
grep -n "type Resource struct" ./internal/config_loader/*.go
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 143


🏁 Script executed:

```bash
cat -n ./internal/config_loader/types.go | sed -n '313,380p'
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 2860


🏁 Script executed:

```bash
grep -n "type TransportConfig struct\|type MaestroConfig struct" ./internal/config_loader/types.go
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 116


🏁 Script executed:

```bash
cat -n ./internal/config_loader/types.go | sed -n '297,312p'
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 998


🏁 Script executed:

```bash
grep -n "IsMaestroTransport\|isResourceMaestro" ./internal/config_loader/*.go
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 602


🏁 Script executed:

```bash
cat -n ./internal/config_loader/accessors.go | sed -n '183,195p'
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 634


🏁 Script executed:

```bash
cat -n ./internal/config_loader/validator.go | sed -n '560,580p'
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 924


🏁 Script executed:

```bash
grep -n "Maestro.*Manifest\|manifest.*Maestro" ./internal/config_loader/validator.go | head -20
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 336


🏁 Script executed:

```bash
cat -n ./internal/config_loader/validator.go | sed -n '315,335p'
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 973


🏁 Script executed:

```bash
grep -rn "Maestro" ./internal/executor/ --include="*.go" | grep -i test
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 65


🏁 Script executed:

```bash
find . -name "*executor*test*.go" -o -name "*test*executor*.go" | head -5
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 332


🏁 Script executed:

```bash
grep -n "Maestro\|maestro" ./internal/executor/resource_executor_test.go
```

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 65


Fix ManifestWork conversion for Maestro resources with explicit manifests.

The code at line 159 unconditionally converts resourceManifest to workv1.ManifestWork. However, when a Maestro resource is configured with resource.Manifest set, buildManifest returns a normal Kubernetes manifest (from resource.Manifest) rather than a ManifestWork structure. The FromUnstructured conversion will fail in this case.

The validator only requires that at least one of resource.Manifest and resource.Transport.Maestro.ManifestWork is set (lines 322-324); there is no constraint preventing a user from providing both.

Either enforce that manifest must be nil for Maestro resources in validation, or use resource.Transport.Maestro.ManifestWork (rendered) to build the ManifestWork instead of attempting to convert the potentially non-ManifestWork manifest.
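If the second option is chosen, one way to guard the conversion is a cheap kind/apiVersion check on the unstructured object before calling `FromUnstructured`. This is a sketch operating on a plain `map[string]interface{}` (the shape of `unstructured.Unstructured.Object`), assuming the standard `work.open-cluster-management.io/v1` group/version for ManifestWork:

```go
package main

import "fmt"

// isManifestWork reports whether an unstructured object (as produced by
// the executor's manifest-building step) looks like a ManifestWork, so a
// typed conversion can fail fast with a clear error instead of a decode
// failure deep inside FromUnstructured.
func isManifestWork(obj map[string]interface{}) bool {
	kind, _ := obj["kind"].(string)
	apiVersion, _ := obj["apiVersion"].(string)
	return kind == "ManifestWork" && apiVersion == "work.open-cluster-management.io/v1"
}

func main() {
	ns := map[string]interface{}{"apiVersion": "v1", "kind": "Namespace"}
	mw := map[string]interface{}{
		"apiVersion": "work.open-cluster-management.io/v1",
		"kind":       "ManifestWork",
	}
	fmt.Println(isManifestWork(ns), isManifestWork(mw)) // prints "false true"
}
```

In the executor, a `false` result would surface as a `NewExecutorError` naming the offending resource rather than a conversion failure.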

🤖 Prompt for AI Agents
In `@internal/executor/resource_executor.go` around lines 146 - 163, The
conversion to workv1.ManifestWork incorrectly assumes resourceManifest always
contains a ManifestWork; update the Maestro transport handling in build/execute
code (where resource.IsMaestroTransport() is checked and
runtime.DefaultUnstructuredConverter.FromUnstructured is called) to first check
if resource.Transport.Maestro.ManifestWork is provided and, if so, render and
use that rendered ManifestWork payload (renderTemplate on
resource.Transport.Maestro.ManifestWork) to construct mw; otherwise, only
attempt FromUnstructured on resourceManifest after verifying it is a
ManifestWork-like object (or return a clear NewExecutorError indicating the
wrong manifest type). This ensures you use
resource.Transport.Maestro.ManifestWork when present instead of blindly
converting resourceManifest (and keeps error handling via
result.Status/result.Error and NewExecutorError).

@xueli181114
Contributor

/lgtm

@openshift-ci

openshift-ci bot commented Feb 12, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: xueli181114

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-bot openshift-merge-bot bot merged commit 0551a01 into openshift-hyperfleet:main Feb 12, 2026
7 checks passed