
feat: restructed adapter config file #46

Merged
openshift-merge-bot[bot] merged 1 commit into openshift-hyperfleet:main from xueli181114:enhance-config-loader
Feb 12, 2026
Conversation

@xueli181114
Contributor

@xueli181114 xueli181114 commented Feb 12, 2026

Summary by CodeRabbit

  • New Features

    • Added example adapter configs and task/manifest templates for Maestro Kubernetes, including detailed comments and TLS/auth placeholders.
    • Added nested discovery to locate and extract sub-resources from applied manifests and improved post-apply status reporting.
  • Refactoring

    • Unified manifest-based apply/render flow for all transports, simplifying resource application and discovery.
    • Moved manifest handling to a top-level resource model and streamlined validation and execution context.

@coderabbitai

coderabbitai bot commented Feb 12, 2026

Walkthrough

This PR moves manifest handling from a per-transport/batched ManifestWork model to a single-manifest render-and-apply flow. Resource.Manifest is promoted to a top-level field and MaestroTransportConfig.ManifestWork is removed. NestedDiscoveries are added to support sub-resource discovery inside applied manifests. The executor now renders manifests to JSON bytes and calls a unified TransportClient ApplyResource, while k8s and maestro clients parse those bytes (to unstructured or ManifestWork) and apply via new ApplyManifest/ApplyResource paths. Config loader, validator, types, and tests updated to reflect the manifest/nested-discovery model and API signature changes. Several example chart YAMLs demonstrating the new manifest and nested discovery usage are added.

Sequence Diagram(s)

sequenceDiagram
    participant Executor
    participant ResourceExecutor
    participant TransportClient
    participant K8sClient
    participant MaestroClient

    Executor->>ResourceExecutor: Start resource processing (template + params)
    ResourceExecutor->>ResourceExecutor: renderToBytes (render templates → JSON bytes)
    ResourceExecutor->>TransportClient: ApplyResource(ctx, manifestBytes, opts, target)
    alt Kubernetes target
        TransportClient->>K8sClient: ApplyResource(manifestBytes, opts, target)
        K8sClient->>K8sClient: parseToUnstructured(manifestBytes)
        K8sClient->>K8sClient: ApplyManifest(newManifest, existing, opts)
        K8sClient-->>TransportClient: ApplyResult(operation, reason)
    else Maestro target
        TransportClient->>MaestroClient: ApplyResource(manifestBytes, opts, target)
        MaestroClient->>MaestroClient: parseManifestWork(manifestBytes)
        MaestroClient->>MaestroClient: ApplyManifestWork(parsedWork, consumer)
        MaestroClient-->>TransportClient: ApplyResult(operation, reason)
    end
    TransportClient-->>ResourceExecutor: ApplyResult
    ResourceExecutor->>ResourceExecutor: post-apply discovery (including NestedDiscoveries)
    ResourceExecutor->>Executor: store results in ExecutionContext (resources/nested)

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

Suggested labels

lgtm, approved

Suggested reviewers

  • Mischulee
  • yingzhanredhat
🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (2 warnings)
Check name | Status | Explanation | Resolution
Title check | ⚠️ Warning | The PR title 'feat: restructed adapter config file' is vague and contains a typo ('restructed' should be 'restructured'). While it mentions 'adapter config', the actual changes involve significant architectural refactoring including manifest handling, resource discovery, transport client interfaces, and executor logic - far beyond just config file changes. | Revise the title to accurately reflect the main architectural changes, such as 'feat: refactor manifest handling and transport client abstraction' or 'feat: migrate manifest config from nested transport to top-level resource'. Address the spelling error.
Docstring Coverage | ⚠️ Warning | Docstring coverage is 60.61%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (1 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
internal/config_loader/validator.go (1)

381-390: ⚠️ Potential issue | 🟠 Major

Manifest template validation is still running for maestro.
The note says maestro manifests skip template-variable validation, but the code still validates resource.Manifest, which will reject runtime-only vars (e.g., adapterName/timestamp). Guard this by transport to match the intent.

💡 Suggested fix
-		if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
-			v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
-		}
-		// NOTE: For maestro transport, we skip template variable validation for manifest content.
+		// NOTE: For maestro transport, we skip template variable validation for manifest content.
+		if !resource.IsMaestroTransport() {
+			if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
+				v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
+			}
+		}
internal/executor/resource_executor.go (1)

139-173: ⚠️ Potential issue | 🟠 Major

GVK resolution can be empty for YAML-typed manifests.
resolveGVK only handles map[string]interface{}. If manifests are unmarshaled as map[interface{}]interface{} (common for YAML), resolveGVK returns an empty GVK, causing GetResource/DiscoverResources to fail. Normalize the manifest the same way renderToBytes does (or derive GVK from the rendered manifest).

🔧 Suggested fix
 func (re *ResourceExecutor) resolveGVK(resource config_loader.Resource) schema.GroupVersionKind {
-	manifestData, ok := resource.Manifest.(map[string]interface{})
-	if !ok {
-		return schema.GroupVersionKind{}
-	}
+	var manifestData map[string]interface{}
+	switch m := resource.Manifest.(type) {
+	case map[string]interface{}:
+		manifestData = m
+	case map[interface{}]interface{}:
+		manifestData = convertToStringKeyMap(m)
+	default:
+		return schema.GroupVersionKind{}
+	}
 	apiVersion, ok1 := manifestData["apiVersion"].(string)
 	kind, ok2 := manifestData["kind"].(string)
 	if !ok1 || !ok2 {
 		return schema.GroupVersionKind{}
 	}

Also applies to: 191-229, 243-259

internal/maestro_client/client.go (1)

590-598: ⚠️ Potential issue | 🟡 Minor

The determineOperation logic is unreliable and may misreport the actual operation.

ApplyManifestWork determines the operation using CompareGenerations() and executes the correct action (Create, Patch, or Skip), but only returns the resulting ManifestWork object. The determineOperation function then tries to infer which operation occurred by checking if ResourceVersion is empty—but this fails because both Create and Patch operations result in a work with ResourceVersion set, making them indistinguishable.

The operation decision computed internally in ApplyManifestWork should be returned and used directly instead of inferring it from the work's state. Either modify ApplyManifestWork to return both the work and the operation, or pass the operation decision to determineOperation so it doesn't need to infer it.

🤖 Fix all issues with AI agents
In `@charts/examples/maestro-kubernetes/adapter-task-config.yaml`:
- Around line 88-105: The discovery names and nested discovery identifiers must
match the ManifestWork template and CEL paths: change discovery.byName from "{{
.clusterId }}" to "hyperfleet-cluster-setup-{{ .clusterId }}"; change the nested
discovery named "mgmtConfigMap" to use byName: "cluster-config" (instead of "{{
.clusterId }}"); rename or add the nested discovery for the namespace so its
name matches the CEL/ManifestWork usage (e.g., use "clusterNamespace" or
"agentNamespace" instead of "mgmtNamespace") so CEL paths refer to an actual
nestedDiscoveries entry; update any other occurrences (lines ~133-155, ~178-184)
to the same names to keep discovery.byName, nestedDiscoveries.*.name, and
nestedDiscoveries.*.discovery.byName/bySelectors consistent with the
ManifestWork resource names and CEL variables.

In `@charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml`:
- Around line 75-91: The ConfigMap's metadata.namespace uses "{{ .clusterId }}"
while the Namespace resource uses "{{ .clusterId | lower }}", causing a
mismatch; update the ConfigMap manifest so metadata.name "cluster-config" and
metadata.namespace use the same normalized value as the Namespace (apply the
same "| lower" filter to ".clusterId" in the ConfigMap's metadata.namespace),
and verify any other manifests (e.g. resources referencing metadata.namespace)
also use the same normalized ".clusterId | lower" symbol to avoid landing
resources in a non-existent namespace.

In `@internal/executor/resource_executor.go`:
- Around line 123-131: The discovered parent resource is stored into
execCtx.Resources but its NestedDiscoveries are not, so CEL lookups like
resources.<parent>.<nested> remain empty; after re.discoverResource returns a
non-nil discovered object, iterate its NestedDiscoveries (or use
re.discoverResource recursively for each nested entry), attach each nested
discovery onto the parent discovered object (or populate execCtx.Resources with
keys for parent and parent.<nested>), then save the augmented parent into
execCtx.Resources[resource.Name]; update logic around resource.Discovery,
discovered, and execCtx.Resources to ensure nested discoveries are populated
before storing.
🧹 Nitpick comments (4)
charts/examples/maestro/adapter-task-config.yaml (2)

77-91: Clean up duplicate and incomplete comments.

Several comment issues in this section:

  1. Lines 80-81: Duplicate sentence "It is created by the adapter and can be used to create resources on the cluster."
  2. Line 88: Incomplete sentence "When try to dig Manifest" - appears truncated.
📝 Suggested comment cleanup
-      # ManifestWork is a kind of manifest that can be used to create resources on the cluster.
-      # It is a collection of resources that are created together.
-      # It is created by the adapter and can be used to create resources on the cluster.
-      # It is created by the adapter and can be used to create resources on the cluster.
+      # ManifestWork is a manifest that bundles multiple resources to be created together on the cluster.
       manifest:
         ref: "/etc/adapter/manifestwork.yaml"
       
       # Discover the manifestWork by name
-      # This is used to get the manifestWork object from the cluster.
-      # With this approach, the manifestWork object can be used to get the resources inside the manifestWork.
-      # When try to dig  Manifest
+      # Used to retrieve the manifestWork object and access its contained resources.
       discovery:
         byName: "{{ .clusterId }}"

92-104: Clarify the nestedDiscoveries comment.

The comment on lines 93-94 is unclear: "This approach can be used to use the discovery name to parameter level". Consider rewording to explain the purpose more clearly.

📝 Suggested clarification
       # Discover sub-resources within the manifestWork
-      # This approach can be used to use the discovery name to parameter level
-      # This can support jsonPath to dig into the resource status. like mgmtNamespace.status.conditions[?(@.type=="Ready")].status
+      # Each nested discovery can be referenced by name in post-processing expressions.
+      # Supports jsonPath to access resource status, e.g., mgmtNamespace.status.conditions[?(@.type=="Ready")].status
       nestedDiscoveries:
internal/k8s_client/apply.go (1)

195-215: Consider validating the parsed object has required fields.

The parseToUnstructured function successfully parses JSON/YAML but doesn't validate that the result contains required Kubernetes fields (apiVersion, kind, metadata). A malformed manifest could pass parsing but fail later with unclear errors.

🔧 Optional: Add validation for required fields
 func parseToUnstructured(data []byte) (*unstructured.Unstructured, error) {
 	obj := &unstructured.Unstructured{}

 	// Try JSON first
 	if err := json.Unmarshal(data, &obj.Object); err == nil && obj.Object != nil {
+		if err := validateUnstructured(obj); err != nil {
+			return nil, err
+		}
 		return obj, nil
 	}

 	// Fall back to YAML → JSON → unstructured
 	jsonData, err := yaml.YAMLToJSON(data)
 	if err != nil {
 		return nil, fmt.Errorf("failed to convert YAML to JSON: %w", err)
 	}

 	if err := json.Unmarshal(jsonData, &obj.Object); err != nil {
 		return nil, fmt.Errorf("failed to parse manifest: %w", err)
 	}

+	if err := validateUnstructured(obj); err != nil {
+		return nil, err
+	}
+
 	return obj, nil
 }
+
+func validateUnstructured(obj *unstructured.Unstructured) error {
+	if obj.GetAPIVersion() == "" {
+		return fmt.Errorf("manifest missing required field: apiVersion")
+	}
+	if obj.GetKind() == "" {
+		return fmt.Errorf("manifest missing required field: kind")
+	}
+	return nil
+}
internal/maestro_client/client.go (1)

567-587: JSON success check may pass for partial/invalid ManifestWorks.

The check work.Name != "" after JSON unmarshal is a reasonable heuristic, but it could pass for a valid JSON object that isn't actually a ManifestWork (e.g., a ConfigMap with a name field in metadata). This is similar to the k8s_client parsing concern.

🔧 Optional: Add Kind/APIVersion validation
 func parseManifestWork(data []byte) (*workv1.ManifestWork, error) {
 	work := &workv1.ManifestWork{}

 	// Try JSON first
-	if err := json.Unmarshal(data, work); err == nil && work.Name != "" {
+	if err := json.Unmarshal(data, work); err == nil && isValidManifestWork(work) {
 		return work, nil
 	}

 	// Fall back to YAML
 	jsonData, err := yaml.YAMLToJSON(data)
 	if err != nil {
 		return nil, fmt.Errorf("failed to convert YAML to JSON: %w", err)
 	}

 	if err := json.Unmarshal(jsonData, work); err != nil {
 		return nil, fmt.Errorf("failed to parse ManifestWork: %w", err)
 	}

+	if !isValidManifestWork(work) {
+		return nil, fmt.Errorf("parsed object is not a valid ManifestWork")
+	}
+
 	return work, nil
 }
+
+func isValidManifestWork(work *workv1.ManifestWork) bool {
+	return work.Name != "" && work.Kind == "ManifestWork"
+}

Comment on lines +88 to +105

      # When try to dig Manifest
      discovery:
        byName: "{{ .clusterId }}"

      # Discover sub-resources within the manifestWork
      # This approach can be used to use the discovery name to parameter level
      # This can support jsonPath to dig into the resource status. like mgmtNamespace.status.conditions[?(@.type=="Ready")].status
      nestedDiscoveries:
        - name: "mgmtNamespace"
          discovery:
            bySelectors:
              labelSelector:
                hyperfleet.io/resource-type: "namespace"
                hyperfleet.io/cluster-id: "{{ .clusterId }}"
        - name: "mgmtConfigMap"
          discovery:
            byName: "{{ .clusterId }}"


⚠️ Potential issue | 🟠 Major

Align discovery names with the ManifestWork template and CEL paths.
There are several naming mismatches that will make discovery/CEL lookups fail:

  • discovery.byName uses {{ .clusterId }}, but the ManifestWork template name is hyperfleet-cluster-setup-{{ .clusterId }}.
  • mgmtConfigMap uses byName: "{{ .clusterId }}", but the template creates cluster-config.
  • CEL references agentNamespace and clusterNamespace, which aren’t declared (resource is namespace, nested is mgmtNamespace).
🛠️ Example alignment
   discovery:
-    byName: "{{ .clusterId }}"
+    byName: "hyperfleet-cluster-setup-{{ .clusterId }}"

   nestedDiscoveries:
-    - name: "mgmtNamespace"
+    - name: "agentNamespace"     # or update CEL to use mgmtNamespace
       discovery:
         bySelectors:
           labelSelector:
             hyperfleet.io/resource-type: "namespace"
             hyperfleet.io/cluster-id: "{{ .clusterId }}"
     - name: "mgmtConfigMap"
       discovery:
-        byName: "{{ .clusterId }}"
+        byName: "cluster-config"

   data:
     namespace:
       name:
         expression: |
-          resources.?clusterNamespace.?metadata.?name.orValue("")
+          resources.?namespace.?metadata.?name.orValue("")

Also applies to: 133-155, 178-184


Comment on lines +75 to +91

    - apiVersion: v1
      kind: Namespace
      metadata:
        name: "{{ .clusterId | lower }}"
        labels:
          hyperfleet.io/cluster-id: "{{ .clusterId }}"
          hyperfleet.io/managed-by: "{{ .metadata.name }}"
          hyperfleet.io/resource-type: "namespace"
        annotations:
          hyperfleet.io/created-by: "hyperfleet-adapter"
          hyperfleet.io/generation: "{{ .generationId }}"
    - apiVersion: v1
      kind: ConfigMap
      metadata:
        name: "cluster-config"
        namespace: "{{ .clusterId }}"
        labels:

⚠️ Potential issue | 🟠 Major

Namespace normalization mismatch for the ConfigMap.
The Namespace is created with {{ .clusterId | lower }}, but the ConfigMap targets {{ .clusterId }}. If clusterId contains uppercase/invalid chars, the ConfigMap will land in a non-existent namespace.

🔧 Suggested fix
     - apiVersion: v1
       kind: ConfigMap
       metadata:
         name: "cluster-config"
-        namespace: "{{ .clusterId }}"
+        namespace: "{{ .clusterId | lower }}"

@xueli181114 xueli181114 force-pushed the enhance-config-loader branch from ef7b16d to d7a0b36 on February 12, 2026 at 06:15
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/config_loader/validator.go (1)

381-390: ⚠️ Potential issue | 🟠 Major

Maestro manifest template validation still runs despite the skip note.
As written, manifest maps are always validated before the note, so inline ManifestWork templates with runtime-only variables will fail validation.

🛠️ Suggested guard to skip maestro manifest template validation
-		if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
-			v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
-		}
-		// NOTE: For maestro transport, we skip template variable validation for manifest content.
-		// ManifestWork templates may use variables provided at runtime by the framework
-		// (e.g., adapterName, timestamp) that are not necessarily declared in params or captures.
+		// NOTE: For maestro transport, we skip template variable validation for manifest content.
+		// ManifestWork templates may use variables provided at runtime by the framework
+		// (e.g., adapterName, timestamp) that are not necessarily declared in params or captures.
+		if !resource.IsMaestroTransport() {
+			if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
+				v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
+			}
+		}
🤖 Fix all issues with AI agents
In `@charts/examples/maestro/adapter-task-config.yaml`:
- Around line 94-103: The nested discovery name "mgmtNamespace" in
nestedDiscoveries is inconsistent with the post-processing paths (which still
reference resources.?agentNamespaceManifestWork.agentNamespace... and
resources.?agentNamespaceManifestWork.clusterNamespace...), causing those
lookups to be empty; update the post.conditions and post.data entries to
reference the discovery name mgmtNamespace (e.g., change any occurrences of
agentNamespace or clusterNamespace under resources.?agentNamespaceManifestWork
to mgmtNamespace) so post-processing uses
resources.?agentNamespaceManifestWork.mgmtNamespace for status and metadata
lookups, and ensure all references to nestedDiscoveries, post.conditions, and
post.data consistently use the same name.

In `@internal/k8s_client/apply.go`:
- Around line 23-51: In ApplyResource, after parseToUnstructured returns obj,
validate that obj.GroupVersionKind() has a non-empty Kind and non-empty
APIVersion (i.e. check gvk.Kind and gvk.Empty()/gvk.Version/Group as
appropriate) and that obj.GetName() is non-empty (you can allow empty namespace
via obj.GetNamespace()); if any are missing return a clear error like "manifest
missing apiVersion/kind/metadata.name" before calling GetResource or
ApplyManifest. Update the logic in Client.ApplyResource to perform these checks
(referencing parseToUnstructured, obj.GroupVersionKind(), obj.GetName(),
obj.GetNamespace(), GetResource, and ApplyManifest) and fail fast with a
descriptive error when fields are absent.

In `@internal/k8s_client/interface.go`:
- Around line 20-21: Update the comment above the embedded
transport_client.TransportClient to stop claiming ApplyResource([]byte) is the
only signature — note that ApplyResource on TransportClient now has a
different/extended signature and is not limited to a single []byte parameter;
update the text to accurately describe the exposed API (e.g., list ApplyResource
with its new signature or omit the parameter detail) along with GetResource and
DiscoverResources so the comment matches the current
transport_client.TransportClient methods.

In `@internal/maestro_client/client.go`:
- Around line 598-618: The parseManifestWork function currently validates
work.Name only on the JSON path; update parseManifestWork to enforce that the
parsed ManifestWork has a non-empty metadata.name in both paths by checking
work.Name after successful JSON unmarshal and again after the YAML->JSON
unmarshal/Unmarshal, returning a clear error (e.g., "manifestwork missing
metadata.name") if it's empty; reference the function parseManifestWork and the
work.Name field to locate where to add the post-parse validation and error
return so both JSON and YAML parsing behave consistently.

In `@internal/manifest/generation.go`:
- Around line 396-399: The docstring for DiscoverNestedManifest incorrectly
claims it returns "the manifest with the highest generation" but the function
signature only returns (*unstructured.UnstructuredList, error); update the
comment to remove the misleading return claim and either mention that callers
can use GetLatestGenerationFromList to pick the highest-generation item or add
guidance to use GetLatestGenerationFromList with the returned
*unstructured.UnstructuredList; edit the comment above DiscoverNestedManifest
and reference the helper GetLatestGenerationFromList and the returned type
unstructured.UnstructuredList so callers aren’t confused.
🧹 Nitpick comments (3)
charts/examples/maestro/adapter-task-config.yaml (1)

77-80: Clean up outdated/duplicate comments about ManifestWork.

The repeated “ManifestWork” comments are duplicated and now misleading in the manifest-based flow. Consider replacing with a short, accurate description of the manifest block.

internal/executor/resource_executor.go (1)

342-359: Consider logging when GVK resolution fails.

The resolveGVK function silently returns an empty GroupVersionKind on failure. While this may be intentional for graceful degradation, a debug log could help troubleshoot discovery issues.

🔧 Optional: Add debug logging for GVK resolution failures
 func (re *ResourceExecutor) resolveGVK(resource config_loader.Resource) schema.GroupVersionKind {
 	manifestData, ok := resource.Manifest.(map[string]interface{})
 	if !ok {
+		re.log.Debugf(context.Background(), "Resource[%s] manifest is not a map, cannot resolve GVK", resource.Name)
 		return schema.GroupVersionKind{}
 	}
 	apiVersion, ok1 := manifestData["apiVersion"].(string)
 	kind, ok2 := manifestData["kind"].(string)
 	if !ok1 || !ok2 {
+		re.log.Debugf(context.Background(), "Resource[%s] manifest missing apiVersion or kind", resource.Name)
 		return schema.GroupVersionKind{}
 	}
 	gv, err := schema.ParseGroupVersion(apiVersion)
 	if err != nil {
+		re.log.Debugf(context.Background(), "Resource[%s] invalid apiVersion %q: %v", resource.Name, apiVersion, err)
 		return schema.GroupVersionKind{}
 	}
 	return gv.WithKind(kind)
 }
internal/maestro_client/client.go (1)

437-483: ApplyOptions are currently ignored for Maestro.

Line 443 accepts opts but RecreateOnChange is never used, so behavior diverges from the k8s client. If parity is intended, consider threading opts through ApplyManifestWork or explicitly documenting that it’s unsupported for Maestro.

Comment on lines +94 to +103

      nestedDiscoveries:
        - name: "mgmtNamespace"
          discovery:
            # The "namespace" field within discovery is optional:
            # - For namespaced resources: set namespace to target the specific namespace
            # - For cluster-scoped resources (like Namespace, ClusterRole): omit or leave empty
            # Here we omit it since Namespace is cluster-scoped
            bySelectors:
              labelSelector:
                hyperfleet.io/resource-type: "namespace"
                hyperfleet.io/cluster-id: "{{ .clusterId }}"
                hyperfleet.io/managed-by: "{{ .metadata.name }}"
        - name: "mgmtConfigMap"
          discovery:
            byName: "{{ .clusterId }}"

⚠️ Potential issue | 🟠 Major

Align nested discovery names with post-processing references.

nestedDiscoveries defines mgmtNamespace, but post-processing still reads resources.?agentNamespaceManifestWork.agentNamespace... and resources.?clusterNamespace.... As written, those paths will resolve to empty values and keep conditions false.

Pick one naming scheme and use it consistently across nestedDiscoveries, post.conditions, and post.data.

🔧 Example fix (update post-processing to match `mgmtNamespace`)
# Applied / Available conditions
resources.?agentNamespaceManifestWork.mgmtNamespace.?status.?phase.orValue("")

# Data extraction
resources.?agentNamespaceManifestWork.mgmtNamespace.?metadata.?name.orValue("")
resources.?agentNamespaceManifestWork.mgmtNamespace.?status.?phase.orValue("")

Comment on lines +23 to +51

// ApplyResource implements transport_client.TransportClient.
// It accepts rendered JSON/YAML bytes, parses them into an unstructured K8s resource,
// discovers the existing resource by name, and applies with generation comparison.
func (c *Client) ApplyResource(
	ctx context.Context,
	manifestBytes []byte,
	opts *transport_client.ApplyOptions,
	_ transport_client.TransportContext,
) (*transport_client.ApplyResult, error) {
	if len(manifestBytes) == 0 {
		return nil, fmt.Errorf("manifest bytes cannot be empty")
	}

	// Parse bytes into unstructured
	obj, err := parseToUnstructured(manifestBytes)
	if err != nil {
		return nil, fmt.Errorf("failed to parse manifest: %w", err)
	}

	// Discover existing resource by name
	gvk := obj.GroupVersionKind()
	existing, err := c.GetResource(ctx, gvk, obj.GetNamespace(), obj.GetName(), nil)
	if err != nil && !apierrors.IsNotFound(err) {
		return nil, fmt.Errorf("failed to get existing resource %s/%s: %w", gvk.Kind, obj.GetName(), err)
	}

	// Apply with generation comparison
	return c.ApplyManifest(ctx, obj, existing, opts)
}

⚠️ Potential issue | 🟡 Minor

Validate apiVersion/kind/name before discovery.

Line 43 uses GVK and name for lookup; if the manifest is missing apiVersion/kind/metadata.name, this will fail later with less-clear errors. Consider failing fast right after parsing.

Suggested fix
 	// Parse bytes into unstructured
 	obj, err := parseToUnstructured(manifestBytes)
 	if err != nil {
 		return nil, fmt.Errorf("failed to parse manifest: %w", err)
 	}
+	if obj.GetAPIVersion() == "" || obj.GetKind() == "" {
+		return nil, fmt.Errorf("manifest must include apiVersion and kind")
+	}
+	if obj.GetName() == "" {
+		return nil, fmt.Errorf("manifest metadata.name is required for apply")
+	}
🤖 Prompt for AI Agents
In `@internal/k8s_client/apply.go` around lines 23 - 51, In ApplyResource, after
parseToUnstructured returns obj, validate that obj.GroupVersionKind() has a
non-empty Kind and non-empty APIVersion (i.e. check gvk.Kind and
gvk.Empty()/gvk.Version/Group as appropriate) and that obj.GetName() is
non-empty (you can allow empty namespace via obj.GetNamespace()); if any are
missing return a clear error like "manifest missing
apiVersion/kind/metadata.name" before calling GetResource or ApplyManifest.
Update the logic in Client.ApplyResource to perform these checks (referencing
parseToUnstructured, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(),
GetResource, and ApplyManifest) and fail fast with a descriptive error when
fields are absent.

Comment on lines +20 to 21
// This provides: ApplyResource([]byte), GetResource, DiscoverResources
transport_client.TransportClient

⚠️ Potential issue | 🟡 Minor

Update embedded interface comment to match the new ApplyResource signature.
The embedded TransportClient no longer exposes an ApplyResource that takes only []byte.

✏️ Suggested comment update
-	// This provides: ApplyResource([]byte), GetResource, DiscoverResources
+	// This provides: ApplyResource(ctx, manifest, opts, target), GetResource, DiscoverResources
🤖 Prompt for AI Agents
In `@internal/k8s_client/interface.go` around lines 20 - 21, Update the comment
above the embedded transport_client.TransportClient to stop claiming
ApplyResource([]byte) is the only signature — note that ApplyResource on
TransportClient now has a different/extended signature and is not limited to a
single []byte parameter; update the text to accurately describe the exposed API
(e.g., list ApplyResource with its new signature or omit the parameter detail)
along with GetResource and DiscoverResources so the comment matches the current
transport_client.TransportClient methods.

Comment on lines +598 to 618
 // parseManifestWork parses JSON or YAML bytes into a ManifestWork object.
 func parseManifestWork(data []byte) (*workv1.ManifestWork, error) {
 	work := &workv1.ManifestWork{}
 
-	// Convert each resource to a Manifest
-	manifests := make([]workv1.Manifest, 0, len(resources))
-	for _, r := range resources {
-		if r.Manifest == nil {
-			continue // Skip resources with no manifest. It means manifests already defined in the manifestWork template
-		}
-		raw, err := json.Marshal(r.Manifest.Object)
-		if err != nil {
-			return nil, fmt.Errorf("failed to marshal manifest %s: %w", r.Name, err)
-		}
-		manifests = append(manifests, workv1.Manifest{
-			RawExtension: runtime.RawExtension{Raw: raw},
-		})
-	}
+	// Try JSON first
+	if err := json.Unmarshal(data, work); err == nil && work.Name != "" {
+		return work, nil
+	}
 
-	// Replace the template's manifests with actual resources (only if there are any)
-	if len(manifests) > 0 {
-		work.Spec.Workload.Manifests = manifests
-	}
-	// Otherwise, use the template's inline manifests as-is
+	// Fall back to YAML
+	jsonData, err := yaml.YAMLToJSON(data)
+	if err != nil {
+		return nil, fmt.Errorf("failed to convert YAML to JSON: %w", err)
+	}
+
+	if err := json.Unmarshal(jsonData, work); err != nil {
+		return nil, fmt.Errorf("failed to parse ManifestWork: %w", err)
+	}
 
 	return work, nil
 }

⚠️ Potential issue | 🟡 Minor

Validate ManifestWork name consistently after parse.

Line 603 enforces work.Name only on the JSON path; the YAML fallback returns even if name is empty, which can surface as harder-to-debug apply errors. Consider enforcing metadata.name after parsing.

Suggested fix
 	if err := json.Unmarshal(jsonData, work); err != nil {
 		return nil, fmt.Errorf("failed to parse ManifestWork: %w", err)
 	}
+	if work.Name == "" {
+		return nil, fmt.Errorf("manifestwork metadata.name is required")
+	}
 
 	return work, nil
🤖 Prompt for AI Agents
In `@internal/maestro_client/client.go` around lines 598 - 618, The
parseManifestWork function currently validates work.Name only on the JSON path;
update parseManifestWork to enforce that the parsed ManifestWork has a non-empty
metadata.name in both paths by checking work.Name after successful JSON
unmarshal and again after the YAML->JSON unmarshal/Unmarshal, returning a clear
error (e.g., "manifestwork missing metadata.name") if it's empty; reference the
function parseManifestWork and the work.Name field to locate where to add the
post-parse validation and error return so both JSON and YAML parsing behave
consistently.
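The try-JSON-then-YAML pattern under review can be illustrated with a stdlib-only sketch. `miniWork` is a hypothetical stand-in for `workv1.ManifestWork`, and the YAML branch is stubbed out because `sigs.k8s.io/yaml` is not in the standard library; the point is the fallback ordering plus the post-parse name check the reviewer asks for:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// miniWork mirrors only the field the sketch needs from ManifestWork.
type miniWork struct {
	Metadata struct {
		Name string `json:"name"`
	} `json:"metadata"`
}

// parseWork tries JSON first; the real code would then fall back to
// yaml.YAMLToJSON and re-unmarshal. Either way it enforces
// metadata.name after parsing, as the review suggests.
func parseWork(data []byte) (*miniWork, error) {
	w := &miniWork{}
	if err := json.Unmarshal(data, w); err != nil {
		// Real code: jsonData, err := yaml.YAMLToJSON(data); then json.Unmarshal again.
		return nil, fmt.Errorf("failed to parse ManifestWork (YAML fallback omitted): %w", err)
	}
	if w.Metadata.Name == "" {
		return nil, fmt.Errorf("manifestwork metadata.name is required")
	}
	return w, nil
}

func main() {
	w, err := parseWork([]byte(`{"metadata":{"name":"demo-work"}}`))
	fmt.Println(w.Metadata.Name, err)
}
```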

Comment on lines 396 to +399
 // Returns:
 //   - List of matching manifests as unstructured objects
 //   - The manifest with the highest generation if multiple match
-func DiscoverInManifestWork(work *workv1.ManifestWork, discovery Discovery) (*unstructured.UnstructuredList, error) {
+func DiscoverNestedManifest(parent *unstructured.Unstructured, discovery Discovery) (*unstructured.UnstructuredList, error) {

⚠️ Potential issue | 🟡 Minor

Docstring mentions a “highest generation” return that the function doesn’t provide.
Consider trimming the comment or pointing callers to GetLatestGenerationFromList to avoid confusion.

✏️ Suggested comment tweak
-//   - List of matching manifests as unstructured objects
-//   - The manifest with the highest generation if multiple match
+//   - List of matching manifests as unstructured objects
+//   - Use GetLatestGenerationFromList if you need the newest match
🤖 Prompt for AI Agents
In `@internal/manifest/generation.go` around lines 396 - 399, The docstring for
DiscoverNestedManifest incorrectly claims it returns "the manifest with the
highest generation" but the function signature only returns
(*unstructured.UnstructuredList, error); update the comment to remove the
misleading return claim and either mention that callers can use
GetLatestGenerationFromList to pick the highest-generation item or add guidance
to use GetLatestGenerationFromList with the returned
*unstructured.UnstructuredList; edit the comment above DiscoverNestedManifest
and reference the helper GetLatestGenerationFromList and the returned type
unstructured.UnstructuredList so callers aren’t confused.
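The nested-discovery idea can be sketched without the apimachinery types. Here the parent is a plain map standing in for unstructured content; the `spec.workload.manifests` path and the kind-only match are illustrative assumptions, since the real `DiscoverNestedManifest` takes its matching rules from the `Discovery` config:

```go
package main

import "fmt"

// discoverNested scans a parent object's spec.workload.manifests (as
// generic maps, mimicking unstructured content) and returns the entries
// whose kind matches. Field paths here are illustrative only.
func discoverNested(parent map[string]interface{}, kind string) []map[string]interface{} {
	var out []map[string]interface{}
	spec, _ := parent["spec"].(map[string]interface{})
	workload, _ := spec["workload"].(map[string]interface{})
	manifests, _ := workload["manifests"].([]interface{})
	for _, m := range manifests {
		if obj, ok := m.(map[string]interface{}); ok && obj["kind"] == kind {
			out = append(out, obj)
		}
	}
	return out
}

func main() {
	parent := map[string]interface{}{
		"spec": map[string]interface{}{
			"workload": map[string]interface{}{
				"manifests": []interface{}{
					map[string]interface{}{"kind": "Deployment", "metadata": map[string]interface{}{"name": "web"}},
					map[string]interface{}{"kind": "Service"},
				},
			},
		},
	}
	fmt.Println(len(discoverNested(parent, "Deployment")))
}
```

Picking the newest of several matches is left to a helper like the GetLatestGenerationFromList mentioned in the review.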

@rh-amarin
Copy link
Contributor

/lgtm

@openshift-ci
Copy link

openshift-ci bot commented Feb 12, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rh-amarin

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-bot openshift-merge-bot bot merged commit ae7e6d0 into openshift-hyperfleet:main Feb 12, 2026
7 checks passed