feat: restructured adapter config file #46
Conversation
Walkthrough

This PR moves manifest handling from a per-transport/batched ManifestWork model to a single-manifest render-and-apply flow. Resource.Manifest is promoted to a top-level field and MaestroTransportConfig.ManifestWork is removed. NestedDiscoveries are added to support sub-resource discovery inside applied manifests. The executor now renders manifests to JSON bytes and calls a unified TransportClient ApplyResource, while k8s and maestro clients parse those bytes (to unstructured or ManifestWork) and apply via new ApplyManifest/ApplyResource paths. Config loader, validator, types, and tests updated to reflect the manifest/nested-discovery model and API signature changes. Several example chart YAMLs demonstrating the new manifest and nested discovery usage are added.

Sequence Diagram(s)

sequenceDiagram
participant Executor
participant ResourceExecutor
participant TransportClient
participant K8sClient
participant MaestroClient
Executor->>ResourceExecutor: Start resource processing (template + params)
ResourceExecutor->>ResourceExecutor: renderToBytes (render templates → JSON bytes)
ResourceExecutor->>TransportClient: ApplyResource(ctx, manifestBytes, opts, target)
alt Kubernetes target
TransportClient->>K8sClient: ApplyResource(manifestBytes, opts, target)
K8sClient->>K8sClient: parseToUnstructured(manifestBytes)
K8sClient->>K8sClient: ApplyManifest(newManifest, existing, opts)
K8sClient-->>TransportClient: ApplyResult(operation, reason)
else Maestro target
TransportClient->>MaestroClient: ApplyResource(manifestBytes, opts, target)
MaestroClient->>MaestroClient: parseManifestWork(manifestBytes)
MaestroClient->>MaestroClient: ApplyManifestWork(parsedWork, consumer)
MaestroClient-->>TransportClient: ApplyResult(operation, reason)
end
TransportClient-->>ResourceExecutor: ApplyResult
ResourceExecutor->>ResourceExecutor: post-apply discovery (including NestedDiscoveries)
ResourceExecutor->>Executor: store results in ExecutionContext (resources/nested)
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
🚥 Pre-merge checks: ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
internal/config_loader/validator.go (1)
381-390: ⚠️ Potential issue | 🟠 Major

Manifest template validation is still running for maestro.

The note says maestro manifests skip template-variable validation, but the code still validates resource.Manifest, which will reject runtime-only vars (e.g., adapterName/timestamp). Guard this by transport to match the intent.

💡 Suggested fix

```diff
-	if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
-		v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
-	}
-	// NOTE: For maestro transport, we skip template variable validation for manifest content.
+	// NOTE: For maestro transport, we skip template variable validation for manifest content.
+	if !resource.IsMaestroTransport() {
+		if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
+			v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
+		}
+	}
```

internal/executor/resource_executor.go (1)
139-173: ⚠️ Potential issue | 🟠 Major

GVK resolution can be empty for YAML-typed manifests.

resolveGVK only handles map[string]interface{}. If manifests are unmarshaled as map[interface{}]interface{} (common for YAML), resolveGVK returns an empty GVK, causing GetResource/DiscoverResources to fail. Normalize the manifest the same way renderToBytes does (or derive GVK from the rendered manifest).

🔧 Suggested fix

```diff
 func (re *ResourceExecutor) resolveGVK(resource config_loader.Resource) schema.GroupVersionKind {
-	manifestData, ok := resource.Manifest.(map[string]interface{})
-	if !ok {
-		return schema.GroupVersionKind{}
-	}
+	var manifestData map[string]interface{}
+	switch m := resource.Manifest.(type) {
+	case map[string]interface{}:
+		manifestData = m
+	case map[interface{}]interface{}:
+		manifestData = convertToStringKeyMap(m)
+	default:
+		return schema.GroupVersionKind{}
+	}

 	apiVersion, ok1 := manifestData["apiVersion"].(string)
 	kind, ok2 := manifestData["kind"].(string)
 	if !ok1 || !ok2 {
 		return schema.GroupVersionKind{}
 	}
```

Also applies to: 191-229, 243-259
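The suggested fix references a `convertToStringKeyMap` helper; whether such a helper already exists in the codebase is an assumption. A minimal sketch of the normalization it would need to do:

```go
package main

import "fmt"

// convertToStringKeyMap (hypothetical helper) recursively converts
// map[interface{}]interface{} — as produced by some YAML decoders — into
// map[string]interface{} so fields like apiVersion/kind can be read.
func convertToStringKeyMap(in map[interface{}]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(in))
	for k, v := range in {
		key := fmt.Sprintf("%v", k)
		switch vv := v.(type) {
		case map[interface{}]interface{}:
			out[key] = convertToStringKeyMap(vv) // normalize nested maps too
		default:
			out[key] = v
		}
	}
	return out
}

func main() {
	m := map[interface{}]interface{}{
		"apiVersion": "v1",
		"kind":       "Namespace",
		"metadata":   map[interface{}]interface{}{"name": "demo"},
	}
	s := convertToStringKeyMap(m)
	fmt.Println(s["apiVersion"], s["kind"])
}
```

A production version would also walk `[]interface{}` values; this sketch only covers the map shape needed for GVK lookup.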
internal/maestro_client/client.go (1)
590-598: ⚠️ Potential issue | 🟡 Minor

The determineOperation logic is unreliable and may misreport the actual operation.

ApplyManifestWork determines the operation using CompareGenerations() and executes the correct action (Create, Patch, or Skip), but only returns the resulting ManifestWork object. The determineOperation function then tries to infer which operation occurred by checking whether ResourceVersion is empty; this fails because both Create and Patch operations result in a work with ResourceVersion set, making them indistinguishable.

The operation decision computed internally in ApplyManifestWork should be returned and used directly instead of being inferred from the work's state. Either modify ApplyManifestWork to return both the work and the operation, or pass the operation decision to determineOperation so it doesn't need to infer it.
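The "return the decision, don't infer it" suggestion can be sketched with hypothetical simplified types (the real code operates on workv1.ManifestWork and CompareGenerations; this stand-in only models the decision):

```go
package main

import "fmt"

type Operation string

const (
	OpCreate Operation = "Create"
	OpPatch  Operation = "Patch"
	OpSkip   Operation = "Skip"
)

// applyManifestWork sketches the suggested shape: the generation comparison
// decides the operation, and that decision is returned alongside the result
// instead of being reconstructed later from ResourceVersion.
func applyManifestWork(existingGeneration, desiredGeneration int64, exists bool) Operation {
	switch {
	case !exists:
		return OpCreate // nothing on the cluster yet
	case desiredGeneration > existingGeneration:
		return OpPatch // newer generation wins
	default:
		return OpSkip // up to date, nothing to do
	}
}

func main() {
	fmt.Println(applyManifestWork(0, 1, false))
	fmt.Println(applyManifestWork(1, 2, true))
	fmt.Println(applyManifestWork(2, 2, true))
}
```

Because Create and Patch both leave ResourceVersion populated, only the decision point itself can distinguish them, which is why threading the value through is more robust than post-hoc inference.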
🤖 Fix all issues with AI agents
In `@charts/examples/maestro-kubernetes/adapter-task-config.yaml`:
- Around line 88-105: The discovery names and nested discovery identifiers must
match the ManifestWork template and CEL paths: change discovery.byName from "{{
.clusterId }}" to "hyperfleet-cluster-setup-{{ .clusterId }}"; change the nested
discovery named "mgmtConfigMap" to use byName: "cluster-config" (instead of "{{
.clusterId }}"); rename or add the nested discovery for the namespace so its
name matches the CEL/ManifestWork usage (e.g., use "clusterNamespace" or
"agentNamespace" instead of "mgmtNamespace") so CEL paths refer to an actual
nestedDiscoveries entry; update any other occurrences (lines ~133-155, ~178-184)
to the same names to keep discovery.byName, nestedDiscoveries.*.name, and
nestedDiscoveries.*.discovery.byName/bySelectors consistent with the
ManifestWork resource names and CEL variables.
In `@charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml`:
- Around line 75-91: The ConfigMap's metadata.namespace uses "{{ .clusterId }}"
while the Namespace resource uses "{{ .clusterId | lower }}", causing a
mismatch; update the ConfigMap manifest so metadata.name "cluster-config" and
metadata.namespace use the same normalized value as the Namespace (apply the
same "| lower" filter to ".clusterId" in the ConfigMap's metadata.namespace),
and verify any other manifests (e.g. resources referencing metadata.namespace)
also use the same normalized ".clusterId | lower" symbol to avoid landing
resources in a non-existent namespace.
In `@internal/executor/resource_executor.go`:
- Around line 123-131: The discovered parent resource is stored into
execCtx.Resources but its NestedDiscoveries are not, so CEL lookups like
resources.<parent>.<nested> remain empty; after re.discoverResource returns a
non-nil discovered object, iterate its NestedDiscoveries (or use
re.discoverResource recursively for each nested entry), attach each nested
discovery onto the parent discovered object (or populate execCtx.Resources with
keys for parent and parent.<nested>), then save the augmented parent into
execCtx.Resources[resource.Name]; update logic around resource.Discovery,
discovered, and execCtx.Resources to ensure nested discoveries are populated
before storing.
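The nested-discovery fix above can be sketched with hypothetical stand-in types (the real executor works with config_loader.Resource, unstructured objects, and the ExecutionContext; names here are illustrative):

```go
package main

import "fmt"

// nestedDiscovery is a hypothetical stand-in for a NestedDiscoveries entry.
type nestedDiscovery struct{ Name string }

// storeWithNested sketches the suggested fix: after the parent is discovered,
// run each nested discovery and attach the result under the parent before
// saving it, so lookups like resources.<parent>.<nested> resolve.
func storeWithNested(
	resources map[string]map[string]interface{},
	parentName string,
	discovered map[string]interface{},
	nested []nestedDiscovery,
	discover func(nestedDiscovery) interface{},
) {
	for _, n := range nested {
		discovered[n.Name] = discover(n) // augment parent with nested results
	}
	resources[parentName] = discovered // store only after augmentation
}

func main() {
	resources := map[string]map[string]interface{}{}
	parent := map[string]interface{}{"kind": "ManifestWork"}
	storeWithNested(resources, "agentNamespaceManifestWork", parent,
		[]nestedDiscovery{{Name: "mgmtNamespace"}},
		func(n nestedDiscovery) interface{} {
			return map[string]interface{}{"discoveredBy": n.Name}
		})
	fmt.Println(resources["agentNamespaceManifestWork"]["mgmtNamespace"] != nil)
}
```

The key ordering constraint is the last line of the helper: the parent enters `resources` only after its nested discoveries are populated, which is exactly what the prompt asks for.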
🧹 Nitpick comments (4)
charts/examples/maestro/adapter-task-config.yaml (2)
77-91: Clean up duplicate and incomplete comments.

Several comment issues in this section:

- Lines 80-81: Duplicate sentence "It is created by the adapter and can be used to create resources on the cluster."
- Line 88: Incomplete sentence "When try to dig Manifest" - appears truncated.

📝 Suggested comment cleanup

```diff
-  # ManifestWork is a kind of manifest that can be used to create resources on the cluster.
-  # It is a collection of resources that are created together.
-  # It is created by the adapter and can be used to create resources on the cluster.
-  # It is created by the adapter and can be used to create resources on the cluster.
+  # ManifestWork is a manifest that bundles multiple resources to be created together on the cluster.
   manifest:
     ref: "/etc/adapter/manifestwork.yaml"
   # Discover the manifestWork by name
-  # This is used to get the manifestWork object from the cluster.
-  # With this approach, the manifestWork object can be used to get the resources inside the manifestWork.
-  # When try to dig Manifest
+  # Used to retrieve the manifestWork object and access its contained resources.
   discovery:
     byName: "{{ .clusterId }}"
```
92-104: Clarify the nestedDiscoveries comment.

The comment on lines 93-94 is unclear: "This approach can be used to use the discovery name to parameter level". Consider rewording to explain the purpose more clearly.

📝 Suggested clarification

```diff
   # Discover sub-resources within the manifestWork
-  # This approach can be used to use the discovery name to parameter level
-  # This can support jsonPath to dig into the resource status. like mgmtNamespace.status.conditions[?(@.type=="Ready")].status
+  # Each nested discovery can be referenced by name in post-processing expressions.
+  # Supports jsonPath to access resource status, e.g., mgmtNamespace.status.conditions[?(@.type=="Ready")].status
   nestedDiscoveries:
```

internal/k8s_client/apply.go (1)
195-215: Consider validating the parsed object has required fields.

The parseToUnstructured function successfully parses JSON/YAML but doesn't validate that the result contains required Kubernetes fields (apiVersion, kind, metadata). A malformed manifest could pass parsing but fail later with unclear errors.

🔧 Optional: Add validation for required fields

```diff
 func parseToUnstructured(data []byte) (*unstructured.Unstructured, error) {
 	obj := &unstructured.Unstructured{}

 	// Try JSON first
 	if err := json.Unmarshal(data, &obj.Object); err == nil && obj.Object != nil {
+		if err := validateUnstructured(obj); err != nil {
+			return nil, err
+		}
 		return obj, nil
 	}

 	// Fall back to YAML → JSON → unstructured
 	jsonData, err := yaml.YAMLToJSON(data)
 	if err != nil {
 		return nil, fmt.Errorf("failed to convert YAML to JSON: %w", err)
 	}
 	if err := json.Unmarshal(jsonData, &obj.Object); err != nil {
 		return nil, fmt.Errorf("failed to parse manifest: %w", err)
 	}
+	if err := validateUnstructured(obj); err != nil {
+		return nil, err
+	}
 	return obj, nil
 }
+
+func validateUnstructured(obj *unstructured.Unstructured) error {
+	if obj.GetAPIVersion() == "" {
+		return fmt.Errorf("manifest missing required field: apiVersion")
+	}
+	if obj.GetKind() == "" {
+		return fmt.Errorf("manifest missing required field: kind")
+	}
+	return nil
+}
```

internal/maestro_client/client.go (1)
567-587: JSON success check may pass for partial/invalid ManifestWorks.

The check work.Name != "" after JSON unmarshal is a reasonable heuristic, but it could pass for a valid JSON object that isn't actually a ManifestWork (e.g., a ConfigMap with a name field in metadata). This is similar to the k8s_client parsing concern.

🔧 Optional: Add Kind/APIVersion validation

```diff
 func parseManifestWork(data []byte) (*workv1.ManifestWork, error) {
 	work := &workv1.ManifestWork{}

 	// Try JSON first
-	if err := json.Unmarshal(data, work); err == nil && work.Name != "" {
+	if err := json.Unmarshal(data, work); err == nil && isValidManifestWork(work) {
 		return work, nil
 	}

 	// Fall back to YAML
 	jsonData, err := yaml.YAMLToJSON(data)
 	if err != nil {
 		return nil, fmt.Errorf("failed to convert YAML to JSON: %w", err)
 	}
 	if err := json.Unmarshal(jsonData, work); err != nil {
 		return nil, fmt.Errorf("failed to parse ManifestWork: %w", err)
 	}
+	if !isValidManifestWork(work) {
+		return nil, fmt.Errorf("parsed object is not a valid ManifestWork")
+	}
 	return work, nil
 }
+
+func isValidManifestWork(work *workv1.ManifestWork) bool {
+	return work.Name != "" && work.Kind == "ManifestWork"
+}
```
```yaml
  # When try to dig Manifest
  discovery:
    byName: "{{ .clusterId }}"

  # Discover sub-resources within the manifestWork
  # This approach can be used to use the discovery name to parameter level
  # This can support jsonPath to dig into the resource status. like mgmtNamespace.status.conditions[?(@.type=="Ready")].status
  nestedDiscoveries:
    - name: "mgmtNamespace"
      discovery:
        bySelectors:
          labelSelector:
            hyperfleet.io/resource-type: "namespace"
            hyperfleet.io/cluster-id: "{{ .clusterId }}"
    - name: "mgmtConfigMap"
      discovery:
        byName: "{{ .clusterId }}"
```
Align discovery names with the ManifestWork template and CEL paths.

There are several naming mismatches that will make discovery/CEL lookups fail:

- discovery.byName uses {{ .clusterId }}, but the ManifestWork template name is hyperfleet-cluster-setup-{{ .clusterId }}.
- mgmtConfigMap uses byName: "{{ .clusterId }}", but the template creates cluster-config.
- CEL references agentNamespace and clusterNamespace, which aren’t declared (resource is namespace, nested is mgmtNamespace).
🛠️ Example alignment
discovery:
- byName: "{{ .clusterId }}"
+ byName: "hyperfleet-cluster-setup-{{ .clusterId }}"
nestedDiscoveries:
- - name: "mgmtNamespace"
+ - name: "agentNamespace" # or update CEL to use mgmtNamespace
discovery:
bySelectors:
labelSelector:
hyperfleet.io/resource-type: "namespace"
hyperfleet.io/cluster-id: "{{ .clusterId }}"
- name: "mgmtConfigMap"
discovery:
- byName: "{{ .clusterId }}"
+ byName: "cluster-config"
data:
namespace:
name:
expression: |
- resources.?clusterNamespace.?metadata.?name.orValue("")
+  resources.?namespace.?metadata.?name.orValue("")

Also applies to: 133-155, 178-184
```yaml
  - apiVersion: v1
    kind: Namespace
    metadata:
      name: "{{ .clusterId | lower }}"
      labels:
        hyperfleet.io/cluster-id: "{{ .clusterId }}"
        hyperfleet.io/managed-by: "{{ .metadata.name }}"
        hyperfleet.io/resource-type: "namespace"
      annotations:
        hyperfleet.io/created-by: "hyperfleet-adapter"
        hyperfleet.io/generation: "{{ .generationId }}"
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: "cluster-config"
      namespace: "{{ .clusterId }}"
      labels:
```
Namespace normalization mismatch for the ConfigMap.
The Namespace is created with {{ .clusterId | lower }}, but the ConfigMap targets {{ .clusterId }}. If clusterId contains uppercase/invalid chars, the ConfigMap will land in a non-existent namespace.
🔧 Suggested fix
- apiVersion: v1
kind: ConfigMap
metadata:
name: "cluster-config"
+      namespace: "{{ .clusterId | lower }}"
+ namespace: "{{ .clusterId | lower }}"🤖 Prompt for AI Agents
In `@charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml`
around lines 75 - 91, The ConfigMap's metadata.namespace uses "{{ .clusterId }}"
while the Namespace resource uses "{{ .clusterId | lower }}", causing a
mismatch; update the ConfigMap manifest so metadata.name "cluster-config" and
metadata.namespace use the same normalized value as the Namespace (apply the
same "| lower" filter to ".clusterId" in the ConfigMap's metadata.namespace),
and verify any other manifests (e.g. resources referencing metadata.namespace)
also use the same normalized ".clusterId | lower" symbol to avoid landing
resources in a non-existent namespace.
Force-pushed from ef7b16d to d7a0b36
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/config_loader/validator.go (1)
381-390: ⚠️ Potential issue | 🟠 Major

Maestro manifest template validation still runs despite the skip note.

As written, manifest maps are always validated before the note, so inline ManifestWork templates with runtime-only variables will fail validation.

🛠️ Suggested guard to skip maestro manifest template validation

```diff
-	if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
-		v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
-	}
-	// NOTE: For maestro transport, we skip template variable validation for manifest content.
-	// ManifestWork templates may use variables provided at runtime by the framework
-	// (e.g., adapterName, timestamp) that are not necessarily declared in params or captures.
+	// NOTE: For maestro transport, we skip template variable validation for manifest content.
+	// ManifestWork templates may use variables provided at runtime by the framework
+	// (e.g., adapterName, timestamp) that are not necessarily declared in params or captures.
+	if !resource.IsMaestroTransport() {
+		if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
+			v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
+		}
+	}
```
🤖 Fix all issues with AI agents
In `@charts/examples/maestro/adapter-task-config.yaml`:
- Around line 94-103: The nested discovery name "mgmtNamespace" in
nestedDiscoveries is inconsistent with the post-processing paths (which still
reference resources.?agentNamespaceManifestWork.agentNamespace... and
resources.?agentNamespaceManifestWork.clusterNamespace...), causing those
lookups to be empty; update the post.conditions and post.data entries to
reference the discovery name mgmtNamespace (e.g., change any occurrences of
agentNamespace or clusterNamespace under resources.?agentNamespaceManifestWork
to mgmtNamespace) so post-processing uses
resources.?agentNamespaceManifestWork.mgmtNamespace for status and metadata
lookups, and ensure all references to nestedDiscoveries, post.conditions, and
post.data consistently use the same name.
In `@internal/k8s_client/apply.go`:
- Around line 23-51: In ApplyResource, after parseToUnstructured returns obj,
validate that obj.GroupVersionKind() has a non-empty Kind and non-empty
APIVersion (i.e. check gvk.Kind and gvk.Empty()/gvk.Version/Group as
appropriate) and that obj.GetName() is non-empty (you can allow empty namespace
via obj.GetNamespace()); if any are missing return a clear error like "manifest
missing apiVersion/kind/metadata.name" before calling GetResource or
ApplyManifest. Update the logic in Client.ApplyResource to perform these checks
(referencing parseToUnstructured, obj.GroupVersionKind(), obj.GetName(),
obj.GetNamespace(), GetResource, and ApplyManifest) and fail fast with a
descriptive error when fields are absent.
In `@internal/k8s_client/interface.go`:
- Around line 20-21: Update the comment above the embedded
transport_client.TransportClient to stop claiming ApplyResource([]byte) is the
only signature — note that ApplyResource on TransportClient now has a
different/extended signature and is not limited to a single []byte parameter;
update the text to accurately describe the exposed API (e.g., list ApplyResource
with its new signature or omit the parameter detail) along with GetResource and
DiscoverResources so the comment matches the current
transport_client.TransportClient methods.
In `@internal/maestro_client/client.go`:
- Around line 598-618: The parseManifestWork function currently validates
work.Name only on the JSON path; update parseManifestWork to enforce that the
parsed ManifestWork has a non-empty metadata.name in both paths by checking
work.Name after successful JSON unmarshal and again after the YAML->JSON
unmarshal/Unmarshal, returning a clear error (e.g., "manifestwork missing
metadata.name") if it's empty; reference the function parseManifestWork and the
work.Name field to locate where to add the post-parse validation and error
return so both JSON and YAML parsing behave consistently.
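The suggested "validate metadata.name on every parse path" change can be sketched with a hypothetical stand-in struct (the real function uses workv1.ManifestWork and also has a YAML fallback path; the same post-parse check would apply there):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// manifestWork is a minimal stand-in for the fields the check needs;
// the real type comes from open-cluster-management's work/v1 API.
type manifestWork struct {
	Kind     string `json:"kind"`
	Metadata struct {
		Name string `json:"name"`
	} `json:"metadata"`
}

// parseManifestWork sketches the suggested behavior: whatever path produced
// the object, reject it if metadata.name is empty.
func parseManifestWork(data []byte) (*manifestWork, error) {
	w := &manifestWork{}
	if err := json.Unmarshal(data, w); err != nil {
		return nil, fmt.Errorf("failed to parse ManifestWork: %w", err)
	}
	// Shared post-parse validation (the real code would run this after
	// the YAML fallback as well, so both paths behave consistently).
	if w.Metadata.Name == "" {
		return nil, fmt.Errorf("manifestwork missing metadata.name")
	}
	return w, nil
}

func main() {
	_, err := parseManifestWork([]byte(`{"kind":"ManifestWork","metadata":{}}`))
	fmt.Println(err != nil)
	w, _ := parseManifestWork([]byte(`{"kind":"ManifestWork","metadata":{"name":"demo"}}`))
	fmt.Println(w.Metadata.Name)
}
```

Factoring the check into one place after both unmarshal paths is what makes the JSON and YAML branches consistent.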
In `@internal/manifest/generation.go`:
- Around line 396-399: The docstring for DiscoverNestedManifest incorrectly
claims it returns "the manifest with the highest generation" but the function
signature only returns (*unstructured.UnstructuredList, error); update the
comment to remove the misleading return claim and either mention that callers
can use GetLatestGenerationFromList to pick the highest-generation item or add
guidance to use GetLatestGenerationFromList with the returned
*unstructured.UnstructuredList; edit the comment above DiscoverNestedManifest
and reference the helper GetLatestGenerationFromList and the returned type
unstructured.UnstructuredList so callers aren’t confused.
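The caller pattern the corrected docstring should point at — discover a list, then pick the highest-generation item with GetLatestGenerationFromList — can be sketched with a hypothetical simplified item type (the real helper operates on *unstructured.UnstructuredList):

```go
package main

import "fmt"

// item stands in for an unstructured object with metadata.generation.
type item struct {
	Name       string
	Generation int64
}

// getLatestGeneration sketches what GetLatestGenerationFromList is described
// as doing: select the entry with the highest generation from a discovery result.
func getLatestGeneration(items []item) (item, bool) {
	if len(items) == 0 {
		return item{}, false // nothing discovered
	}
	best := items[0]
	for _, it := range items[1:] {
		if it.Generation > best.Generation {
			best = it
		}
	}
	return best, true
}

func main() {
	latest, ok := getLatestGeneration([]item{{"a", 1}, {"b", 3}, {"c", 2}})
	fmt.Println(ok, latest.Name)
}
```

This separation (DiscoverNestedManifest returns the list; a helper picks the winner) is why the docstring should not claim the discovery function itself returns the highest-generation manifest.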
🧹 Nitpick comments (3)
charts/examples/maestro/adapter-task-config.yaml (1)
77-80: Clean up outdated/duplicate comments about ManifestWork.

The repeated "ManifestWork" comments are duplicated and now misleading in the manifest-based flow. Consider replacing with a short, accurate description of the manifest block.

internal/executor/resource_executor.go (1)
342-359: Consider logging when GVK resolution fails.

The resolveGVK function silently returns an empty GroupVersionKind on failure. While this may be intentional for graceful degradation, a debug log could help troubleshoot discovery issues.

🔧 Optional: Add debug logging for GVK resolution failures

```diff
 func (re *ResourceExecutor) resolveGVK(resource config_loader.Resource) schema.GroupVersionKind {
 	manifestData, ok := resource.Manifest.(map[string]interface{})
 	if !ok {
+		re.log.Debugf(context.Background(), "Resource[%s] manifest is not a map, cannot resolve GVK", resource.Name)
 		return schema.GroupVersionKind{}
 	}

 	apiVersion, ok1 := manifestData["apiVersion"].(string)
 	kind, ok2 := manifestData["kind"].(string)
 	if !ok1 || !ok2 {
+		re.log.Debugf(context.Background(), "Resource[%s] manifest missing apiVersion or kind", resource.Name)
 		return schema.GroupVersionKind{}
 	}

 	gv, err := schema.ParseGroupVersion(apiVersion)
 	if err != nil {
+		re.log.Debugf(context.Background(), "Resource[%s] invalid apiVersion %q: %v", resource.Name, apiVersion, err)
 		return schema.GroupVersionKind{}
 	}
 	return gv.WithKind(kind)
 }
```

internal/maestro_client/client.go (1)
437-483: ApplyOptions are currently ignored for Maestro.

Line 443 accepts opts, but RecreateOnChange is never used, so behavior diverges from the k8s client. If parity is intended, consider threading opts through ApplyManifestWork or explicitly documenting that it's unsupported for Maestro.
```yaml
  nestedDiscoveries:
    - name: "mgmtNamespace"
      discovery:
        # The "namespace" field within discovery is optional:
        # - For namespaced resources: set namespace to target the specific namespace
        # - For cluster-scoped resources (like Namespace, ClusterRole): omit or leave empty
        # Here we omit it since Namespace is cluster-scoped
        bySelectors:
          labelSelector:
            hyperfleet.io/resource-type: "namespace"
            hyperfleet.io/cluster-id: "{{ .clusterId }}"
            hyperfleet.io/managed-by: "{{ .metadata.name }}"
    - name: "mgmtConfigMap"
      discovery:
        byName: "{{ .clusterId }}"
```
Align nested discovery names with post-processing references.
nestedDiscoveries defines mgmtNamespace, but post-processing still reads resources.?agentNamespaceManifestWork.agentNamespace... and resources.?clusterNamespace.... As written, those paths will resolve to empty values and keep conditions false.
Pick one naming scheme and use it consistently across nestedDiscoveries, post.conditions, and post.data.
🔧 Example fix (update post-processing to match `mgmtNamespace`)
# Applied / Available conditions
resources.?agentNamespaceManifestWork.mgmtNamespace.?status.?phase.orValue("")
# Data extraction
resources.?agentNamespaceManifestWork.mgmtNamespace.?metadata.?name.orValue("")
resources.?agentNamespaceManifestWork.mgmtNamespace.?status.?phase.orValue("")🤖 Prompt for AI Agents
In `@charts/examples/maestro/adapter-task-config.yaml` around lines 94 - 103, The
nested discovery name "mgmtNamespace" in nestedDiscoveries is inconsistent with
the post-processing paths (which still reference
resources.?agentNamespaceManifestWork.agentNamespace... and
resources.?agentNamespaceManifestWork.clusterNamespace...), causing those
lookups to be empty; update the post.conditions and post.data entries to
reference the discovery name mgmtNamespace (e.g., change any occurrences of
agentNamespace or clusterNamespace under resources.?agentNamespaceManifestWork
to mgmtNamespace) so post-processing uses
resources.?agentNamespaceManifestWork.mgmtNamespace for status and metadata
lookups, and ensure all references to nestedDiscoveries, post.conditions, and
post.data consistently use the same name.
```go
// ApplyResource implements transport_client.TransportClient.
// It accepts rendered JSON/YAML bytes, parses them into an unstructured K8s resource,
// discovers the existing resource by name, and applies with generation comparison.
func (c *Client) ApplyResource(
	ctx context.Context,
	manifestBytes []byte,
	opts *transport_client.ApplyOptions,
	_ transport_client.TransportContext,
) (*transport_client.ApplyResult, error) {
	if len(manifestBytes) == 0 {
		return nil, fmt.Errorf("manifest bytes cannot be empty")
	}

	// Parse bytes into unstructured
	obj, err := parseToUnstructured(manifestBytes)
	if err != nil {
		return nil, fmt.Errorf("failed to parse manifest: %w", err)
	}

	// Discover existing resource by name
	gvk := obj.GroupVersionKind()
	existing, err := c.GetResource(ctx, gvk, obj.GetNamespace(), obj.GetName(), nil)
	if err != nil && !apierrors.IsNotFound(err) {
		return nil, fmt.Errorf("failed to get existing resource %s/%s: %w", gvk.Kind, obj.GetName(), err)
	}

	// Apply with generation comparison
	return c.ApplyManifest(ctx, obj, existing, opts)
}
```
Validate apiVersion/kind/name before discovery.
Line 43 uses GVK and name for lookup; if the manifest is missing apiVersion/kind/metadata.name, this will fail later with less-clear errors. Consider failing fast right after parsing.
Suggested fix
// Parse bytes into unstructured
obj, err := parseToUnstructured(manifestBytes)
if err != nil {
return nil, fmt.Errorf("failed to parse manifest: %w", err)
}
+ if obj.GetAPIVersion() == "" || obj.GetKind() == "" {
+ return nil, fmt.Errorf("manifest must include apiVersion and kind")
+ }
+ if obj.GetName() == "" {
+ return nil, fmt.Errorf("manifest metadata.name is required for apply")
+	}
```go
// This provides: ApplyResource([]byte), GetResource, DiscoverResources
transport_client.TransportClient
```
Update embedded interface comment to match the new ApplyResource signature.
The embedded TransportClient's ApplyResource no longer takes only a []byte, so the comment should reflect the full signature.
✏️ Suggested comment update
- // This provides: ApplyResource([]byte), GetResource, DiscoverResources
+ // This provides: ApplyResource(ctx, manifest, opts, target), GetResource, DiscoverResources
🤖 Prompt for AI Agents
In `@internal/k8s_client/interface.go` around lines 20 - 21, Update the comment
above the embedded transport_client.TransportClient to stop claiming
ApplyResource([]byte) is the only signature — note that ApplyResource on
TransportClient now has a different/extended signature and is not limited to a
single []byte parameter; update the text to accurately describe the exposed API
(e.g., list ApplyResource with its new signature or omit the parameter detail)
along with GetResource and DiscoverResources so the comment matches the current
transport_client.TransportClient methods.
-	// Convert each resource to a Manifest
-	manifests := make([]workv1.Manifest, 0, len(resources))
-	for _, r := range resources {
-		if r.Manifest == nil {
-			continue // Skip resources with no manifest. It means manifests already defined in the manifestWork template
-		}
-		raw, err := json.Marshal(r.Manifest.Object)
-		if err != nil {
-			return nil, fmt.Errorf("failed to marshal manifest %s: %w", r.Name, err)
-		}
-		manifests = append(manifests, workv1.Manifest{
-			RawExtension: runtime.RawExtension{Raw: raw},
-		})
-	}
-	// Replace the template's manifests with actual resources (only if there are any)
-	if len(manifests) > 0 {
-		work.Spec.Workload.Manifests = manifests
-	}
-	// Otherwise, use the template's inline manifests as-is
+// parseManifestWork parses JSON or YAML bytes into a ManifestWork object.
+func parseManifestWork(data []byte) (*workv1.ManifestWork, error) {
+	work := &workv1.ManifestWork{}
+
+	// Try JSON first
+	if err := json.Unmarshal(data, work); err == nil && work.Name != "" {
+		return work, nil
+	}
+
+	// Fall back to YAML
+	jsonData, err := yaml.YAMLToJSON(data)
+	if err != nil {
+		return nil, fmt.Errorf("failed to convert YAML to JSON: %w", err)
+	}
+
+	if err := json.Unmarshal(jsonData, work); err != nil {
+		return nil, fmt.Errorf("failed to parse ManifestWork: %w", err)
+	}
+
+	return work, nil
+}
Validate ManifestWork name consistently after parse.
Line 603 enforces work.Name only on the JSON path; the YAML fallback returns even if name is empty, which can surface as harder-to-debug apply errors. Consider enforcing metadata.name after parsing.
Suggested fix
if err := json.Unmarshal(jsonData, work); err != nil {
return nil, fmt.Errorf("failed to parse ManifestWork: %w", err)
}
+ if work.Name == "" {
+ return nil, fmt.Errorf("manifestwork metadata.name is required")
+ }
    return work, nil
🤖 Prompt for AI Agents
In `@internal/maestro_client/client.go` around lines 598 - 618, The
parseManifestWork function currently validates work.Name only on the JSON path;
update parseManifestWork to enforce that the parsed ManifestWork has a non-empty
metadata.name in both paths by checking work.Name after successful JSON
unmarshal and again after the YAML->JSON unmarshal/Unmarshal, returning a clear
error (e.g., "manifestwork missing metadata.name") if it's empty; reference the
function parseManifestWork and the work.Name field to locate where to add the
post-parse validation and error return so both JSON and YAML parsing behave
consistently.
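The consistent post-parse check suggested above can be sketched against a stand-in type. `minimalWork` and `parseWork` are hypothetical names for illustration only; the real function also carries the YAML fallback via `yaml.YAMLToJSON` before the same validation.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// minimalWork stands in for workv1.ManifestWork in this sketch.
type minimalWork struct {
	Metadata struct {
		Name string `json:"name"`
	} `json:"metadata"`
}

// parseWork decodes ManifestWork bytes and enforces metadata.name after
// parsing, so a missing name fails here with a clear message rather than
// surfacing later as an opaque apply error.
func parseWork(data []byte) (*minimalWork, error) {
	w := &minimalWork{}
	if err := json.Unmarshal(data, w); err != nil {
		return nil, fmt.Errorf("failed to parse ManifestWork: %w", err)
	}
	if w.Metadata.Name == "" {
		return nil, fmt.Errorf("manifestwork metadata.name is required")
	}
	return w, nil
}

func main() {
	if _, err := parseWork([]byte(`{"metadata":{}}`)); err != nil {
		fmt.Println("rejected:", err)
	}
	w, _ := parseWork([]byte(`{"metadata":{"name":"demo"}}`))
	fmt.Println("accepted:", w.Metadata.Name)
}
```

Placing the name check after both decode paths means JSON and YAML inputs fail identically, which is the consistency the review asks for.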
| // Returns: | ||
| // - List of matching manifests as unstructured objects | ||
| // - The manifest with the highest generation if multiple match | ||
| func DiscoverInManifestWork(work *workv1.ManifestWork, discovery Discovery) (*unstructured.UnstructuredList, error) { | ||
| func DiscoverNestedManifest(parent *unstructured.Unstructured, discovery Discovery) (*unstructured.UnstructuredList, error) { |
Docstring mentions a “highest generation” return that the function doesn’t provide.
Consider trimming the comment or pointing callers to GetLatestGenerationFromList to avoid confusion.
✏️ Suggested comment tweak
-// - List of matching manifests as unstructured objects
-// - The manifest with the highest generation if multiple match
+// - List of matching manifests as unstructured objects
+// - Use GetLatestGenerationFromList if you need the newest match
🤖 Prompt for AI Agents
In `@internal/manifest/generation.go` around lines 396 - 399, The docstring for
DiscoverNestedManifest incorrectly claims it returns "the manifest with the
highest generation" but the function signature only returns
(*unstructured.UnstructuredList, error); update the comment to remove the
misleading return claim and either mention that callers can use
GetLatestGenerationFromList to pick the highest-generation item or add guidance
to use GetLatestGenerationFromList with the returned
*unstructured.UnstructuredList; edit the comment above DiscoverNestedManifest
and reference the helper GetLatestGenerationFromList and the returned type
unstructured.UnstructuredList so callers aren’t confused.
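The helper the comment points callers to can be illustrated with a stand-in type. `item` and `latestGeneration` are assumptions here, sketching the role `GetLatestGenerationFromList` plays over the `unstructured.UnstructuredList` that `DiscoverNestedManifest` returns.

```go
package main

import "fmt"

// item mimics a discovered object carrying metadata.generation, standing in
// for unstructured.Unstructured in this sketch.
type item struct {
	Name       string
	Generation int64
}

// latestGeneration returns the element with the highest generation, and false
// when the list is empty — the selection the old docstring promised but the
// discovery function itself no longer performs.
func latestGeneration(items []item) (item, bool) {
	if len(items) == 0 {
		return item{}, false
	}
	best := items[0]
	for _, it := range items[1:] {
		if it.Generation > best.Generation {
			best = it
		}
	}
	return best, true
}

func main() {
	items := []item{{"a", 3}, {"b", 7}, {"c", 5}}
	best, _ := latestGeneration(items)
	fmt.Println(best.Name) // b
}
```

Keeping selection in a separate helper lets DiscoverNestedManifest stay a pure filter, which is why trimming its docstring is the right fix.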
/lgtm

[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: rh-amarin.
Merged ae7e6d0 into openshift-hyperfleet:main