diff --git a/charts/examples/kubernetes/README.md b/charts/examples/kubernetes/README.md new file mode 100644 index 0000000..2aa0ae7 --- /dev/null +++ b/charts/examples/kubernetes/README.md @@ -0,0 +1,164 @@ +# Adapter example to create resources in a regional cluster + +This `values.yaml` deploys an `adapter-task-config.yaml` that creates: + +- A new namespace with the name of the cluster ID from the CloudEvent +- A service account, role and role bindings in that new namespace +- A Kubernetes Job with a status-reporter sidecar in that new namespace +- A nginx deployment in the same namespace as the adapter itself + +## Overview + +This example showcases: + +- **Inline manifests**: Defines the Kubernetes Namespace resource directly in the adapter task config +- **External file references**: References external YAML files for Job, ServiceAccount, Role, RoleBinding, and Deployment +- **Preconditions**: Fetches cluster status from the Hyperfleet API before proceeding +- **Resource discovery**: Finds existing resources using label selectors +- **Status reporting**: Builds a status payload with CEL expressions and reports back to the Hyperfleet API +- **Job with sidecar**: Demonstrates a Job pattern with a status-reporter sidecar that monitors job completion and updates job conditions +- **Simulation modes**: Supports different test scenarios via `SIMULATE_RESULT` environment variable +- **RBAC configuration**: Demonstrates configuring additional RBAC resources in helm values + +## Files + +| File | Description | +|------|-------------| +| `values.yaml` | Helm values that configure the adapter, broker, image, environment variables, and RBAC permissions | +| `adapter-config.yaml` | Adapter deployment config (clients, broker, Kubernetes settings) | +| `adapter-task-config.yaml` | Task configuration with inline namespace manifest, external file references, params, preconditions, and post-processing | +| `adapter-task-resource-job.yaml` | Kubernetes Job template with a main 
container and status-reporter sidecar | +| `adapter-task-resource-job-serviceaccount.yaml` | ServiceAccount for the Job to use in the cluster namespace | +| `adapter-task-resource-job-role.yaml` | Role granting permissions for the status-reporter to update job status | +| `adapter-task-resource-job-rolebinding.yaml` | RoleBinding connecting the ServiceAccount to the Role | +| `adapter-task-resource-deployment.yaml` | Nginx deployment template created in the adapter's namespace | + +## Key Features + +### Inline vs External Manifests + +This example uses both approaches: + +**Inline manifest** for the Namespace: + +```yaml +resources: + - name: "clusterNamespace" + manifest: + apiVersion: v1 + kind: Namespace + metadata: + name: "{{ .clusterId }}" +``` + +**External file reference** for complex resources: + +```yaml +resources: + - name: "jobNamespace" + manifest: + ref: "/etc/adapter/job.yaml" +``` + +### Job with Status-Reporter Sidecar + +The Job (`job.yaml`) includes two containers: + +1. **Main container**: Runs the workload and writes results to a shared volume +2. **Status-reporter sidecar**: Monitors the main container, reads results, and updates the Job's status conditions + +This pattern enables the adapter to track job completion through Kubernetes native conditions. + +### Simulation Modes + +The `SIMULATE_RESULT` environment variable controls test scenarios: + +| Value | Behavior | +|-------|----------| +| `success` | Writes success result and exits cleanly | +| `failure` | Writes failure result and exits with error | +| `hang` | Sleeps indefinitely (tests timeout handling) | +| `crash` | Exits without writing results | +| `invalid-json` | Writes malformed JSON | +| `missing-status` | Writes JSON without required status field | + +Configure in `values.yaml`: + +```yaml +env: + - name: SIMULATE_RESULT + value: success +``` + +## Configuration + +### RBAC Resources + +The `values.yaml` configures RBAC permissions needed for resource management. 
+In this example the list is overly permissive, since the adapter creates deployments and jobs. + +```yaml +rbac: + resources: + - namespaces + - serviceaccounts + - configmaps + - deployments + - roles + - rolebindings + - jobs + - jobs/status + - pods +``` + +### Broker Configuration + +Update the `broker.googlepubsub` section in `values.yaml` with your GCP Pub/Sub settings: + +```yaml +broker: + googlepubsub: + projectId: CHANGE_ME + subscriptionId: CHANGE_ME + topic: CHANGE_ME + deadLetterTopic: CHANGE_ME +``` + +### Image Configuration + +Update the image registry in `values.yaml`: + +```yaml +image: + registry: CHANGE_ME + repository: hyperfleet-adapter + pullPolicy: Always + tag: latest +``` + +## Usage + +```bash +helm install ./charts -f charts/examples/kubernetes/values.yaml \ + --namespace \ + --set image.registry=quay.io/ \ + --set broker.googlepubsub.projectId= \ + --set broker.googlepubsub.subscriptionId= \ + --set broker.googlepubsub.deadLetterTopic= +``` + +## How It Works + +1. The adapter receives a CloudEvent with a cluster ID and generation +2. **Preconditions**: Fetches cluster status from the Hyperfleet API and captures the cluster name, generation, and ready condition +3. **Validation**: Checks that the cluster's Ready condition is "False" before proceeding +4. **Resource creation**: Creates resources in order: + - Namespace named with the cluster ID + - ServiceAccount in the new namespace + - Role and RoleBinding for the status-reporter + - Job with main container and status-reporter sidecar + - Nginx deployment in the adapter's namespace +5. **Job execution**: The Job runs, writes results to a shared volume, and the status-reporter updates job conditions +6. **Post-processing**: Builds a status payload checking Applied, Available, and Health conditions +7. 
**Status reporting**: Reports the status back to the Hyperfleet API diff --git a/charts/examples/adapter-config.yaml b/charts/examples/kubernetes/adapter-config.yaml similarity index 77% rename from charts/examples/adapter-config.yaml rename to charts/examples/kubernetes/adapter-config.yaml index d4502f1..473417b 100644 --- a/charts/examples/adapter-config.yaml +++ b/charts/examples/kubernetes/adapter-config.yaml @@ -11,7 +11,9 @@ spec: version: "0.1.0" # Log the full merged configuration after load (default: false) - debugConfig: false + debugConfig: true + log: + level: debug clients: hyperfleetApi: @@ -22,8 +24,9 @@ spec: retryBackoff: exponential broker: - subscriptionId: "example-clusters-subscription" - topic: "example-clusters" + subscriptionId: "CHANGE_ME" + topic: "CHANGE_ME" kubernetes: apiVersion: "v1" + #kubeConfigPath: PATH_TO_KUBECONFIG # for local development diff --git a/charts/examples/adapter-task-config.yaml b/charts/examples/kubernetes/adapter-task-config.yaml similarity index 96% rename from charts/examples/adapter-task-config.yaml rename to charts/examples/kubernetes/adapter-task-config.yaml index e517ef7..81e0cbd 100644 --- a/charts/examples/adapter-task-config.yaml +++ b/charts/examples/kubernetes/adapter-task-config.yaml @@ -77,6 +77,8 @@ spec: # Resources with valid K8s manifests resources: - name: "clusterNamespace" + transport: + client: "kubernetes" manifest: apiVersion: v1 kind: Namespace @@ -98,6 +100,8 @@ spec: # in the namespace created above # it will require a service account to be created in that namespace as well as a role and rolebinding - name: "jobServiceAccount" + transport: + client: "kubernetes" manifest: ref: "/etc/adapter/job-serviceaccount.yaml" discovery: @@ -107,6 +111,8 @@ spec: hyperfleet.io/cluster-id: "{{ .clusterId }}" - name: "job_role" + transport: + client: "kubernetes" manifest: ref: "/etc/adapter/job-role.yaml" discovery: @@ -118,6 +124,8 @@ spec: hyperfleet.io/resource-type: "role" - name: "job_rolebinding" 
+ transport: + client: "kubernetes" manifest: ref: "/etc/adapter/job-rolebinding.yaml" discovery: @@ -129,6 +137,8 @@ spec: hyperfleet.io/resource-type: "rolebinding" - name: "jobNamespace" + transport: + client: "kubernetes" manifest: ref: "/etc/adapter/job.yaml" discovery: @@ -143,6 +153,8 @@ spec: # and using the same service account as the adapter - name: "deploymentNamespace" + transport: + client: "kubernetes" manifest: ref: "/etc/adapter/deployment.yaml" discovery: diff --git a/charts/examples/adapter-task-resource-deployment.yaml b/charts/examples/kubernetes/adapter-task-resource-deployment.yaml similarity index 100% rename from charts/examples/adapter-task-resource-deployment.yaml rename to charts/examples/kubernetes/adapter-task-resource-deployment.yaml diff --git a/charts/examples/adapter-task-resource-job-role.yaml b/charts/examples/kubernetes/adapter-task-resource-job-role.yaml similarity index 100% rename from charts/examples/adapter-task-resource-job-role.yaml rename to charts/examples/kubernetes/adapter-task-resource-job-role.yaml diff --git a/charts/examples/adapter-task-resource-job-rolebinding.yaml b/charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml similarity index 100% rename from charts/examples/adapter-task-resource-job-rolebinding.yaml rename to charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml diff --git a/charts/examples/adapter-task-resource-job-serviceaccount.yaml b/charts/examples/kubernetes/adapter-task-resource-job-serviceaccount.yaml similarity index 100% rename from charts/examples/adapter-task-resource-job-serviceaccount.yaml rename to charts/examples/kubernetes/adapter-task-resource-job-serviceaccount.yaml diff --git a/charts/examples/adapter-task-resource-job.yaml b/charts/examples/kubernetes/adapter-task-resource-job.yaml similarity index 100% rename from charts/examples/adapter-task-resource-job.yaml rename to charts/examples/kubernetes/adapter-task-resource-job.yaml diff --git 
a/charts/examples/values.yaml b/charts/examples/kubernetes/values.yaml similarity index 100% rename from charts/examples/values.yaml rename to charts/examples/kubernetes/values.yaml diff --git a/charts/examples/maestro/README.md b/charts/examples/maestro/README.md new file mode 100644 index 0000000..2aa0ae7 --- /dev/null +++ b/charts/examples/maestro/README.md @@ -0,0 +1,164 @@ +# Adapter example to create resources in a regional cluster + +This `values.yaml` deploys an `adapter-task-config.yaml` that creates: + +- A new namespace with the name of the cluster ID from the CloudEvent +- A service account, role and role bindings in that new namespace +- A Kubernetes Job with a status-reporter sidecar in that new namespace +- A nginx deployment in the same namespace as the adapter itself + +## Overview + +This example showcases: + +- **Inline manifests**: Defines the Kubernetes Namespace resource directly in the adapter task config +- **External file references**: References external YAML files for Job, ServiceAccount, Role, RoleBinding, and Deployment +- **Preconditions**: Fetches cluster status from the Hyperfleet API before proceeding +- **Resource discovery**: Finds existing resources using label selectors +- **Status reporting**: Builds a status payload with CEL expressions and reports back to the Hyperfleet API +- **Job with sidecar**: Demonstrates a Job pattern with a status-reporter sidecar that monitors job completion and updates job conditions +- **Simulation modes**: Supports different test scenarios via `SIMULATE_RESULT` environment variable +- **RBAC configuration**: Demonstrates configuring additional RBAC resources in helm values + +## Files + +| File | Description | +|------|-------------| +| `values.yaml` | Helm values that configure the adapter, broker, image, environment variables, and RBAC permissions | +| `adapter-config.yaml` | Adapter deployment config (clients, broker, Kubernetes settings) | +| `adapter-task-config.yaml` | Task configuration 
with inline namespace manifest, external file references, params, preconditions, and post-processing | +| `adapter-task-resource-job.yaml` | Kubernetes Job template with a main container and status-reporter sidecar | +| `adapter-task-resource-job-serviceaccount.yaml` | ServiceAccount for the Job to use in the cluster namespace | +| `adapter-task-resource-job-role.yaml` | Role granting permissions for the status-reporter to update job status | +| `adapter-task-resource-job-rolebinding.yaml` | RoleBinding connecting the ServiceAccount to the Role | +| `adapter-task-resource-deployment.yaml` | Nginx deployment template created in the adapter's namespace | + +## Key Features + +### Inline vs External Manifests + +This example uses both approaches: + +**Inline manifest** for the Namespace: + +```yaml +resources: + - name: "clusterNamespace" + manifest: + apiVersion: v1 + kind: Namespace + metadata: + name: "{{ .clusterId }}" +``` + +**External file reference** for complex resources: + +```yaml +resources: + - name: "jobNamespace" + manifest: + ref: "/etc/adapter/job.yaml" +``` + +### Job with Status-Reporter Sidecar + +The Job (`job.yaml`) includes two containers: + +1. **Main container**: Runs the workload and writes results to a shared volume +2. **Status-reporter sidecar**: Monitors the main container, reads results, and updates the Job's status conditions + +This pattern enables the adapter to track job completion through Kubernetes native conditions. 
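As a rough illustration of what the status-reporter sidecar has to handle, the sketch below classifies a result file the way the simulation scenarios described below would produce it. The `/shared/result.json` path, the `status` field name, and the reason strings are illustrative assumptions, not the adapter's actual contract:

```python
import json

# Hypothetical sketch of the status-reporter's result handling.
# The result-file schema and the reason strings are assumptions
# for illustration only, not the adapter's real contract.
def classify_result(raw: str) -> tuple[str, str]:
    """Return a (status, reason) pair for a Job condition from the
    main container's raw result file contents."""
    try:
        result = json.loads(raw)
    except json.JSONDecodeError:
        return ("False", "InvalidJSON")    # e.g. the invalid-json scenario
    if "status" not in result:
        return ("False", "MissingStatus")  # e.g. the missing-status scenario
    if result["status"] == "success":
        return ("True", "Succeeded")       # the success scenario
    return ("False", "Failed")             # the failure scenario

print(classify_result('{"status": "success"}'))  # → ('True', 'Succeeded')
```

Note that the `hang` and `crash` scenarios never produce a result file at all, which is why the sidecar also needs a timeout rather than relying on file contents alone.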
+ +### Simulation Modes + +The `SIMULATE_RESULT` environment variable controls test scenarios: + +| Value | Behavior | +|-------|----------| +| `success` | Writes success result and exits cleanly | +| `failure` | Writes failure result and exits with error | +| `hang` | Sleeps indefinitely (tests timeout handling) | +| `crash` | Exits without writing results | +| `invalid-json` | Writes malformed JSON | +| `missing-status` | Writes JSON without required status field | + +Configure in `values.yaml`: + +```yaml +env: + - name: SIMULATE_RESULT + value: success +``` + +## Configuration + +### RBAC Resources + +The `values.yaml` configures RBAC permissions needed for resource management. +In this example the list is overly permissive, since the adapter creates deployments and jobs. + +```yaml +rbac: + resources: + - namespaces + - serviceaccounts + - configmaps + - deployments + - roles + - rolebindings + - jobs + - jobs/status + - pods +``` + +### Broker Configuration + +Update the `broker.googlepubsub` section in `values.yaml` with your GCP Pub/Sub settings: + +```yaml +broker: + googlepubsub: + projectId: CHANGE_ME + subscriptionId: CHANGE_ME + topic: CHANGE_ME + deadLetterTopic: CHANGE_ME +``` + +### Image Configuration + +Update the image registry in `values.yaml`: + +```yaml +image: + registry: CHANGE_ME + repository: hyperfleet-adapter + pullPolicy: Always + tag: latest +``` + +## Usage + +```bash +helm install ./charts -f charts/examples/maestro/values.yaml \ + --namespace \ + --set image.registry=quay.io/ \ + --set broker.googlepubsub.projectId= \ + --set broker.googlepubsub.subscriptionId= \ + --set broker.googlepubsub.deadLetterTopic= +``` + +## How It Works + +1. The adapter receives a CloudEvent with a cluster ID and generation +2. **Preconditions**: Fetches cluster status from the Hyperfleet API and captures the cluster name, generation, and ready condition +3. **Validation**: Checks that the cluster's Ready condition is "False" before proceeding +4. 
**Resource creation**: Creates resources in order: + - Namespace named with the cluster ID + - ServiceAccount in the new namespace + - Role and RoleBinding for the status-reporter + - Job with main container and status-reporter sidecar + - Nginx deployment in the adapter's namespace +5. **Job execution**: The Job runs, writes results to a shared volume, and the status-reporter updates job conditions +6. **Post-processing**: Builds a status payload checking Applied, Available, and Health conditions +7. **Status reporting**: Reports the status back to the Hyperfleet API diff --git a/charts/examples/maestro/adapter-config.yaml b/charts/examples/maestro/adapter-config.yaml new file mode 100644 index 0000000..c279b66 --- /dev/null +++ b/charts/examples/maestro/adapter-config.yaml @@ -0,0 +1,70 @@ +# Example HyperFleet Adapter deployment configuration +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterConfig +metadata: + name: example1-namespace + labels: + hyperfleet.io/adapter-type: example1-namespace + hyperfleet.io/component: adapter +spec: + adapter: + version: "0.1.0" + + # Log the full merged configuration after load (default: false) + debugConfig: true + log: + level: debug + + clients: + hyperfleetApi: + baseUrl: http://hyperfleet-api:8000 + version: v1 + timeout: 2s + retryAttempts: 3 + retryBackoff: exponential + + broker: + subscriptionId: CHANGE_ME + topic: CHANGE_ME + + maestro: + grpcServerAddress: "maestro-grpc.maestro.svc.cluster.local:8090" + + # HTTPS server address for REST API operations (optional) + # Environment variable: HYPERFLEET_MAESTRO_HTTP_SERVER_ADDRESS + httpServerAddress: "http://maestro.maestro.svc.cluster.local:8000" + + # Source identifier for CloudEvents routing (must be unique across adapters) + # Environment variable: HYPERFLEET_MAESTRO_SOURCE_ID + sourceId: "hyperfleet-adapter" + + # Client identifier (defaults to sourceId if not specified) + # Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID + clientId: 
"hyperfleet-adapter-client" + insecure: true + + # Authentication configuration + #auth: + # type: "tls" # TLS certificate-based mTLS + # + # tlsConfig: + # # gRPC TLS configuration + # # Certificate paths (mounted from Kubernetes secrets) + # # Environment variable: HYPERFLEET_MAESTRO_CA_FILE + # caFile: "/etc/maestro/certs/grpc/ca.crt" + # + # # Environment variable: HYPERFLEET_MAESTRO_CERT_FILE + # certFile: "/etc/maestro/certs/grpc/client.crt" + # + # # Environment variable: HYPERFLEET_MAESTRO_KEY_FILE + # keyFile: "/etc/maestro/certs/grpc/client.key" + # + # # Server name for TLS verification + # # Environment variable: HYPERFLEET_MAESTRO_SERVER_NAME + # serverName: "maestro-grpc.maestro.svc.cluster.local" + # + # # HTTP API TLS configuration (may use different CA than gRPC) + # # If not set, falls back to caFile for backwards compatibility + # # Environment variable: HYPERFLEET_MAESTRO_HTTP_CA_FILE + # httpCaFile: "/etc/maestro/certs/https/ca.crt" + diff --git a/charts/examples/maestro/adapter-task-config.yaml b/charts/examples/maestro/adapter-task-config.yaml new file mode 100644 index 0000000..dafc543 --- /dev/null +++ b/charts/examples/maestro/adapter-task-config.yaml @@ -0,0 +1,180 @@ +# Example HyperFleet Adapter task configuration +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterTaskConfig +metadata: + name: example1-namespace + labels: + hyperfleet.io/adapter-type: example1-namespace + hyperfleet.io/component: adapter +spec: + # Parameters with all required variables + params: + + - name: "clusterId" + source: "event.id" + type: "string" + required: true + + - name: "generationId" + source: "event.generation" + type: "int" + required: true + + - name: "simulateResult" + source: "env.SIMULATE_RESULT" + type: "string" + required: true + + + # Preconditions with valid operators and CEL expressions + preconditions: + - name: "clusterStatus" + apiCall: + method: "GET" + url: "/clusters/{{ .clusterId }}" + timeout: 10s + retryAttempts: 3 + 
retryBackoff: "exponential" + capture: + - name: "clusterName" + field: "name" + - name: "generationId" + field: "generation" + - name: "timestamp" + field: "created_time" + - name: "readyConditionStatus" + expression: | + status.conditions.filter(c, c.type == "Ready").size() > 0 + ? status.conditions.filter(c, c.type == "Ready")[0].status + : "False" + + - name: "placementClusterName" + expression: "\"cluster1\"" # TBC coming from placement adapter + description: "Unique identifier for the target maestro" + + - name: "adapterName" + expression: "\"adapter1\"" # TBC coming from config passed to params + description: "Unique identifier for the adapter" + + # Structured conditions with valid operators + conditions: + - field: "readyConditionStatus" + operator: "equals" + value: "False" + + - name: "validationCheck" + # Valid CEL expression + expression: | + readyConditionStatus == "False" + + # Resources with valid K8s manifests + resources: + - name: "agentNamespaceManifestWork" + transport: + client: "maestro" + maestro: + targetCluster: "{{ .placementClusterName }}" + # manifestWork supports both inline configuration and ref approaches + # ref is suggested, as the file-based approach is more readable and maintainable. 
+ manifestWork: + ref: "/etc/adapter/manifestwork.yaml" + discovery: + # The "namespace" field within discovery is optional: + # - For namespaced resources: set namespace to target the specific namespace + # - For cluster-scoped resources (like Namespace, ClusterRole): omit or leave empty + # Here we omit it since Namespace is cluster-scoped + bySelectors: + labelSelector: + hyperfleet.io/resource-type: "namespace" + hyperfleet.io/cluster-id: "{{ .clusterId }}" + hyperfleet.io/managed-by: "{{ .metadata.name }}" + + + # Post-processing with valid CEL expressions + # This example contains multiple resources; to keep it simple, we only report on the conditions of the agentNamespaceManifestWork resource + post: + payloads: + - name: "clusterStatusPayload" + build: + # Adapter name for tracking which adapter reported this status + adapter: "{{ .metadata.name }}" + + # Conditions array - each condition has type, status, reason, message + # Use CEL optional chaining ?.orValue() for safe field access + conditions: + # Applied: Resources successfully created + - type: "Applied" + status: + expression: | + resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") == "Active" ? "True" : "False" + reason: + expression: | + resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") == "Active" + ? "NamespaceCreated" + : "NamespacePending" + message: + expression: | + resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") == "Active" + ? 
"Namespace created successfully" + : "Namespace creation in progress" + + # Available: Resources are active and ready + - type: "Available" + status: + expression: | + resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") == "Active" ? "True" : "False" + reason: + expression: | + resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") == "Active" ? "NamespaceReady" : "NamespaceNotReady" + message: + expression: | + resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") == "Active" ? "Namespace is active and ready" : "Namespace is not active and ready" + + # Health: Adapter execution status (runtime) Don't need to update this. This can be reused from the adapter config. + - type: "Health" + status: + expression: | + adapter.?executionStatus.orValue("") == "success" ? "True" : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown") + reason: + expression: | + adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy" + message: + expression: | + adapter.?errorMessage.orValue("") != "" ? 
adapter.?errorMessage.orValue("") : "All adapter operations completed successfully" + + # Use CEL expression for numeric fields to preserve type (not Go template which outputs strings) + observed_generation: + expression: "generationId" + + # Use Go template with now and date functions for timestamps + observed_time: "{{ now | date \"2006-01-02T15:04:05Z07:00\" }}" + + # Optional data field for adapter-specific metrics extracted from resources + data: + namespace: + name: + expression: | + resources.?agentNamespaceManifestWork.agentNamespace.?metadata.?name.orValue("") + status: + expression: | + resources.?agentNamespaceManifestWork.agentNamespace.?status.?phase.orValue("") + + postActions: + - name: "reportClusterStatus" + apiCall: + method: "POST" + url: "/clusters/{{ .clusterId }}/statuses" + headers: + - name: "Content-Type" + value: "application/json" + body: "{{ .clusterStatusPayload }}" diff --git a/charts/examples/maestro/adapter-task-resource-manifestwork.yaml b/charts/examples/maestro/adapter-task-resource-manifestwork.yaml new file mode 100644 index 0000000..35ef18a --- /dev/null +++ b/charts/examples/maestro/adapter-task-resource-manifestwork.yaml @@ -0,0 +1,138 @@ +# ManifestWork Template for External Reference +# File: adapter-task-resource-manifestwork.yaml +# +# This template file defines the ManifestWork structure that wraps Kubernetes manifests +# for deployment via Maestro transport. It's referenced from business logic configs +# using the 'ref' approach for clean separation of concerns. 
+# +# Template Variables Available: +# - .clusterId: Target cluster identifier +# - .generationId: Resource generation for conflict resolution +# - .adapterName: Name of the adapter creating this ManifestWork +# - .placementCluster: Target cluster name (becomes ManifestWork namespace) +# - .timestamp: Creation timestamp +# - .manifests: Array of rendered Kubernetes manifests (injected by framework) + +apiVersion: work.open-cluster-management.io/v1 +kind: ManifestWork +metadata: + # ManifestWork name - must be unique within consumer namespace + name: "hyperfleet-cluster-setup-{{ .clusterId }}" + + # Labels for identification, filtering, and management + labels: + # HyperFleet tracking labels + hyperfleet.io/cluster-id: "{{ .clusterId }}" + hyperfleet.io/adapter: "{{ .adapterName }}" + hyperfleet.io/component: "infrastructure" + hyperfleet.io/generation: "{{ .generationId }}" + hyperfleet.io/resource-group: "cluster-setup" + + # Maestro-specific labels + maestro.io/source-id: "{{ .adapterName }}" + maestro.io/resource-type: "manifestwork" + maestro.io/priority: "normal" + + # Standard Kubernetes application labels + app.kubernetes.io/name: "aro-hcp-cluster" + app.kubernetes.io/instance: "{{ .clusterId }}" + app.kubernetes.io/version: "v1.0.0" + app.kubernetes.io/component: "infrastructure" + app.kubernetes.io/part-of: "hyperfleet" + app.kubernetes.io/managed-by: "hyperfleet-adapter" + app.kubernetes.io/created-by: "{{ .adapterName }}" + + # Annotations for metadata and operational information + annotations: + # Tracking and lifecycle + hyperfleet.io/created-by: "hyperfleet-adapter-framework" + hyperfleet.io/managed-by: "{{ .adapterName }}" + hyperfleet.io/generation: "{{ .generationId }}" + hyperfleet.io/cluster-name: "{{ .clusterId }}" + hyperfleet.io/deployment-time: "{{ .timestamp }}" + + # Maestro-specific annotations + maestro.io/applied-time: "{{ .timestamp }}" + maestro.io/source-adapter: "{{ .adapterName }}" + + # Operational annotations + 
deployment.hyperfleet.io/strategy: "rolling" + deployment.hyperfleet.io/timeout: "300s" + monitoring.hyperfleet.io/enabled: "true" + + # Documentation + description: "Complete cluster setup including namespace, configuration, and RBAC" + documentation: "https://docs.hyperfleet.io/adapters/aro-hcp" + +# ManifestWork specification +spec: + # ============================================================================ + # Workload - Contains the Kubernetes manifests to deploy + # ============================================================================ + workload: + # Kubernetes manifests array - injected by framework from business logic config + manifests: + - apiVersion: v1 + kind: Namespace + metadata: + name: "{{ .clusterId | lower }}" + labels: + hyperfleet.io/cluster-id: "{{ .clusterId }}" + hyperfleet.io/managed-by: "{{ .metadata.name }}" + hyperfleet.io/resource-type: "namespace" + annotations: + hyperfleet.io/created-by: "hyperfleet-adapter" + hyperfleet.io/generation: "{{ .generationId }}" + - apiVersion: v1 + kind: ConfigMap + metadata: + name: "cluster-config" + namespace: "{{ .clusterId }}" + labels: + hyperfleet.io/cluster-id: "{{ .clusterId }}" + annotations: + hyperfleet.io/generation: "{{ .generationId }}" + data: + cluster_id: "{{ .clusterId }}" + cluster_name: "{{ .clusterName }}" + + # ============================================================================ + # Delete Options - How resources should be removed + # ============================================================================ + deleteOption: + # Propagation policy for resource deletion + # - "Foreground": Wait for dependent resources to be deleted first + # - "Background": Delete immediately, let cluster handle dependents + # - "Orphan": Leave resources on cluster when ManifestWork is deleted + propagationPolicy: "Foreground" + + # Grace period for graceful deletion (seconds) + gracePeriodSeconds: 30 + + # 
============================================================================ + # Manifest Configurations - Per-resource settings for update and feedback + # ============================================================================ + manifestConfigs: + # ======================================================================== + # Configuration for Namespace resources + # ======================================================================== + - resourceIdentifier: + group: "" # Core API group (empty for v1 resources) + resource: "namespaces" # Resource type + name: "{{ .clusterId | lower }}" # Specific resource name + updateStrategy: + type: "ServerSideApply" # Use server-side apply for namespaces + serverSideApply: + fieldManager: "hyperfleet-adapter" # Field manager name for conflict resolution + force: false # Don't force conflicts (fail on conflicts) + feedbackRules: + - type: "JSONPaths" # Use JSON path expressions for status feedback + jsonPaths: + - name: "phase" # Namespace phase (Active, Terminating) + path: ".status.phase" + - name: "conditions" # Namespace conditions array + path: ".status.conditions" + - name: "creationTimestamp" # When namespace was created + path: ".metadata.creationTimestamp" + + diff --git a/charts/examples/maestro/adapter-task-resource-namespace.yaml b/charts/examples/maestro/adapter-task-resource-namespace.yaml new file mode 100644 index 0000000..1971930 --- /dev/null +++ b/charts/examples/maestro/adapter-task-resource-namespace.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: "{{ .clusterId }}" + labels: + hyperfleet.io/cluster-id: "{{ .clusterId }}" + hyperfleet.io/cluster-name: "{{ .clusterName }}" + annotations: + hyperfleet.io/generation: "{{ .generationId }}" diff --git a/charts/examples/maestro/values.yaml b/charts/examples/maestro/values.yaml new file mode 100644 index 0000000..95d47e4 --- /dev/null +++ b/charts/examples/maestro/values.yaml @@ -0,0 +1,48 @@ +adapterConfig: + create: true + 
files: + adapter-config.yaml: examples/maestro/adapter-config.yaml + +adapterTaskConfig: + create: true + files: + task-config.yaml: examples/maestro/adapter-task-config.yaml + manifestwork.yaml: examples/maestro/adapter-task-resource-manifestwork.yaml + +broker: + create: true + googlepubsub: + projectId: CHANGE_ME + subscriptionId: CHANGE_ME + topic: CHANGE_ME + deadLetterTopic: CHANGE_ME + +image: + registry: CHANGE_ME + repository: hyperfleet-adapter + pullPolicy: Always + tag: latest + +env: + - name: NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: SERVICE_ACCOUNT + valueFrom: + fieldRef: + fieldPath: spec.serviceAccountName + - name: SIMULATE_RESULT + value: success # other possible values: success, failure, hang, crash, invalid-json, missing-status + +rbac: + resources: + - namespaces + - serviceaccounts + - configmaps + - deployments + - roles + - rolebindings + - jobs + - jobs/status + - pods diff --git a/cmd/adapter/main.go b/cmd/adapter/main.go index 0f50fc9..40cba8a 100644 --- a/cmd/adapter/main.go +++ b/cmd/adapter/main.go @@ -13,6 +13,7 @@ import ( "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/executor" "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/hyperfleet_api" "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/k8s_client" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/maestro_client" "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/health" "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger" "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/otel" @@ -276,23 +277,37 @@ func runServe() error { return fmt.Errorf("failed to create HyperFleet API client: %w", err) } - // Create Kubernetes client - log.Info(ctx, "Creating Kubernetes client...") - k8sClient, err := createK8sClient(ctx, config.Spec.Clients.Kubernetes, log) - if err != nil { - errCtx := logger.WithErrorField(ctx, err) - log.Errorf(errCtx, "Failed to create Kubernetes client") - 
return fmt.Errorf("failed to create Kubernetes client: %w", err) + // Create transport client - only one transport is supported per adapter instance + execBuilder := executor.NewBuilder(). + WithConfig(config). + WithAPIClient(apiClient). + WithLogger(log) + + if config.Spec.Clients.Maestro != nil { + log.Info(ctx, "Creating Maestro transport client...") + maestroClient, err := createMaestroClient(ctx, config.Spec.Clients.Maestro, log) + if err != nil { + errCtx := logger.WithErrorField(ctx, err) + log.Errorf(errCtx, "Failed to create Maestro client") + return fmt.Errorf("failed to create Maestro client: %w", err) + } + execBuilder = execBuilder.WithTransportClient(maestroClient) + log.Info(ctx, "Maestro transport client created successfully") + } else { + log.Info(ctx, "Creating Kubernetes transport client...") + k8sClient, err := createK8sClient(ctx, config.Spec.Clients.Kubernetes, log) + if err != nil { + errCtx := logger.WithErrorField(ctx, err) + log.Errorf(errCtx, "Failed to create Kubernetes client") + return fmt.Errorf("failed to create Kubernetes client: %w", err) + } + execBuilder = execBuilder.WithTransportClient(k8sClient) + log.Info(ctx, "Kubernetes transport client created successfully") } // Create the executor using the builder pattern log.Info(ctx, "Creating event executor...") - exec, err := executor.NewBuilder(). - WithConfig(config). - WithAPIClient(apiClient). - WithTransportClient(k8sClient). - WithLogger(log). 
- Build() + exec, err := execBuilder.Build() if err != nil { errCtx := logger.WithErrorField(ctx, err) log.Errorf(errCtx, "Failed to create executor") @@ -494,3 +509,22 @@ func createK8sClient(ctx context.Context, k8sConfig config_loader.KubernetesConf } return k8s_client.NewClient(ctx, clientConfig, log) } + +// createMaestroClient creates a Maestro client from the config +func createMaestroClient(ctx context.Context, maestroConfig *config_loader.MaestroClientConfig, log logger.Logger) (*maestro_client.Client, error) { + config := &maestro_client.Config{ + MaestroServerAddr: maestroConfig.HTTPServerAddress, + GRPCServerAddr: maestroConfig.GRPCServerAddress, + SourceID: maestroConfig.SourceID, + Insecure: maestroConfig.Insecure, + } + + // Set TLS config if present + if maestroConfig.Auth.TLSConfig != nil { + config.CAFile = maestroConfig.Auth.TLSConfig.CAFile + config.ClientCertFile = maestroConfig.Auth.TLSConfig.CertFile + config.ClientKeyFile = maestroConfig.Auth.TLSConfig.KeyFile + } + + return maestro_client.NewMaestroClient(ctx, config, log) +} diff --git a/internal/config_loader/accessors.go b/internal/config_loader/accessors.go index b17b2bc..4ad3917 100644 --- a/internal/config_loader/accessors.go +++ b/internal/config_loader/accessors.go @@ -171,6 +171,64 @@ func (c *Config) ResourceNames() []string { // Resource Accessors // ----------------------------------------------------------------------------- +// GetTransportClient returns the transport client type for this resource. +// Defaults to "kubernetes" if no transport config is set. 
+func (r *Resource) GetTransportClient() string { + if r == nil || r.Transport == nil || r.Transport.Client == "" { + return TransportClientKubernetes + } + return r.Transport.Client +} + +// IsMaestroTransport returns true if this resource uses the maestro transport client +func (r *Resource) IsMaestroTransport() bool { + return r.GetTransportClient() == TransportClientMaestro +} + +// HasManifestWorkRef returns true if the maestro transport manifestWork uses a ref +func (r *Resource) HasManifestWorkRef() bool { + if r == nil || r.Transport == nil || r.Transport.Maestro == nil { + return false + } + return r.Transport.Maestro.HasManifestWorkRef() +} + +// GetManifestWorkRef returns the manifestWork ref path if set, empty string otherwise +func (r *Resource) GetManifestWorkRef() string { + if r == nil || r.Transport == nil || r.Transport.Maestro == nil { + return "" + } + return r.Transport.Maestro.GetManifestWorkRef() +} + +// HasManifestWorkRef returns true if the manifestWork uses a ref +func (m *MaestroTransportConfig) HasManifestWorkRef() bool { + if m == nil || m.ManifestWork == nil { + return false + } + mw := normalizeToStringKeyMap(m.ManifestWork) + if mw == nil { + return false + } + _, hasRef := mw["ref"] + return hasRef +} + +// GetManifestWorkRef returns the manifestWork ref path if set, empty string otherwise +func (m *MaestroTransportConfig) GetManifestWorkRef() string { + if m == nil || m.ManifestWork == nil { + return "" + } + mw := normalizeToStringKeyMap(m.ManifestWork) + if mw == nil { + return "" + } + if ref, ok := mw["ref"].(string); ok { + return ref + } + return "" +} + // HasManifestRef returns true if the manifest uses a ref (single file reference) func (r *Resource) HasManifestRef() bool { if r == nil || r.Manifest == nil { diff --git a/internal/config_loader/constants.go b/internal/config_loader/constants.go index 4e3aa31..b72a2cb 100644 --- a/internal/config_loader/constants.go +++ b/internal/config_loader/constants.go @@ -73,6 +73,21 
@@ const ( FieldValues = "values" // YAML alias for Value - both "value" and "values" are accepted in YAML ) +// Transport field names +const ( + FieldTransport = "transport" + FieldClient = "client" + FieldMaestro = "maestro" + FieldTargetCluster = "targetCluster" + FieldManifestWork = "manifestWork" +) + +// Transport client types +const ( + TransportClientKubernetes = "kubernetes" + TransportClientMaestro = "maestro" +) + // Resource field names const ( FieldManifest = "manifest" diff --git a/internal/config_loader/loader.go b/internal/config_loader/loader.go index f09ae10..02fa479 100644 --- a/internal/config_loader/loader.go +++ b/internal/config_loader/loader.go @@ -210,6 +210,24 @@ func loadTaskConfigFileReferences(config *AdapterTaskConfig, baseDir string) err resource.Manifest = content } + // Load transport.maestro.manifestWork.ref in spec.resources + for i := range config.Spec.Resources { + resource := &config.Spec.Resources[i] + ref := resource.GetManifestWorkRef() + if ref == "" { + continue + } + + content, err := loadYAMLFile(baseDir, ref) + if err != nil { + return fmt.Errorf("%s.%s[%d].%s.%s.%s.%s: %w", + FieldSpec, FieldResources, i, FieldTransport, FieldMaestro, FieldManifestWork, FieldRef, err) + } + + // Replace manifestWork with loaded content + resource.Transport.Maestro.ManifestWork = content + } + // Load buildRef in spec.post.payloads if config.Spec.Post != nil { for i := range config.Spec.Post.Payloads { diff --git a/internal/config_loader/loader_test.go b/internal/config_loader/loader_test.go index 172dbbe..650883c 100644 --- a/internal/config_loader/loader_test.go +++ b/internal/config_loader/loader_test.go @@ -535,7 +535,7 @@ spec: errorMsg: "spec.resources[0].name is required", }, { - name: "resource without manifest", + name: "resource without manifest - kubernetes transport requires manifest in semantic validation", yaml: ` apiVersion: hyperfleet.redhat.com/v1alpha1 kind: AdapterTaskConfig @@ -544,9 +544,10 @@ metadata: spec: 
resources: - name: "testNamespace" + discovery: + byName: "test-ns" `, - wantError: true, - errorMsg: "manifest is required", + wantError: false, // Manifest is no longer structurally required (validated semantically based on transport type) }, } @@ -1348,3 +1349,537 @@ values: require.Error(t, err) assert.Contains(t, err.Error(), "condition has both 'value' and 'values' keys") } + +// ============================================================================= +// Transport Config Tests +// ============================================================================= + +func TestTransportConfigYAMLParsing(t *testing.T) { + tests := []struct { + name string + yaml string + wantError bool + wantClient string + wantTarget string + wantMaestroNil bool + }{ + { + name: "resource with kubernetes transport", + yaml: ` +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterTaskConfig +metadata: + name: test-adapter +spec: + resources: + - name: "testResource" + transport: + client: "kubernetes" + manifest: + apiVersion: v1 + kind: Namespace + metadata: + name: "test-ns" + discovery: + byName: "test-ns" +`, + wantError: false, + wantClient: "kubernetes", + wantMaestroNil: true, + }, + { + name: "resource with maestro transport", + yaml: ` +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterTaskConfig +metadata: + name: test-adapter +spec: + resources: + - name: "testResource" + transport: + client: "maestro" + maestro: + targetCluster: "cluster1" + manifestWork: + apiVersion: work.open-cluster-management.io/v1 + kind: ManifestWork + metadata: + name: "test-mw" + discovery: + byName: "test-mw" +`, + wantError: false, + wantClient: "maestro", + wantTarget: "cluster1", + wantMaestroNil: false, + }, + { + name: "resource with maestro transport and manifestWork ref", + yaml: ` +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterTaskConfig +metadata: + name: test-adapter +spec: + resources: + - name: "testResource" + transport: + client: "maestro" + maestro: + 
targetCluster: "{{ .clusterName }}" + manifestWork: + ref: "/path/to/manifestwork.yaml" + discovery: + byName: "test-mw" +`, + wantError: false, + wantClient: "maestro", + wantTarget: "{{ .clusterName }}", + wantMaestroNil: false, + }, + { + name: "resource without transport (defaults to kubernetes)", + yaml: ` +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterTaskConfig +metadata: + name: test-adapter +spec: + resources: + - name: "testResource" + manifest: + apiVersion: v1 + kind: Namespace + metadata: + name: "test-ns" + discovery: + byName: "test-ns" +`, + wantError: false, + wantClient: "kubernetes", + wantMaestroNil: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + var config AdapterTaskConfig + err := yaml.Unmarshal([]byte(tt.yaml), &config) + + if tt.wantError { + assert.Error(t, err) + return + } + require.NoError(t, err) + require.Len(t, config.Spec.Resources, 1) + + resource := config.Spec.Resources[0] + assert.Equal(t, tt.wantClient, resource.GetTransportClient()) + + if tt.wantMaestroNil { + if resource.Transport != nil { + assert.Nil(t, resource.Transport.Maestro) + } + } else { + require.NotNil(t, resource.Transport) + require.NotNil(t, resource.Transport.Maestro) + assert.Equal(t, tt.wantTarget, resource.Transport.Maestro.TargetCluster) + } + }) + } +} + +func TestGetTransportClient(t *testing.T) { + tests := []struct { + name string + resource Resource + want string + }{ + { + name: "nil transport defaults to kubernetes", + resource: Resource{Name: "test"}, + want: TransportClientKubernetes, + }, + { + name: "empty client defaults to kubernetes", + resource: Resource{Name: "test", Transport: &TransportConfig{Client: ""}}, + want: TransportClientKubernetes, + }, + { + name: "explicit kubernetes", + resource: Resource{Name: "test", Transport: &TransportConfig{Client: "kubernetes"}}, + want: TransportClientKubernetes, + }, + { + name: "explicit maestro", + resource: Resource{Name: "test", Transport: 
&TransportConfig{Client: "maestro"}}, + want: TransportClientMaestro, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + assert.Equal(t, tt.want, tt.resource.GetTransportClient()) + }) + } +} + +func TestIsMaestroTransport(t *testing.T) { + tests := []struct { + name string + resource Resource + want bool + }{ + { + name: "nil transport is not maestro", + resource: Resource{Name: "test"}, + want: false, + }, + { + name: "kubernetes transport is not maestro", + resource: Resource{Name: "test", Transport: &TransportConfig{Client: "kubernetes"}}, + want: false, + }, + { + name: "maestro transport", + resource: Resource{Name: "test", Transport: &TransportConfig{Client: "maestro"}}, + want: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + assert.Equal(t, tt.want, tt.resource.IsMaestroTransport()) + }) + } +} + +func TestHasManifestWorkRef(t *testing.T) { + tests := []struct { + name string + resource Resource + want bool + }{ + { + name: "nil transport", + resource: Resource{Name: "test"}, + want: false, + }, + { + name: "maestro with no manifestWork", + resource: Resource{ + Name: "test", + Transport: &TransportConfig{ + Client: "maestro", + Maestro: &MaestroTransportConfig{TargetCluster: "cluster1"}, + }, + }, + want: false, + }, + { + name: "maestro with inline manifestWork (no ref)", + resource: Resource{ + Name: "test", + Transport: &TransportConfig{ + Client: "maestro", + Maestro: &MaestroTransportConfig{ + TargetCluster: "cluster1", + ManifestWork: map[string]interface{}{ + "apiVersion": "work.open-cluster-management.io/v1", + "kind": "ManifestWork", + }, + }, + }, + }, + want: false, + }, + { + name: "maestro with manifestWork ref", + resource: Resource{ + Name: "test", + Transport: &TransportConfig{ + Client: "maestro", + Maestro: &MaestroTransportConfig{ + TargetCluster: "cluster1", + ManifestWork: map[string]interface{}{ + "ref": "/path/to/manifestwork.yaml", + }, + }, + }, + }, + want: true, + 
}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + assert.Equal(t, tt.want, tt.resource.HasManifestWorkRef()) + }) + } +} + +func TestGetManifestWorkRef(t *testing.T) { + tests := []struct { + name string + resource Resource + want string + }{ + { + name: "nil transport returns empty", + resource: Resource{Name: "test"}, + want: "", + }, + { + name: "maestro with no manifestWork returns empty", + resource: Resource{ + Name: "test", + Transport: &TransportConfig{ + Client: "maestro", + Maestro: &MaestroTransportConfig{TargetCluster: "cluster1"}, + }, + }, + want: "", + }, + { + name: "maestro with inline manifestWork returns empty", + resource: Resource{ + Name: "test", + Transport: &TransportConfig{ + Client: "maestro", + Maestro: &MaestroTransportConfig{ + TargetCluster: "cluster1", + ManifestWork: map[string]interface{}{ + "apiVersion": "work.open-cluster-management.io/v1", + }, + }, + }, + }, + want: "", + }, + { + name: "maestro with manifestWork ref", + resource: Resource{ + Name: "test", + Transport: &TransportConfig{ + Client: "maestro", + Maestro: &MaestroTransportConfig{ + TargetCluster: "cluster1", + ManifestWork: map[string]interface{}{ + "ref": "/etc/adapter/manifestwork.yaml", + }, + }, + }, + }, + want: "/etc/adapter/manifestwork.yaml", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + assert.Equal(t, tt.want, tt.resource.GetManifestWorkRef()) + }) + } +} + +func TestLoadConfigWithManifestWorkRef(t *testing.T) { + tmpDir := t.TempDir() + + // Create a manifestWork template file + manifestWorkFile := filepath.Join(tmpDir, "manifestwork.yaml") + require.NoError(t, os.WriteFile(manifestWorkFile, []byte(` +apiVersion: work.open-cluster-management.io/v1 +kind: ManifestWork +metadata: + name: "test-manifestwork" +spec: + workload: + manifests: [] +`), 0644)) + + adapterYAML := ` +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterConfig +metadata: + name: test-adapter +spec: + adapter: + 
version: "0.1.0" + clients: + hyperfleetApi: + baseUrl: "https://test.example.com" + timeout: 2s + kubernetes: + apiVersion: "v1" +` + + taskYAML := ` +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterTaskConfig +metadata: + name: test-adapter +spec: + params: + - name: "clusterName" + source: "event.name" + resources: + - name: "testManifestWork" + transport: + client: "maestro" + maestro: + targetCluster: "{{ .clusterName }}" + manifestWork: + ref: "manifestwork.yaml" + discovery: + bySelectors: + labelSelector: + app: "test" +` + + adapterPath, taskPath := createTestConfigFiles(t, tmpDir, adapterYAML, taskYAML) + + config, err := LoadConfig( + WithAdapterConfigPath(adapterPath), + WithTaskConfigPath(taskPath), + WithSkipSemanticValidation(), + ) + require.NoError(t, err) + require.NotNil(t, config) + + // Verify manifestWork ref was loaded and replaced + require.Len(t, config.Spec.Resources, 1) + resource := config.Spec.Resources[0] + require.NotNil(t, resource.Transport) + require.NotNil(t, resource.Transport.Maestro) + + // ManifestWork should be the loaded content, not the ref + mw, ok := resource.Transport.Maestro.ManifestWork.(map[string]interface{}) + require.True(t, ok, "ManifestWork should be a map after loading ref") + assert.Equal(t, "work.open-cluster-management.io/v1", mw["apiVersion"]) + assert.Equal(t, "ManifestWork", mw["kind"]) + + // Verify ref is no longer present + _, hasRef := mw["ref"] + assert.False(t, hasRef, "ref should be replaced with actual content") +} + +func TestLoadConfigWithManifestWorkRefNotFound(t *testing.T) { + tmpDir := t.TempDir() + + adapterYAML := ` +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterConfig +metadata: + name: test-adapter +spec: + adapter: + version: "0.1.0" + clients: + hyperfleetApi: + baseUrl: "https://test.example.com" + timeout: 2s + kubernetes: + apiVersion: "v1" +` + + taskYAML := ` +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterTaskConfig +metadata: + name: test-adapter 
+spec: + resources: + - name: "testManifestWork" + transport: + client: "maestro" + maestro: + targetCluster: "cluster1" + manifestWork: + ref: "nonexistent-manifestwork.yaml" + discovery: + bySelectors: + labelSelector: + app: "test" +` + + adapterPath, taskPath := createTestConfigFiles(t, tmpDir, adapterYAML, taskYAML) + + config, err := LoadConfig( + WithAdapterConfigPath(adapterPath), + WithTaskConfigPath(taskPath), + WithSkipSemanticValidation(), + ) + require.Error(t, err) + assert.Nil(t, config) + assert.Contains(t, err.Error(), "does not exist") +} + +func TestLoadConfigWithInlineManifestWork(t *testing.T) { + tmpDir := t.TempDir() + + adapterYAML := ` +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterConfig +metadata: + name: test-adapter +spec: + adapter: + version: "0.1.0" + clients: + hyperfleetApi: + baseUrl: "https://test.example.com" + timeout: 2s + kubernetes: + apiVersion: "v1" +` + + taskYAML := ` +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterTaskConfig +metadata: + name: test-adapter +spec: + params: + - name: "clusterName" + source: "event.name" + resources: + - name: "testManifestWork" + transport: + client: "maestro" + maestro: + targetCluster: "{{ .clusterName }}" + manifestWork: + apiVersion: work.open-cluster-management.io/v1 + kind: ManifestWork + metadata: + name: "inline-mw" + spec: + workload: + manifests: [] + discovery: + bySelectors: + labelSelector: + app: "test" +` + + adapterPath, taskPath := createTestConfigFiles(t, tmpDir, adapterYAML, taskYAML) + + config, err := LoadConfig( + WithAdapterConfigPath(adapterPath), + WithTaskConfigPath(taskPath), + WithSkipSemanticValidation(), + ) + require.NoError(t, err) + require.NotNil(t, config) + + // Verify inline manifestWork is preserved as-is + require.Len(t, config.Spec.Resources, 1) + resource := config.Spec.Resources[0] + require.NotNil(t, resource.Transport) + require.NotNil(t, resource.Transport.Maestro) + + mw, ok := 
resource.Transport.Maestro.ManifestWork.(map[string]interface{}) + require.True(t, ok, "ManifestWork should be a map") + assert.Equal(t, "work.open-cluster-management.io/v1", mw["apiVersion"]) + assert.Equal(t, "ManifestWork", mw["kind"]) +} diff --git a/internal/config_loader/types.go b/internal/config_loader/types.go index 09c541f..0ccac7d 100644 --- a/internal/config_loader/types.go +++ b/internal/config_loader/types.go @@ -293,10 +293,27 @@ func (c *Condition) UnmarshalYAML(unmarshal func(interface{}) error) error { return nil } +// TransportConfig specifies which transport client to use for a resource +type TransportConfig struct { + // Client is the transport client type: "kubernetes" or "maestro" + Client string `yaml:"client" validate:"required,oneof=kubernetes maestro"` + // Maestro contains maestro-specific transport settings (required when Client is "maestro") + Maestro *MaestroTransportConfig `yaml:"maestro,omitempty"` +} + +// MaestroTransportConfig contains maestro-specific transport settings +type MaestroTransportConfig struct { + // TargetCluster is the name of the target cluster (consumer) for ManifestWork delivery + TargetCluster string `yaml:"targetCluster" validate:"required"` + // ManifestWork is the ManifestWork template, either inline or as a ref to an external file + ManifestWork interface{} `yaml:"manifestWork,omitempty"` +} + // Resource represents a Kubernetes resource configuration type Resource struct { Name string `yaml:"name" validate:"required,resourcename"` - Manifest interface{} `yaml:"manifest,omitempty" validate:"required"` + Transport *TransportConfig `yaml:"transport,omitempty"` + Manifest interface{} `yaml:"manifest,omitempty"` RecreateOnChange bool `yaml:"recreateOnChange,omitempty"` Discovery *DiscoveryConfig `yaml:"discovery,omitempty" validate:"required"` } diff --git a/internal/config_loader/validator.go b/internal/config_loader/validator.go index 19d48f9..e99c48d 100644 --- a/internal/config_loader/validator.go +++ 
b/internal/config_loader/validator.go @@ -125,6 +125,18 @@ func (v *TaskConfigValidator) ValidateFileReferences() error { } } + // Validate transport.maestro.manifestWork.ref in spec.resources + for i, resource := range v.config.Spec.Resources { + ref := resource.GetManifestWorkRef() + if ref != "" { + path := fmt.Sprintf("%s.%s[%d].%s.%s.%s.%s", + FieldSpec, FieldResources, i, FieldTransport, FieldMaestro, FieldManifestWork, FieldRef) + if err := v.validateFileExists(ref, path); err != nil { + errors = append(errors, err.Error()) + } + } + } + if len(errors) > 0 { return fmt.Errorf("file reference errors:\n - %s", strings.Join(errors, "\n - ")) } @@ -169,6 +181,7 @@ func (v *TaskConfigValidator) ValidateSemantic() error { } // Run all semantic validators + v.validateTransportConfig() v.validateConditionValues() v.validateCaptureFieldExpressions() v.validateTemplateVariables() @@ -269,6 +282,58 @@ func (v *TaskConfigValidator) initCELEnv() error { return nil } +func (v *TaskConfigValidator) validateTransportConfig() { + for i, resource := range v.config.Spec.Resources { + basePath := fmt.Sprintf("%s.%s[%d]", FieldSpec, FieldResources, i) + + if resource.Transport != nil { + transportPath := basePath + "." + FieldTransport + + // Validate client type + client := resource.Transport.Client + if client != TransportClientKubernetes && client != TransportClientMaestro { + v.errors.Add(transportPath+"."+FieldClient, + fmt.Sprintf("unsupported transport client %q (supported: %s, %s)", + client, TransportClientKubernetes, TransportClientMaestro)) + continue + } + + if client == TransportClientMaestro { + // Maestro transport requires maestro config + if resource.Transport.Maestro == nil { + v.errors.Add(transportPath, + "maestro transport config is required when client is \"maestro\"") + continue + } + + maestroPath := transportPath + "." 
+ TransportClientMaestro + + // Validate targetCluster is set + if resource.Transport.Maestro.TargetCluster == "" { + v.errors.Add(maestroPath+"."+FieldTargetCluster, + "targetCluster is required for maestro transport") + } else { + // Validate template variables in targetCluster + v.validateTemplateString(resource.Transport.Maestro.TargetCluster, + maestroPath+"."+FieldTargetCluster) + } + + // Validate manifestWork is set (either inline or ref) + if resource.Transport.Maestro.ManifestWork == nil && resource.Manifest == nil { + v.errors.Add(maestroPath+"."+FieldManifestWork, + "either manifestWork or manifest must be set for maestro transport") + } + } + } + + // Validate manifest is required for kubernetes transport (default) + if resource.GetTransportClient() == TransportClientKubernetes && resource.Manifest == nil { + v.errors.Add(basePath+"."+FieldManifest, + "manifest is required for kubernetes transport") + } + } +} + func (v *TaskConfigValidator) validateConditionValues() { for i, precond := range v.config.Spec.Preconditions { for j, cond := range precond.Conditions { @@ -325,12 +390,16 @@ func (v *TaskConfigValidator) validateTemplateVariables() { } } - // Validate resource manifests + // Validate resource manifests and transport config templates for i, resource := range v.config.Spec.Resources { resourcePath := fmt.Sprintf("%s.%s[%d]", FieldSpec, FieldResources, i) if manifest, ok := resource.Manifest.(map[string]interface{}); ok { v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest) } + // NOTE: We intentionally skip template variable validation for manifestWork content. + // ManifestWork templates (both inline and ref) may use variables that are provided + // at runtime by the framework (e.g., adapterName, timestamp) and are not necessarily + // declared in the task config's params or precondition captures. if resource.Discovery != nil { discoveryPath := resourcePath + "." 
+ FieldDiscovery v.validateTemplateString(resource.Discovery.Namespace, discoveryPath+"."+FieldNamespace) @@ -489,6 +558,11 @@ func (v *TaskConfigValidator) validateBuildExpressions(m map[string]interface{}, func (v *TaskConfigValidator) validateK8sManifests() { for i, resource := range v.config.Spec.Resources { + // Skip manifest validation for maestro transport resources without a manifest field + if resource.IsMaestroTransport() && resource.Manifest == nil { + continue + } + path := fmt.Sprintf("%s.%s[%d].%s", FieldSpec, FieldResources, i, FieldManifest) if manifest, ok := resource.Manifest.(map[string]interface{}); ok { diff --git a/internal/config_loader/validator_test.go b/internal/config_loader/validator_test.go index 0cabf31..f898f09 100644 --- a/internal/config_loader/validator_test.go +++ b/internal/config_loader/validator_test.go @@ -1,6 +1,8 @@ package config_loader import ( + "os" + "path/filepath" "testing" "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/criteria" @@ -559,3 +561,406 @@ func TestFieldNameCachePopulated(t *testing.T) { }) } } + +// ============================================================================= +// Transport Config Validation Tests +// ============================================================================= + +func TestValidateTransportConfig(t *testing.T) { + t.Run("valid kubernetes transport", func(t *testing.T) { + cfg := baseTaskConfig() + cfg.Spec.Resources = []Resource{{ + Name: "testNs", + Transport: &TransportConfig{ + Client: TransportClientKubernetes, + }, + Manifest: map[string]interface{}{ + "apiVersion": "v1", + "kind": "Namespace", + "metadata": map[string]interface{}{"name": "test"}, + }, + Discovery: &DiscoveryConfig{Namespace: "*", ByName: "test"}, + }} + v := newTaskValidator(cfg) + require.NoError(t, v.ValidateStructure()) + require.NoError(t, v.ValidateSemantic()) + }) + + t.Run("valid maestro transport with inline manifestWork", func(t *testing.T) { + cfg := baseTaskConfig() + 
cfg.Spec.Resources = []Resource{{ + Name: "testMW", + Transport: &TransportConfig{ + Client: TransportClientMaestro, + Maestro: &MaestroTransportConfig{ + TargetCluster: "cluster1", + ManifestWork: map[string]interface{}{ + "apiVersion": "work.open-cluster-management.io/v1", + "kind": "ManifestWork", + "metadata": map[string]interface{}{"name": "test-mw"}, + }, + }, + }, + Discovery: &DiscoveryConfig{ + BySelectors: &SelectorConfig{ + LabelSelector: map[string]string{"app": "test"}, + }, + }, + }} + v := newTaskValidator(cfg) + require.NoError(t, v.ValidateStructure()) + require.NoError(t, v.ValidateSemantic()) + }) + + t.Run("valid maestro transport with manifest field", func(t *testing.T) { + cfg := baseTaskConfig() + cfg.Spec.Resources = []Resource{{ + Name: "testMW", + Transport: &TransportConfig{ + Client: TransportClientMaestro, + Maestro: &MaestroTransportConfig{ + TargetCluster: "cluster1", + }, + }, + Manifest: map[string]interface{}{ + "apiVersion": "work.open-cluster-management.io/v1", + "kind": "ManifestWork", + "metadata": map[string]interface{}{"name": "test-mw"}, + }, + Discovery: &DiscoveryConfig{ + BySelectors: &SelectorConfig{ + LabelSelector: map[string]string{"app": "test"}, + }, + }, + }} + v := newTaskValidator(cfg) + require.NoError(t, v.ValidateStructure()) + require.NoError(t, v.ValidateSemantic()) + }) + + t.Run("unsupported transport client", func(t *testing.T) { + cfg := baseTaskConfig() + cfg.Spec.Resources = []Resource{{ + Name: "testNs", + Transport: &TransportConfig{ + Client: "unsupported", + }, + Manifest: map[string]interface{}{ + "apiVersion": "v1", + "kind": "Namespace", + "metadata": map[string]interface{}{"name": "test"}, + }, + Discovery: &DiscoveryConfig{ByName: "test"}, + }} + v := newTaskValidator(cfg) + // Structure validation catches invalid oneof + err := v.ValidateStructure() + require.Error(t, err) + assert.Contains(t, err.Error(), "invalid") + }) + + t.Run("maestro transport missing maestro config", func(t 
*testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Resources = []Resource{{
+			Name: "testMW",
+			Transport: &TransportConfig{
+				Client: TransportClientMaestro,
+				// Missing Maestro config
+			},
+			Discovery: &DiscoveryConfig{
+				BySelectors: &SelectorConfig{
+					LabelSelector: map[string]string{"app": "test"},
+				},
+			},
+		}}
+		v := newTaskValidator(cfg)
+		_ = v.ValidateStructure()
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "maestro transport config is required")
+	})
+
+	t.Run("maestro transport missing targetCluster", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Resources = []Resource{{
+			Name: "testMW",
+			Transport: &TransportConfig{
+				Client: TransportClientMaestro,
+				Maestro: &MaestroTransportConfig{
+					// Missing TargetCluster
+					ManifestWork: map[string]interface{}{
+						"apiVersion": "work.open-cluster-management.io/v1",
+						"kind":       "ManifestWork",
+					},
+				},
+			},
+			Discovery: &DiscoveryConfig{
+				BySelectors: &SelectorConfig{
+					LabelSelector: map[string]string{"app": "test"},
+				},
+			},
+		}}
+		v := newTaskValidator(cfg)
+		// targetCluster is structurally required
+		err := v.ValidateStructure()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "targetCluster")
+	})
+
+	t.Run("maestro transport missing both manifest and manifestWork", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Resources = []Resource{{
+			Name: "testMW",
+			Transport: &TransportConfig{
+				Client: TransportClientMaestro,
+				Maestro: &MaestroTransportConfig{
+					TargetCluster: "cluster1",
+					// No ManifestWork
+				},
+			},
+			// No Manifest either
+			Discovery: &DiscoveryConfig{
+				BySelectors: &SelectorConfig{
+					LabelSelector: map[string]string{"app": "test"},
+				},
+			},
+		}}
+		v := newTaskValidator(cfg)
+		_ = v.ValidateStructure()
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "either manifestWork or manifest must be set")
+	})
+
+	t.Run("kubernetes transport missing manifest", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Resources = []Resource{{
+			Name: "testNs",
+			Transport: &TransportConfig{
+				Client: TransportClientKubernetes,
+			},
+			// Missing Manifest
+			Discovery: &DiscoveryConfig{ByName: "test"},
+		}}
+		v := newTaskValidator(cfg)
+		_ = v.ValidateStructure()
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "manifest is required for kubernetes transport")
+	})
+
+	t.Run("no transport defaults to kubernetes - manifest required", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Resources = []Resource{{
+			Name: "testNs",
+			// No Transport (defaults to kubernetes)
+			// No Manifest
+			Discovery: &DiscoveryConfig{ByName: "test"},
+		}}
+		v := newTaskValidator(cfg)
+		_ = v.ValidateStructure()
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "manifest is required for kubernetes transport")
+	})
+
+	t.Run("maestro transport with template variable in targetCluster", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Params = []Parameter{{Name: "clusterName", Source: "event.name"}}
+		cfg.Spec.Resources = []Resource{{
+			Name: "testMW",
+			Transport: &TransportConfig{
+				Client: TransportClientMaestro,
+				Maestro: &MaestroTransportConfig{
+					TargetCluster: "{{ .clusterName }}",
+					ManifestWork: map[string]interface{}{
+						"apiVersion": "work.open-cluster-management.io/v1",
+						"kind":       "ManifestWork",
+						"metadata":   map[string]interface{}{"name": "test"},
+					},
+				},
+			},
+			Discovery: &DiscoveryConfig{
+				BySelectors: &SelectorConfig{
+					LabelSelector: map[string]string{"app": "test"},
+				},
+			},
+		}}
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		require.NoError(t, v.ValidateSemantic())
+	})
+
+	t.Run("maestro transport with undefined template variable in targetCluster", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Resources = []Resource{{
+			Name: "testMW",
+			Transport: &TransportConfig{
+				Client: TransportClientMaestro,
+				Maestro: &MaestroTransportConfig{
+					TargetCluster: "{{ .undefinedVar }}",
+					ManifestWork: map[string]interface{}{
+						"apiVersion": "work.open-cluster-management.io/v1",
+						"kind":       "ManifestWork",
+						"metadata":   map[string]interface{}{"name": "test"},
+					},
+				},
+			},
+			Discovery: &DiscoveryConfig{
+				BySelectors: &SelectorConfig{
+					LabelSelector: map[string]string{"app": "test"},
+				},
+			},
+		}}
+		v := newTaskValidator(cfg)
+		_ = v.ValidateStructure()
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "undefined template variable \"undefinedVar\"")
+	})
+
+	t.Run("maestro transport skips K8s manifest validation", func(t *testing.T) {
+		// Maestro resources without a manifest field should skip K8s manifest validation
+		cfg := baseTaskConfig()
+		cfg.Spec.Resources = []Resource{{
+			Name: "testMW",
+			Transport: &TransportConfig{
+				Client: TransportClientMaestro,
+				Maestro: &MaestroTransportConfig{
+					TargetCluster: "cluster1",
+					ManifestWork: map[string]interface{}{
+						// ManifestWork content - not validated as K8s manifest
+						"apiVersion": "work.open-cluster-management.io/v1",
+						"kind":       "ManifestWork",
+					},
+				},
+			},
+			// No Manifest field - this should not trigger "missing apiVersion" etc.
+			Discovery: &DiscoveryConfig{
+				BySelectors: &SelectorConfig{
+					LabelSelector: map[string]string{"app": "test"},
+				},
+			},
+		}}
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		require.NoError(t, v.ValidateSemantic())
+	})
+}
+
+func TestValidateFileReferencesManifestWorkRef(t *testing.T) {
+	tmpDir := t.TempDir()
+
+	// Create a test manifestWork file
+	manifestWorkDir := filepath.Join(tmpDir, "templates")
+	require.NoError(t, os.MkdirAll(manifestWorkDir, 0755))
+	manifestWorkFile := filepath.Join(manifestWorkDir, "manifestwork.yaml")
+	require.NoError(t, os.WriteFile(manifestWorkFile, []byte("apiVersion: work.open-cluster-management.io/v1\nkind: ManifestWork"), 0644))
+
+	tests := []struct {
+		name    string
+		config  *AdapterTaskConfig
+		wantErr bool
+		errMsg  string
+	}{
+		{
+			name: "valid manifestWork ref",
+			config: &AdapterTaskConfig{
+				APIVersion: "hyperfleet.redhat.com/v1alpha1",
+				Kind:       "AdapterTaskConfig",
+				Metadata:   Metadata{Name: "test"},
+				Spec: AdapterTaskSpec{
+					Resources: []Resource{{
+						Name: "test",
+						Transport: &TransportConfig{
+							Client: TransportClientMaestro,
+							Maestro: &MaestroTransportConfig{
+								TargetCluster: "cluster1",
+								ManifestWork: map[string]interface{}{
+									"ref": "templates/manifestwork.yaml",
+								},
+							},
+						},
+						Discovery: &DiscoveryConfig{
+							BySelectors: &SelectorConfig{
+								LabelSelector: map[string]string{"app": "test"},
+							},
+						},
+					}},
+				},
+			},
+			wantErr: false,
+		},
+		{
+			name: "invalid manifestWork ref - file not found",
+			config: &AdapterTaskConfig{
+				APIVersion: "hyperfleet.redhat.com/v1alpha1",
+				Kind:       "AdapterTaskConfig",
+				Metadata:   Metadata{Name: "test"},
+				Spec: AdapterTaskSpec{
+					Resources: []Resource{{
+						Name: "test",
+						Transport: &TransportConfig{
+							Client: TransportClientMaestro,
+							Maestro: &MaestroTransportConfig{
+								TargetCluster: "cluster1",
+								ManifestWork: map[string]interface{}{
+									"ref": "templates/nonexistent.yaml",
+								},
+							},
+						},
+						Discovery: &DiscoveryConfig{
+							BySelectors: &SelectorConfig{
+								LabelSelector: map[string]string{"app": "test"},
+							},
+						},
+					}},
+				},
+			},
+			wantErr: true,
+			errMsg:  "does not exist",
+		},
+		{
+			name: "inline manifestWork - no file reference validation needed",
+			config: &AdapterTaskConfig{
+				APIVersion: "hyperfleet.redhat.com/v1alpha1",
+				Kind:       "AdapterTaskConfig",
+				Metadata:   Metadata{Name: "test"},
+				Spec: AdapterTaskSpec{
+					Resources: []Resource{{
+						Name: "test",
+						Transport: &TransportConfig{
+							Client: TransportClientMaestro,
+							Maestro: &MaestroTransportConfig{
+								TargetCluster: "cluster1",
+								ManifestWork: map[string]interface{}{
+									"apiVersion": "work.open-cluster-management.io/v1",
+									"kind":       "ManifestWork",
+								},
+							},
+						},
+						Discovery: &DiscoveryConfig{
+							BySelectors: &SelectorConfig{
+								LabelSelector: map[string]string{"app": "test"},
+							},
+						},
+					}},
+				},
+			},
+			wantErr: false,
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			validator := NewTaskConfigValidator(tt.config, tmpDir)
+			err := validator.ValidateFileReferences()
+			if tt.wantErr {
+				require.Error(t, err)
+				assert.Contains(t, err.Error(), tt.errMsg)
+			} else {
+				assert.NoError(t, err)
+			}
+		})
+	}
+}
diff --git a/internal/executor/executor.go b/internal/executor/executor.go
index b6a9f92..f5196c5 100644
--- a/internal/executor/executor.go
+++ b/internal/executor/executor.go
@@ -51,6 +51,7 @@ func validateExecutorConfig(config *ExecutorConfig) error {
 			return fmt.Errorf("field %s is required", field)
 		}
 	}
+
 	return nil
 }
@@ -334,7 +335,7 @@ func (b *ExecutorBuilder) WithAPIClient(client hyperfleet_api.Client) *ExecutorB
 	return b
 }
 
-// WithTransportClient sets the transport client for resource application
+// WithTransportClient sets the transport client for resource application (kubernetes or maestro)
 func (b *ExecutorBuilder) WithTransportClient(client transport_client.TransportClient) *ExecutorBuilder {
 	b.config.TransportClient = client
 	return b
diff --git a/internal/executor/resource_executor.go b/internal/executor/resource_executor.go
index b20c389..13f9cce 100644
--- a/internal/executor/resource_executor.go
+++ b/internal/executor/resource_executor.go
@@ -6,13 +6,16 @@ import (
 	"github.com/mitchellh/copystructure"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/config_loader"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/maestro_client"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/manifest"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/constants"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger"
 	apierrors "k8s.io/apimachinery/pkg/api/errors"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+	"k8s.io/apimachinery/pkg/runtime"
 	"k8s.io/apimachinery/pkg/runtime/schema"
+	workv1 "open-cluster-management.io/api/work/v1"
 )
 
 // ResourceExecutor creates and updates Kubernetes resources
@@ -95,9 +98,10 @@ func (re *ResourceExecutor) applyResource(ctx context.Context, resource config_l
 		Status: StatusSuccess,
 	}
 
-	if re.client == nil {
+	transportClient := re.client
+	if transportClient == nil {
 		result.Status = StatusFailed
-		result.Error = fmt.Errorf("transport client not configured")
+		result.Error = fmt.Errorf("transport client not configured for %s", resource.GetTransportClient())
 		return result, NewExecutorError(PhaseResources, resource.Name, "transport client not configured", result.Error)
 	}
@@ -109,11 +113,11 @@ func (re *ResourceExecutor) applyResource(ctx context.Context, resource config_l
 	if resource.Discovery != nil {
 		// Use Discovery config to find existing resource (e.g., by label selector)
 		re.log.Debugf(ctx, "Discovering existing resource using discovery config...")
-		existingResource, err = re.discoverExistingResource(ctx, gvk, resource.Discovery, execCtx)
+		existingResource, err = re.discoverExistingResource(ctx, gvk, resource.Discovery, execCtx, transportClient)
 	} else {
 		// No Discovery config - lookup by name from manifest
 		re.log.Debugf(ctx, "Looking up existing resource by name...")
-		existingResource, err = re.client.GetResource(ctx, gvk, resourceManifest.GetNamespace(), resourceManifest.GetName(), nil)
+		existingResource, err = transportClient.GetResource(ctx, gvk, resourceManifest.GetNamespace(), resourceManifest.GetName(), nil)
 	}
 
 	// Fail fast on any error except NotFound (which means resource doesn't exist yet)
@@ -139,13 +143,48 @@ func (re *ResourceExecutor) applyResource(ctx context.Context, resource config_l
 		applyOpts = &transport_client.ApplyOptions{RecreateOnChange: true}
 	}
 
+	// Build transport context for maestro transport
+	var transportTarget transport_client.TransportContext
+	if resource.IsMaestroTransport() && resource.Transport.Maestro != nil {
+		// Render targetCluster template
+		targetCluster, tplErr := renderTemplate(resource.Transport.Maestro.TargetCluster, execCtx.Params)
+		if tplErr != nil {
+			result.Status = StatusFailed
+			result.Error = tplErr
+			return result, NewExecutorError(PhaseResources, resource.Name, "failed to render targetCluster template", tplErr)
+		}
+
+		// Convert rendered manifest to *workv1.ManifestWork for the maestro transport context
+		mw := &workv1.ManifestWork{}
+		if err := runtime.DefaultUnstructuredConverter.FromUnstructured(resourceManifest.Object, mw); err != nil {
+			result.Status = StatusFailed
+			result.Error = err
+			return result, NewExecutorError(PhaseResources, resource.Name, "failed to convert manifest to ManifestWork", err)
+		}
+
+		transportTarget = &maestro_client.TransportContext{
+			ConsumerName: targetCluster,
+			ManifestWork: mw,
+		}
+	}
+
+	// For Maestro transport with inline workload manifests in the ManifestWork template
+	// (resource.Manifest == nil), pass nil Manifest so buildManifestWork uses the
+	// template's workload manifests as-is instead of double-wrapping the ManifestWork
+	// inside its own workload.
+	manifestForApply := resourceManifest
+	if resource.IsMaestroTransport() && resource.Manifest == nil {
+		manifestForApply = nil
+	}
+
 	// Use transport client to apply the resource
-	applyResult, err := re.client.ApplyResources(ctx, []transport_client.ResourceToApply{
+	applyResult, err := transportClient.ApplyResources(ctx, []transport_client.ResourceToApply{
 		{
 			Name:     resource.Name,
-			Manifest: resourceManifest,
+			Manifest: manifestForApply,
 			Existing: existingResource,
 			Options:  applyOpts,
+			Target:   transportTarget,
 		},
 	})
@@ -194,15 +233,23 @@ func (re *ResourceExecutor) applyResource(ctx context.Context, resource config_l
 func (re *ResourceExecutor) buildManifest(ctx context.Context, resource config_loader.Resource, execCtx *ExecutionContext) (*unstructured.Unstructured, error) {
 	var manifestData map[string]interface{}
 
-	// Get manifest (inline or loaded from ref)
+	// Get manifest source based on transport type
+	var manifestSource interface{}
 	if resource.Manifest != nil {
-		switch m := resource.Manifest.(type) {
+		manifestSource = resource.Manifest
+	} else if resource.IsMaestroTransport() && resource.Transport.Maestro != nil && resource.Transport.Maestro.ManifestWork != nil {
+		// For maestro transport, use manifestWork as the manifest source
+		manifestSource = resource.Transport.Maestro.ManifestWork
+	}
+
+	if manifestSource != nil {
+		switch m := manifestSource.(type) {
 		case map[string]interface{}:
 			manifestData = m
 		case map[interface{}]interface{}:
 			manifestData = convertToStringKeyMap(m)
 		default:
-			return nil, fmt.Errorf("unsupported manifest type: %T", resource.Manifest)
+			return nil, fmt.Errorf("unsupported manifest type: %T", manifestSource)
 		}
 	} else {
 		return nil, fmt.Errorf("no manifest specified for resource %s", resource.Name)
@@ -250,8 +297,8 @@ func validateManifest(obj *unstructured.Unstructured) error {
 }
 
 // discoverExistingResource discovers an existing resource using the discovery config
-func (re *ResourceExecutor) discoverExistingResource(ctx context.Context, gvk schema.GroupVersionKind, discovery *config_loader.DiscoveryConfig, execCtx *ExecutionContext) (*unstructured.Unstructured, error) {
-	if re.client == nil {
+func (re *ResourceExecutor) discoverExistingResource(ctx context.Context, gvk schema.GroupVersionKind, discovery *config_loader.DiscoveryConfig, execCtx *ExecutionContext, client transport_client.TransportClient) (*unstructured.Unstructured, error) {
+	if client == nil {
 		return nil, fmt.Errorf("transport client not configured")
 	}
@@ -268,7 +315,7 @@ func (re *ResourceExecutor) discoverExistingResource(ctx context.Context, gvk sc
 		if err != nil {
 			return nil, fmt.Errorf("failed to render byName template: %w", err)
 		}
-		return re.client.GetResource(ctx, gvk, namespace, name, nil)
+		return client.GetResource(ctx, gvk, namespace, name, nil)
 	}
 
 	// Discover by label selector
@@ -294,7 +341,7 @@ func (re *ResourceExecutor) discoverExistingResource(ctx context.Context, gvk sc
 		LabelSelector: labelSelector,
 	}
 
-	list, err := re.client.DiscoverResources(ctx, gvk, discoveryConfig, nil)
+	list, err := client.DiscoverResources(ctx, gvk, discoveryConfig, nil)
 	if err != nil {
 		return nil, err
 	}
diff --git a/internal/executor/types.go b/internal/executor/types.go
index ec0ff1d..4e10a0d 100644
--- a/internal/executor/types.go
+++ b/internal/executor/types.go
@@ -60,8 +60,7 @@ type ExecutorConfig struct {
 	Config *config_loader.Config
 	// APIClient is the HyperFleet API client
 	APIClient hyperfleet_api.Client
-	// TransportClient is the transport client for applying resources
-	// (supports k8s_client, maestro_client, etc.)
+	// TransportClient is the transport client for applying resources (kubernetes or maestro)
 	TransportClient transport_client.TransportClient
 	// Logger is the logger instance
 	Logger logger.Logger
diff --git a/internal/maestro_client/client.go b/internal/maestro_client/client.go
index 288f72b..30e5b73 100644
--- a/internal/maestro_client/client.go
+++ b/internal/maestro_client/client.go
@@ -483,11 +483,13 @@ func (c *Client) ApplyResources(
 		// Convert to result with error
 		for _, r := range resources {
 			resourceResult := &transport_client.ResourceApplyResult{
-				Name:         r.Name,
-				Kind:         r.Manifest.GetKind(),
-				Namespace:    r.Manifest.GetNamespace(),
-				ResourceName: r.Manifest.GetName(),
-				Error:        err,
+				Name:  r.Name,
+				Error: err,
+			}
+			if r.Manifest != nil {
+				resourceResult.Kind = r.Manifest.GetKind()
+				resourceResult.Namespace = r.Manifest.GetNamespace()
+				resourceResult.ResourceName = r.Manifest.GetName()
 			}
 			result.Results = append(result.Results, resourceResult)
 			result.FailedCount++
@@ -501,16 +503,18 @@ func (c *Client) ApplyResources(
 	// Build success results for all resources
 	for _, r := range resources {
 		resourceResult := &transport_client.ResourceApplyResult{
-			Name:         r.Name,
-			Kind:         r.Manifest.GetKind(),
-			Namespace:    r.Manifest.GetNamespace(),
-			ResourceName: r.Manifest.GetName(),
+			Name: r.Name,
 			ApplyResult: &transport_client.ApplyResult{
 				Resource:  r.Manifest,
 				Operation: op,
 				Reason:    fmt.Sprintf("applied via ManifestWork %s/%s", consumerName, work.Name),
 			},
 		}
+		if r.Manifest != nil {
+			resourceResult.Kind = r.Manifest.GetKind()
+			resourceResult.Namespace = r.Manifest.GetNamespace()
+			resourceResult.ResourceName = r.Manifest.GetName()
+		}
 		result.Results = append(result.Results, resourceResult)
 		result.SuccessCount++
 	}
@@ -623,6 +627,7 @@ func (c *Client) buildManifestWork(template *workv1.ManifestWork, resources []tr
 	if len(manifests) > 0 {
 		work.Spec.Workload.Manifests = manifests
 	}
+	// Otherwise, use the template's inline manifests as-is
 
 	return work, nil
 }
diff --git a/internal/maestro_client/client_test.go b/internal/maestro_client/client_test.go
new file mode 100644
index 0000000..8f21772
--- /dev/null
+++ b/internal/maestro_client/client_test.go
@@ -0,0 +1,271 @@
+package maestro_client
+
+import (
+	"encoding/json"
+	"testing"
+
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/constants"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+	"k8s.io/apimachinery/pkg/runtime"
+	workv1 "open-cluster-management.io/api/work/v1"
+)
+
+// --- helpers ---
+
+// mustJSON marshals v to JSON or fails the test.
+func mustJSON(t *testing.T, v interface{}) []byte {
+	t.Helper()
+	raw, err := json.Marshal(v)
+	require.NoError(t, err)
+	return raw
+}
+
+// bareNamespaceJSON returns a bare Namespace manifest as JSON.
+func bareNamespaceJSON(t *testing.T, name string) []byte {
+	t.Helper()
+	return mustJSON(t, map[string]interface{}{
+		"apiVersion": "v1",
+		"kind":       "Namespace",
+		"metadata": map[string]interface{}{
+			"name": name,
+			"annotations": map[string]interface{}{
+				constants.AnnotationGeneration: "1",
+			},
+		},
+	})
+}
+
+// unmarshalManifestRaw unmarshals a workv1.Manifest.Raw back to a map.
+func unmarshalManifestRaw(t *testing.T, m workv1.Manifest) map[string]interface{} {
+	t.Helper()
+	require.NotNil(t, m.Raw)
+	var obj map[string]interface{}
+	require.NoError(t, json.Unmarshal(m.Raw, &obj))
+	return obj
+}
+
+// newTestTemplate creates a ManifestWork template with the given workload manifests.
+func newTestTemplate(name string, manifests []workv1.Manifest) *workv1.ManifestWork {
+	return &workv1.ManifestWork{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: name,
+			Annotations: map[string]string{
+				constants.AnnotationGeneration: "1",
+			},
+			Labels: map[string]string{
+				"test": "true",
+			},
+		},
+		Spec: workv1.ManifestWorkSpec{
+			Workload: workv1.ManifestsTemplate{
+				Manifests: manifests,
+			},
+		},
+	}
+}
+
+// --- buildManifestWork tests ---
+
+func TestBuildManifestWork_ExplicitResources(t *testing.T) {
+	// When resources have non-nil Manifest, template workload manifests are replaced.
+	templateManifests := []workv1.Manifest{
+		{RawExtension: runtime.RawExtension{Raw: bareNamespaceJSON(t, "template-ns")}},
+	}
+	template := newTestTemplate("test-mw", templateManifests)
+
+	resource := &unstructured.Unstructured{
+		Object: map[string]interface{}{
+			"apiVersion": "v1",
+			"kind":       "ConfigMap",
+			"metadata": map[string]interface{}{
+				"name":      "explicit-cm",
+				"namespace": "default",
+			},
+		},
+	}
+
+	c := &Client{}
+	work, err := c.buildManifestWork(template, []transport_client.ResourceToApply{
+		{Name: "cm", Manifest: resource},
+	}, "consumer-1")
+
+	require.NoError(t, err)
+	assert.Equal(t, "consumer-1", work.Namespace)
+	assert.Equal(t, "test-mw", work.Name)
+	require.Len(t, work.Spec.Workload.Manifests, 1)
+
+	obj := unmarshalManifestRaw(t, work.Spec.Workload.Manifests[0])
+	assert.Equal(t, "ConfigMap", obj["kind"], "should contain the explicit resource, not the template's")
+}
+
+func TestBuildManifestWork_NilManifestUsesTemplate(t *testing.T) {
+	// When all resources have nil Manifest, template workload manifests are used as-is.
+	templateManifests := []workv1.Manifest{
+		{RawExtension: runtime.RawExtension{Raw: bareNamespaceJSON(t, "from-template")}},
+	}
+	template := newTestTemplate("test-mw", templateManifests)
+
+	c := &Client{}
+	work, err := c.buildManifestWork(template, []transport_client.ResourceToApply{
+		{Name: "ns", Manifest: nil},
+	}, "consumer-1")
+
+	require.NoError(t, err)
+	require.Len(t, work.Spec.Workload.Manifests, 1)
+
+	obj := unmarshalManifestRaw(t, work.Spec.Workload.Manifests[0])
+	assert.Equal(t, "Namespace", obj["kind"])
+	assert.Equal(t, "v1", obj["apiVersion"])
+}
+
+func TestBuildManifestWork_EmptyResources(t *testing.T) {
+	// An empty resources list should use the template's manifests.
+	templateManifests := []workv1.Manifest{
+		{RawExtension: runtime.RawExtension{Raw: bareNamespaceJSON(t, "keep-me")}},
+	}
+	template := newTestTemplate("test-mw", templateManifests)
+
+	c := &Client{}
+	work, err := c.buildManifestWork(template, []transport_client.ResourceToApply{}, "consumer-1")
+
+	require.NoError(t, err)
+	require.Len(t, work.Spec.Workload.Manifests, 1)
+
+	obj := unmarshalManifestRaw(t, work.Spec.Workload.Manifests[0])
+	assert.Equal(t, "Namespace", obj["kind"])
+}
+
+func TestBuildManifestWork_DoesNotMutateTemplate(t *testing.T) {
+	// The original template must not be modified.
+	originalJSON := bareNamespaceJSON(t, "original-ns")
+	templateManifests := []workv1.Manifest{
+		{RawExtension: runtime.RawExtension{Raw: originalJSON}},
+	}
+	template := newTestTemplate("test-mw", templateManifests)
+
+	resource := &unstructured.Unstructured{
+		Object: map[string]interface{}{
+			"apiVersion": "v1",
+			"kind":       "ConfigMap",
+			"metadata":   map[string]interface{}{"name": "new-cm", "namespace": "default"},
+		},
+	}
+
+	c := &Client{}
+	_, err := c.buildManifestWork(template, []transport_client.ResourceToApply{
+		{Name: "cm", Manifest: resource},
+	}, "consumer-1")
+	require.NoError(t, err)
+
+	// Template should still have the original Namespace manifest
+	require.Len(t, template.Spec.Workload.Manifests, 1)
+	obj := unmarshalManifestRaw(t, template.Spec.Workload.Manifests[0])
+	assert.Equal(t, "Namespace", obj["kind"])
+	assert.Equal(t, "", template.Namespace, "template namespace should not be modified")
+}
+
+func TestBuildManifestWork_SetsConsumerNamespace(t *testing.T) {
+	template := newTestTemplate("test-mw", []workv1.Manifest{
+		{RawExtension: runtime.RawExtension{Raw: bareNamespaceJSON(t, "ns")}},
+	})
+
+	c := &Client{}
+	work, err := c.buildManifestWork(template, []transport_client.ResourceToApply{}, "my-cluster")
+
+	require.NoError(t, err)
+	assert.Equal(t, "my-cluster", work.Namespace)
+}
+
+func TestBuildManifestWork_PreservesMetadata(t *testing.T) {
+	template := newTestTemplate("my-manifestwork", []workv1.Manifest{
+		{RawExtension: runtime.RawExtension{Raw: bareNamespaceJSON(t, "ns")}},
+	})
+	template.Labels["extra"] = "label"
+	template.Annotations["extra"] = "annotation"
+
+	c := &Client{}
+	work, err := c.buildManifestWork(template, []transport_client.ResourceToApply{}, "consumer-1")
+
+	require.NoError(t, err)
+	assert.Equal(t, "my-manifestwork", work.Name)
+	assert.Equal(t, "true", work.Labels["test"])
+	assert.Equal(t, "label", work.Labels["extra"])
+	assert.Equal(t, "1", work.Annotations[constants.AnnotationGeneration])
+	assert.Equal(t, "annotation", work.Annotations["extra"])
+}
+
+func TestBuildManifestWork_MixedNilAndExplicitResources(t *testing.T) {
+	// If at least one resource has a non-nil Manifest, template manifests are replaced.
+	template := newTestTemplate("test-mw", []workv1.Manifest{
+		{RawExtension: runtime.RawExtension{Raw: bareNamespaceJSON(t, "template-ns")}},
+	})
+
+	resource := &unstructured.Unstructured{
+		Object: map[string]interface{}{
+			"apiVersion": "v1",
+			"kind":       "ConfigMap",
+			"metadata":   map[string]interface{}{"name": "explicit-cm", "namespace": "default"},
+		},
+	}
+
+	c := &Client{}
+	work, err := c.buildManifestWork(template, []transport_client.ResourceToApply{
+		{Name: "skipped", Manifest: nil},
+		{Name: "cm", Manifest: resource},
+	}, "consumer-1")
+
+	require.NoError(t, err)
+	// Only the explicit resource should be included (nil ones are skipped)
+	require.Len(t, work.Spec.Workload.Manifests, 1)
+
+	obj := unmarshalManifestRaw(t, work.Spec.Workload.Manifests[0])
+	assert.Equal(t, "ConfigMap", obj["kind"])
+}
+
+func TestBuildManifestWork_TemplateWithMultipleBareManifests(t *testing.T) {
+	// Simulates the real-world scenario: a ManifestWork template with Namespace + ConfigMap
+	// as bare manifests, and nil Manifest resources.
+	nsJSON := bareNamespaceJSON(t, "cluster-abc")
+	cmJSON := mustJSON(t, map[string]interface{}{
+		"apiVersion": "v1",
+		"kind":       "ConfigMap",
+		"metadata": map[string]interface{}{
+			"name":      "cluster-config",
+			"namespace": "cluster-abc",
+			"annotations": map[string]interface{}{
+				constants.AnnotationGeneration: "1",
+			},
+		},
+		"data": map[string]interface{}{"cluster_id": "abc"},
+	})
+
+	template := newTestTemplate("hyperfleet-cluster-setup-abc", []workv1.Manifest{
+		{RawExtension: runtime.RawExtension{Raw: nsJSON}},
+		{RawExtension: runtime.RawExtension{Raw: cmJSON}},
+	})
+
+	c := &Client{}
+	work, err := c.buildManifestWork(template, []transport_client.ResourceToApply{
+		{Name: "manifestwork", Manifest: nil},
+	}, "cluster1")
+
+	require.NoError(t, err)
+	assert.Equal(t, "cluster1", work.Namespace)
+	require.Len(t, work.Spec.Workload.Manifests, 2)
+
+	ns := unmarshalManifestRaw(t, work.Spec.Workload.Manifests[0])
+	assert.Equal(t, "Namespace", ns["kind"])
+	assert.Equal(t, "v1", ns["apiVersion"])
+	nsMeta := ns["metadata"].(map[string]interface{})
+	assert.Equal(t, "cluster-abc", nsMeta["name"])
+
+	cm := unmarshalManifestRaw(t, work.Spec.Workload.Manifests[1])
+	assert.Equal(t, "ConfigMap", cm["kind"])
+	cmMeta := cm["metadata"].(map[string]interface{})
+	assert.Equal(t, "cluster-config", cmMeta["name"])
+	assert.Equal(t, "cluster-abc", cmMeta["namespace"])
+}
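The tests above all exercise one rule in `buildManifestWork`: explicit (non-nil) manifests replace the template's workload, nil manifests are skipped, and if nothing explicit remains the template's inline manifests survive untouched, with the work always re-homed into the consumer's namespace. A minimal, stdlib-only sketch of that rule (using simplified local stand-in types, not the real `workv1`/`transport_client` APIs, whose full shape is not shown in this diff):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for workv1.ManifestWork and transport_client.ResourceToApply.
type Manifest struct{ Raw []byte }

type ManifestWork struct {
	Name      string
	Namespace string
	Manifests []Manifest
}

type ResourceToApply struct {
	Name     string
	Manifest map[string]interface{} // nil means "use the template's workload as-is"
}

// buildManifestWork copies the template, targets the consumer's namespace, and
// replaces the workload only when at least one explicit manifest is provided.
func buildManifestWork(template ManifestWork, resources []ResourceToApply, consumer string) (ManifestWork, error) {
	work := template // value copy; the Manifests slice is replaced, never mutated in place
	work.Namespace = consumer

	var manifests []Manifest
	for _, r := range resources {
		if r.Manifest == nil {
			continue // nil manifests are skipped, not serialized
		}
		raw, err := json.Marshal(r.Manifest)
		if err != nil {
			return ManifestWork{}, fmt.Errorf("marshal %s: %w", r.Name, err)
		}
		manifests = append(manifests, Manifest{Raw: raw})
	}
	if len(manifests) > 0 {
		work.Manifests = manifests
	}
	// Otherwise the template's inline manifests are used as-is.
	return work, nil
}

func main() {
	tmpl := ManifestWork{Name: "mw", Manifests: []Manifest{{Raw: []byte(`{"kind":"Namespace"}`)}}}

	// All-nil resources: the template workload survives.
	w1, _ := buildManifestWork(tmpl, []ResourceToApply{{Name: "ns"}}, "cluster1")
	fmt.Println(w1.Namespace, len(w1.Manifests), string(w1.Manifests[0].Raw))

	// Explicit manifest: the template workload is replaced.
	w2, _ := buildManifestWork(tmpl, []ResourceToApply{
		{Name: "cm", Manifest: map[string]interface{}{"kind": "ConfigMap"}},
	}, "cluster1")
	fmt.Println(len(w2.Manifests), string(w2.Manifests[0].Raw))
}
```

Pairing this fallback with the executor's `manifestForApply = nil` for inline-workload Maestro resources is what avoids the double-wrapping the diff's comment warns about: the ManifestWork template is never serialized into its own workload.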