diff --git a/README.md b/README.md index 579f7d3..9e120fe 100644 --- a/README.md +++ b/README.md @@ -157,6 +157,7 @@ A HyperFleet Adapter requires several files for configuration: To see all configuration options, read the [configuration.md](configuration.md) file #### Adapter configuration + The adapter deployment configuration (`AdapterConfig`) controls runtime and infrastructure settings for the adapter process, such as client connections, retries, and broker subscription details. It is loaded with Viper, so values can be overridden by CLI flags @@ -167,10 +168,11 @@ and environment variables in this priority order: CLI flags > env vars > file > (HyperFleet API, Maestro, broker, Kubernetes) Reference examples: -- `configs/adapter-deployment-config.yaml` (full reference with env/flag notes) -- `charts/examples/adapter-config.yaml` (minimal deployment example) + +- `charts/examples/kubernetes/adapter-config.yaml` #### Adapter task configuration + The adapter task configuration (`AdapterTaskConfig`) defines the **business logic** for processing events: parameters, preconditions, resources to create, and post-actions. This file is loaded as **static YAML** (no Viper overrides) and is required at runtime. @@ -180,13 +182,13 @@ This file is loaded as **static YAML** (no Viper overrides) and is required at r - **Resource manifests**: inline YAML or external file via `manifest.ref` Reference examples: -- `charts/examples/adapter-task-config.yaml` (worked example) -- `configs/adapter-task-config-template.yaml` (complete schema reference) +- `charts/examples/kubernetes/adapter-task-config.yaml` (worked example) ### Broker Configuration Broker configuration is a special case, since responsibility is split between: + - **Hyperfleet broker library**: configures the connection to a concrete broker (Google Pub/Sub, RabbitMQ, ...)
- Configured using a YAML file specified by the `BROKER_CONFIG_FILE` environment variable - **Adapter**: configures which topic/subscriptions to use on the broker diff --git a/charts/examples/kubernetes/README.md b/charts/examples/kubernetes/README.md new file mode 100644 index 0000000..2aa0ae7 --- /dev/null +++ b/charts/examples/kubernetes/README.md @@ -0,0 +1,164 @@ +# Adapter example to create resources in a regional cluster + +This `values.yaml` deploys an `adapter-task-config.yaml` that creates: + +- A new namespace with the name of the cluster ID from the CloudEvent +- A service account, role and role bindings in that new namespace +- A Kubernetes Job with a status-reporter sidecar in that new namespace +- An nginx deployment in the same namespace as the adapter itself + +## Overview + +This example showcases: + +- **Inline manifests**: Defines the Kubernetes Namespace resource directly in the adapter task config +- **External file references**: References external YAML files for Job, ServiceAccount, Role, RoleBinding, and Deployment +- **Preconditions**: Fetches cluster status from the Hyperfleet API before proceeding +- **Resource discovery**: Finds existing resources using label selectors +- **Status reporting**: Builds a status payload with CEL expressions and reports back to the Hyperfleet API +- **Job with sidecar**: Demonstrates a Job pattern with a status-reporter sidecar that monitors job completion and updates job conditions +- **Simulation modes**: Supports different test scenarios via the `SIMULATE_RESULT` environment variable +- **RBAC configuration**: Demonstrates configuring additional RBAC resources in helm values + +## Files + +| File | Description | +|------|-------------| +| `values.yaml` | Helm values that configure the adapter, broker, image, environment variables, and RBAC permissions | +| `adapter-config.yaml` | Adapter deployment config (clients, broker, Kubernetes settings) | +| `adapter-task-config.yaml` | Task configuration with inline
namespace manifest, external file references, params, preconditions, and post-processing | +| `adapter-task-resource-job.yaml` | Kubernetes Job template with a main container and status-reporter sidecar | +| `adapter-task-resource-job-serviceaccount.yaml` | ServiceAccount for the Job to use in the cluster namespace | +| `adapter-task-resource-job-role.yaml` | Role granting permissions for the status-reporter to update job status | +| `adapter-task-resource-job-rolebinding.yaml` | RoleBinding connecting the ServiceAccount to the Role | +| `adapter-task-resource-deployment.yaml` | Nginx deployment template created in the adapter's namespace | + +## Key Features + +### Inline vs External Manifests + +This example uses both approaches: + +**Inline manifest** for the Namespace: + +```yaml +resources: + - name: "clusterNamespace" + manifest: + apiVersion: v1 + kind: Namespace + metadata: + name: "{{ .clusterId }}" +``` + +**External file reference** for complex resources: + +```yaml +resources: + - name: "jobNamespace" + manifest: + ref: "/etc/adapter/job.yaml" +``` + +### Job with Status-Reporter Sidecar + +The Job (`job.yaml`) includes two containers: + +1. **Main container**: Runs the workload and writes results to a shared volume +2. **Status-reporter sidecar**: Monitors the main container, reads results, and updates the Job's status conditions + +This pattern enables the adapter to track job completion through Kubernetes native conditions. 
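As a rough sketch of that two-container pattern (the container images, volume name, and result-file path below are illustrative assumptions, not values taken from this example's `job.yaml`), the shared-volume handoff looks like:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job        # hypothetical name for illustration
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        # Main container: runs the workload, then drops its result
        # onto the shared volume for the sidecar to pick up
        - name: main
          image: busybox:1.36
          command: ["sh", "-c", "echo '{\"status\":\"success\"}' > /results/result.json"]
          volumeMounts:
            - name: results
              mountPath: /results
        # Status-reporter sidecar: waits for the result file; a real
        # reporter would then update the Job's status conditions
        # through the Kubernetes API
        - name: status-reporter
          image: busybox:1.36
          command: ["sh", "-c", "until [ -f /results/result.json ]; do sleep 1; done"]
          volumeMounts:
            - name: results
              mountPath: /results
      volumes:
        - name: results
          emptyDir: {}
```

The `adapter-task-resource-job.yaml` in this directory is the authoritative version; this sketch only shows the two-container/shared-`emptyDir` shape that makes the handoff work.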
+ +### Simulation Modes + +The `SIMULATE_RESULT` environment variable controls test scenarios: + +| Value | Behavior | +|-------|----------| +| `success` | Writes success result and exits cleanly | +| `failure` | Writes failure result and exits with error | +| `hang` | Sleeps indefinitely (tests timeout handling) | +| `crash` | Exits without writing results | +| `invalid-json` | Writes malformed JSON | +| `missing-status` | Writes JSON without required status field | + +Configure in `values.yaml`: + +```yaml +env: + - name: SIMULATE_RESULT + value: success +``` + +## Configuration + +### RBAC Resources + +The `values.yaml` configures RBAC permissions needed for resource management. +In this example it is overly permissive, since it creates deployments and jobs. + +```yaml +rbac: + resources: + - namespaces + - serviceaccounts + - configmaps + - deployments + - roles + - rolebindings + - jobs + - jobs/status + - pods +``` + +### Broker Configuration + +Update the `broker.googlepubsub` section in `values.yaml` with your GCP Pub/Sub settings: + +```yaml +broker: + googlepubsub: + projectId: CHANGE_ME + subscriptionId: CHANGE_ME + topic: CHANGE_ME + deadLetterTopic: CHANGE_ME +``` + +### Image Configuration + +Update the image registry in `values.yaml`: + +```yaml +image: + registry: CHANGE_ME + repository: hyperfleet-adapter + pullPolicy: Always + tag: latest +``` + +## Usage + +```bash +helm install <release-name> ./charts -f charts/examples/kubernetes/values.yaml \ + --namespace <namespace> \ + --set image.registry=quay.io/<org> \ + --set broker.googlepubsub.projectId=<project-id> \ + --set broker.googlepubsub.subscriptionId=<subscription-id> \ + --set broker.googlepubsub.deadLetterTopic=<dead-letter-topic> +``` + +## How It Works + +1. The adapter receives a CloudEvent with a cluster ID and generation +2. **Preconditions**: Fetches cluster status from the Hyperfleet API and captures the cluster name, generation, and ready condition +3. **Validation**: Checks that the cluster's Ready condition is "False" before proceeding +4.
**Resource creation**: Creates resources in order: + - Namespace named with the cluster ID + - ServiceAccount in the new namespace + - Role and RoleBinding for the status-reporter + - Job with main container and status-reporter sidecar + - Nginx deployment in the adapter's namespace +5. **Job execution**: The Job runs, writes results to a shared volume, and the status-reporter updates job conditions +6. **Post-processing**: Builds a status payload checking Applied, Available, and Health conditions +7. **Status reporting**: Reports the status back to the Hyperfleet API diff --git a/charts/examples/adapter-config.yaml b/charts/examples/kubernetes/adapter-config.yaml similarity index 77% rename from charts/examples/adapter-config.yaml rename to charts/examples/kubernetes/adapter-config.yaml index d4502f1..473417b 100644 --- a/charts/examples/adapter-config.yaml +++ b/charts/examples/kubernetes/adapter-config.yaml @@ -11,7 +11,9 @@ spec: version: "0.1.0" # Log the full merged configuration after load (default: false) - debugConfig: false + debugConfig: true + log: + level: debug clients: hyperfleetApi: @@ -22,8 +24,9 @@ spec: retryBackoff: exponential broker: - subscriptionId: "example-clusters-subscription" - topic: "example-clusters" + subscriptionId: "CHANGE_ME" + topic: "CHANGE_ME" kubernetes: apiVersion: "v1" + #kubeConfigPath: PATH_TO_KUBECONFIG # for local development diff --git a/charts/examples/adapter-task-config.yaml b/charts/examples/kubernetes/adapter-task-config.yaml similarity index 96% rename from charts/examples/adapter-task-config.yaml rename to charts/examples/kubernetes/adapter-task-config.yaml index e517ef7..81e0cbd 100644 --- a/charts/examples/adapter-task-config.yaml +++ b/charts/examples/kubernetes/adapter-task-config.yaml @@ -77,6 +77,8 @@ spec: # Resources with valid K8s manifests resources: - name: "clusterNamespace" + transport: + client: "kubernetes" manifest: apiVersion: v1 kind: Namespace @@ -98,6 +100,8 @@ spec: # in the namespace 
created above # it will require a service account to be created in that namespace as well as a role and rolebinding - name: "jobServiceAccount" + transport: + client: "kubernetes" manifest: ref: "/etc/adapter/job-serviceaccount.yaml" discovery: @@ -107,6 +111,8 @@ spec: hyperfleet.io/cluster-id: "{{ .clusterId }}" - name: "job_role" + transport: + client: "kubernetes" manifest: ref: "/etc/adapter/job-role.yaml" discovery: @@ -118,6 +124,8 @@ spec: hyperfleet.io/resource-type: "role" - name: "job_rolebinding" + transport: + client: "kubernetes" manifest: ref: "/etc/adapter/job-rolebinding.yaml" discovery: @@ -129,6 +137,8 @@ spec: hyperfleet.io/resource-type: "rolebinding" - name: "jobNamespace" + transport: + client: "kubernetes" manifest: ref: "/etc/adapter/job.yaml" discovery: @@ -143,6 +153,8 @@ spec: # and using the same service account as the adapter - name: "deploymentNamespace" + transport: + client: "kubernetes" manifest: ref: "/etc/adapter/deployment.yaml" discovery: diff --git a/charts/examples/adapter-task-resource-deployment.yaml b/charts/examples/kubernetes/adapter-task-resource-deployment.yaml similarity index 100% rename from charts/examples/adapter-task-resource-deployment.yaml rename to charts/examples/kubernetes/adapter-task-resource-deployment.yaml diff --git a/charts/examples/adapter-task-resource-job-role.yaml b/charts/examples/kubernetes/adapter-task-resource-job-role.yaml similarity index 100% rename from charts/examples/adapter-task-resource-job-role.yaml rename to charts/examples/kubernetes/adapter-task-resource-job-role.yaml diff --git a/charts/examples/adapter-task-resource-job-rolebinding.yaml b/charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml similarity index 100% rename from charts/examples/adapter-task-resource-job-rolebinding.yaml rename to charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml diff --git a/charts/examples/adapter-task-resource-job-serviceaccount.yaml 
b/charts/examples/kubernetes/adapter-task-resource-job-serviceaccount.yaml similarity index 100% rename from charts/examples/adapter-task-resource-job-serviceaccount.yaml rename to charts/examples/kubernetes/adapter-task-resource-job-serviceaccount.yaml diff --git a/charts/examples/adapter-task-resource-job.yaml b/charts/examples/kubernetes/adapter-task-resource-job.yaml similarity index 100% rename from charts/examples/adapter-task-resource-job.yaml rename to charts/examples/kubernetes/adapter-task-resource-job.yaml diff --git a/charts/examples/values.yaml b/charts/examples/kubernetes/values.yaml similarity index 100% rename from charts/examples/values.yaml rename to charts/examples/kubernetes/values.yaml diff --git a/charts/examples/maestro/README.md b/charts/examples/maestro/README.md new file mode 100644 index 0000000..2aa0ae7 --- /dev/null +++ b/charts/examples/maestro/README.md @@ -0,0 +1,164 @@ +# Adapter example to create resources in a regional cluster + +This `values.yaml` deploys an `adapter-task-config.yaml` that creates: + +- A new namespace with the name of the cluster ID from the CloudEvent +- A service account, role and role bindings in that new namespace +- A Kubernetes Job with a status-reporter sidecar in that new namespace +- An nginx deployment in the same namespace as the adapter itself + +## Overview + +This example showcases: + +- **Inline manifests**: Defines the Kubernetes Namespace resource directly in the adapter task config +- **External file references**: References external YAML files for Job, ServiceAccount, Role, RoleBinding, and Deployment +- **Preconditions**: Fetches cluster status from the Hyperfleet API before proceeding +- **Resource discovery**: Finds existing resources using label selectors +- **Status reporting**: Builds a status payload with CEL expressions and reports back to the Hyperfleet API +- **Job with sidecar**: Demonstrates a Job pattern with a status-reporter sidecar that monitors job completion and updates job
conditions +- **Simulation modes**: Supports different test scenarios via `SIMULATE_RESULT` environment variable +- **RBAC configuration**: Demonstrates configuring additional RBAC resources in helm values + +## Files + +| File | Description | +|------|-------------| +| `values.yaml` | Helm values that configure the adapter, broker, image, environment variables, and RBAC permissions | +| `adapter-config.yaml` | Adapter deployment config (clients, broker, Kubernetes settings) | +| `adapter-task-config.yaml` | Task configuration with inline namespace manifest, external file references, params, preconditions, and post-processing | +| `adapter-task-resource-job.yaml` | Kubernetes Job template with a main container and status-reporter sidecar | +| `adapter-task-resource-job-serviceaccount.yaml` | ServiceAccount for the Job to use in the cluster namespace | +| `adapter-task-resource-job-role.yaml` | Role granting permissions for the status-reporter to update job status | +| `adapter-task-resource-job-rolebinding.yaml` | RoleBinding connecting the ServiceAccount to the Role | +| `adapter-task-resource-deployment.yaml` | Nginx deployment template created in the adapter's namespace | + +## Key Features + +### Inline vs External Manifests + +This example uses both approaches: + +**Inline manifest** for the Namespace: + +```yaml +resources: + - name: "clusterNamespace" + manifest: + apiVersion: v1 + kind: Namespace + metadata: + name: "{{ .clusterId }}" +``` + +**External file reference** for complex resources: + +```yaml +resources: + - name: "jobNamespace" + manifest: + ref: "/etc/adapter/job.yaml" +``` + +### Job with Status-Reporter Sidecar + +The Job (`job.yaml`) includes two containers: + +1. **Main container**: Runs the workload and writes results to a shared volume +2. 
**Status-reporter sidecar**: Monitors the main container, reads results, and updates the Job's status conditions + +This pattern enables the adapter to track job completion through Kubernetes native conditions. + +### Simulation Modes + +The `SIMULATE_RESULT` environment variable controls test scenarios: + +| Value | Behavior | +|-------|----------| +| `success` | Writes success result and exits cleanly | +| `failure` | Writes failure result and exits with error | +| `hang` | Sleeps indefinitely (tests timeout handling) | +| `crash` | Exits without writing results | +| `invalid-json` | Writes malformed JSON | +| `missing-status` | Writes JSON without required status field | + +Configure in `values.yaml`: + +```yaml +env: + - name: SIMULATE_RESULT + value: success +``` + +## Configuration + +### RBAC Resources + +The `values.yaml` configures RBAC permissions needed for resource management. +In this example it is overly permissive, since it creates deployments and jobs. + +```yaml +rbac: + resources: + - namespaces + - serviceaccounts + - configmaps + - deployments + - roles + - rolebindings + - jobs + - jobs/status + - pods +``` + +### Broker Configuration + +Update the `broker.googlepubsub` section in `values.yaml` with your GCP Pub/Sub settings: + +```yaml +broker: + googlepubsub: + projectId: CHANGE_ME + subscriptionId: CHANGE_ME + topic: CHANGE_ME + deadLetterTopic: CHANGE_ME +``` + +### Image Configuration + +Update the image registry in `values.yaml`: + +```yaml +image: + registry: CHANGE_ME + repository: hyperfleet-adapter + pullPolicy: Always + tag: latest +``` + +## Usage + +```bash +helm install <release-name> ./charts -f charts/examples/maestro/values.yaml \ + --namespace <namespace> \ + --set image.registry=quay.io/<org> \ + --set broker.googlepubsub.projectId=<project-id> \ + --set broker.googlepubsub.subscriptionId=<subscription-id> \ + --set broker.googlepubsub.deadLetterTopic=<dead-letter-topic> +``` + +## How It Works + +1. The adapter receives a CloudEvent with a cluster ID and generation +2.
**Preconditions**: Fetches cluster status from the Hyperfleet API and captures the cluster name, generation, and ready condition +3. **Validation**: Checks that the cluster's Ready condition is "False" before proceeding +4. **Resource creation**: Creates resources in order: + - Namespace named with the cluster ID + - ServiceAccount in the new namespace + - Role and RoleBinding for the status-reporter + - Job with main container and status-reporter sidecar + - Nginx deployment in the adapter's namespace +5. **Job execution**: The Job runs, writes results to a shared volume, and the status-reporter updates job conditions +6. **Post-processing**: Builds a status payload checking Applied, Available, and Health conditions +7. **Status reporting**: Reports the status back to the Hyperfleet API diff --git a/charts/examples/maestro/adapter-config.yaml b/charts/examples/maestro/adapter-config.yaml new file mode 100644 index 0000000..e8bb971 --- /dev/null +++ b/charts/examples/maestro/adapter-config.yaml @@ -0,0 +1,70 @@ +# Example HyperFleet Adapter deployment configuration +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterConfig +metadata: + name: example1-namespace + labels: + hyperfleet.io/adapter-type: example1-namespace + hyperfleet.io/component: adapter +spec: + adapter: + version: "0.1.0" + + # Log the full merged configuration after load (default: false) + debugConfig: true + log: + level: debug + + clients: + hyperfleetApi: + baseUrl: http://hyperfleet-api:8000 + version: v1 + timeout: 2s + retryAttempts: 3 + retryBackoff: exponential + + broker: + subscriptionId: "CHANGE_ME" + topic: "CHANGE_ME" + + maestro: + grpcServerAddress: "maestro-grpc.maestro.svc.cluster.local:8090" + + # HTTPS server address for REST API operations (optional) + # Environment variable: HYPERFLEET_MAESTRO_HTTP_SERVER_ADDRESS + httpServerAddress: "http://maestro.maestro.svc.cluster.local:8000" + + # Source identifier for CloudEvents routing (must be unique across adapters) + # 
Environment variable: HYPERFLEET_MAESTRO_SOURCE_ID + sourceId: "hyperfleet-adapter" + + # Client identifier (defaults to sourceId if not specified) + # Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID + clientId: "hyperfleet-adapter-client" + insecure: true + + # Authentication configuration + #auth: + # type: "tls" # TLS certificate-based mTLS + # + # tlsConfig: + # # gRPC TLS configuration + # # Certificate paths (mounted from Kubernetes secrets) + # # Environment variable: HYPERFLEET_MAESTRO_CA_FILE + # caFile: "/etc/maestro/certs/grpc/ca.crt" + # + # # Environment variable: HYPERFLEET_MAESTRO_CERT_FILE + # certFile: "/etc/maestro/certs/grpc/client.crt" + # + # # Environment variable: HYPERFLEET_MAESTRO_KEY_FILE + # keyFile: "/etc/maestro/certs/grpc/client.key" + # + # # Server name for TLS verification + # # Environment variable: HYPERFLEET_MAESTRO_SERVER_NAME + # serverName: "maestro-grpc.maestro.svc.cluster.local" + # + # # HTTP API TLS configuration (may use different CA than gRPC) + # # If not set, falls back to caFile for backwards compatibility + # # Environment variable: HYPERFLEET_MAESTRO_HTTP_CA_FILE + # httpCaFile: "/etc/maestro/certs/https/ca.crt" + diff --git a/charts/examples/maestro/adapter-task-config.yaml b/charts/examples/maestro/adapter-task-config.yaml new file mode 100644 index 0000000..e932b84 --- /dev/null +++ b/charts/examples/maestro/adapter-task-config.yaml @@ -0,0 +1,167 @@ +# Example HyperFleet Adapter task configuration +apiVersion: hyperfleet.redhat.com/v1alpha1 +kind: AdapterTaskConfig +metadata: + name: example1-namespace + labels: + hyperfleet.io/adapter-type: example1-namespace + hyperfleet.io/component: adapter +spec: + # Parameters with all required variables + params: + + - name: "clusterId" + source: "event.id" + type: "string" + required: true + + - name: "generationSpec" + source: "event.generation" + type: "int" + required: true + + - name: "simulateResult" + source: "env.SIMULATE_RESULT" + type: "string" + required: 
true + + # Preconditions with valid operators and CEL expressions + preconditions: + - name: "clusterStatus" + apiCall: + method: "GET" + url: "/clusters/{{ .clusterId }}" + timeout: 10s + retryAttempts: 3 + retryBackoff: "exponential" + capture: + - name: "clusterName" + field: "name" + - name: "generationSpec" + field: "generation" + - name: "readyConditionStatus" + expression: | + status.conditions.filter(c, c.type == "Ready").size() > 0 + ? status.conditions.filter(c, c.type == "Ready")[0].status + : "False" + # Structured conditions with valid operators + conditions: + - field: "readyConditionStatus" + operator: "equals" + value: "False" + + - name: "validationCheck" + # Valid CEL expression + expression: | + readyConditionStatus == "False" + + # Resources with valid K8s manifests + resources: + # Maestro transport bundles multiple manifests into a single ManifestWork + - name: "maestromanifest" + transport: + client: "maestro" + maestro: + targetCluster: cluster1 + manifests: + - name: "maestro-ns" + manifest: + apiVersion: v1 + kind: Namespace + metadata: + name: "maestro-{{ .clusterId }}" + labels: + hyperfleet.io/cluster-id: "{{ .clusterId }}" + hyperfleet.io/cluster-name: "{{ .clusterName }}" + annotations: + hyperfleet.io/generation: "{{ .generationSpec }}" + - name: "maestro-cm" + manifest: + apiVersion: v1 + kind: ConfigMap + metadata: + name: "cluster-config" + namespace: "maestro-{{ .clusterId }}" + labels: + hyperfleet.io/cluster-id: "{{ .clusterId }}" + annotations: + hyperfleet.io/generation: "{{ .generationSpec }}" + data: + cluster_id: "{{ .clusterId }}" + cluster_name: "{{ .clusterName }}" + discovery: + namespace: "*" # Cluster-scoped resource (Namespace) + bySelectors: + labelSelector: + hyperfleet.io/cluster-id: "{{ .clusterId }}" + hyperfleet.io/cluster-name: "{{ .clusterName }}" + + + # Post-processing with valid CEL expressions + # This example contains multiple resources, we will only report on the conditions of the jobNamespace not to 
overcomplicate the example + post: + payloads: + - name: "clusterStatusPayload" + build: + adapter: "{{ .metadata.name }}" + conditions: + # Applied: Job successfully created + - type: "Applied" + status: + expression: | + has(resources.jobNamespace.spec) ? "True" : "False" + reason: + expression: | + has(resources.jobNamespace.spec) + ? "JobApplied" + : "JobPending" + message: + expression: | + has(resources.jobNamespace) + ? "jobNamespace manifest applied successfully" + : "jobNamespace is pending to be applied" + # Available: Check job status conditions + - type: "Available" + status: + expression: | + has(resources.jobNamespace.status.conditions) ? + ( resources.?jobNamespace.?status.?conditions.orValue([]).exists(c, c.type == "Available") + ? resources.jobNamespace.status.conditions.filter(c, c.type == "Available")[0].status : "False") + : "Unknown" + reason: + expression: | + resources.?jobNamespace.?status.?conditions.orValue([]).exists(c, c.type == "Available") + ? resources.jobNamespace.status.conditions.filter(c, c.type == "Available")[0].reason + : resources.?jobNamespace.?status.?conditions.orValue([]).exists(c, c.type == "Failed") ? "ValidationFailed" + : resources.?jobNamespace.?status.hasValue() ? "ValidationInProgress" : "ValidationPending" + message: + expression: | + resources.?jobNamespace.?status.?conditions.orValue([]).exists(c, c.type == "Available") + ? resources.jobNamespace.status.conditions.filter(c, c.type == "Available")[0].message + : resources.?jobNamespace.?status.?conditions.orValue([]).exists(c, c.type == "Failed") ? "Validation failed" + : resources.?jobNamespace.?status.hasValue() ? "Validation in progress" : "Validation is pending" + # Health: Adapter execution status (runtime) + - type: "Health" + status: + expression: | + adapter.?executionStatus.orValue("") == "success" ? "True" : "False" + reason: + expression: | + adapter.?errorReason.orValue("") != "" ? 
adapter.?errorReason.orValue("") : "Healthy" + message: + expression: | + adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations in progress or completed successfully" + # Event generation ID metadata field needs to use expression to avoid interpolation issues + observed_generation: + expression: "generationSpec" + observed_time: "{{ now | date \"2006-01-02T15:04:05Z07:00\" }}" + + postActions: + - name: "reportClusterStatus" + apiCall: + method: "POST" + url: "/clusters/{{ .clusterId }}/statuses" + headers: + - name: "Content-Type" + value: "application/json" + body: "{{ .clusterStatusPayload }}" diff --git a/charts/examples/maestro/adapter-task-resource-namespace.yaml b/charts/examples/maestro/adapter-task-resource-namespace.yaml new file mode 100644 index 0000000..0e632c4 --- /dev/null +++ b/charts/examples/maestro/adapter-task-resource-namespace.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: "{{ .clusterId }}" + labels: + hyperfleet.io/cluster-id: "{{ .clusterId }}" + hyperfleet.io/cluster-name: "{{ .clusterName }}" + annotations: + hyperfleet.io/generation: "{{ .generationSpec }}" diff --git a/charts/examples/maestro/values.yaml b/charts/examples/maestro/values.yaml new file mode 100644 index 0000000..222e94a --- /dev/null +++ b/charts/examples/maestro/values.yaml @@ -0,0 +1,50 @@ +adapterConfig: + create: true + files: + adapter-config.yaml: examples/maestro/adapter-config.yaml + log: + level: debug + +adapterTaskConfig: + create: true + files: + task-config.yaml: examples/maestro/adapter-task-config.yaml + namespace.yaml: examples/maestro/adapter-task-resource-namespace.yaml + +broker: + create: true + googlepubsub: + projectId: CHANGE_ME + subscriptionId: CHANGE_ME + topic: CHANGE_ME + deadLetterTopic: CHANGE_ME + +image: + registry: CHANGE_ME + repository: hyperfleet-adapter + pullPolicy: Always + tag: latest + +env: + - name: NAMESPACE + valueFrom: + fieldRef: + fieldPath: 
metadata.namespace + - name: SERVICE_ACCOUNT + valueFrom: + fieldRef: + fieldPath: spec.serviceAccountName + - name: SIMULATE_RESULT + value: success # other possible values: success, failure, hang, crash, invalid-json, missing-status + +rbac: + resources: + - namespaces + - serviceaccounts + - configmaps + - deployments + - roles + - rolebindings + - jobs + - jobs/status + - pods diff --git a/cmd/adapter/main.go b/cmd/adapter/main.go index 292a8c5..b75ba85 100644 --- a/cmd/adapter/main.go +++ b/cmd/adapter/main.go @@ -9,10 +9,10 @@ import ( "syscall" "time" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/client_factory" "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/config_loader" "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/executor" - "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/hyperfleet_api" - "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/k8s_client" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client" "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/health" "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger" "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/otel" @@ -50,7 +50,6 @@ const ( ) func main() { - // Root command rootCmd := &cobra.Command{ Use: "adapter", @@ -269,20 +268,35 @@ func runServe() error { // Create HyperFleet API client from config log.Info(ctx, "Creating HyperFleet API client...") - apiClient, err := createAPIClient(config.Spec.Clients.HyperfleetAPI, log) + apiClient, err := client_factory.CreateAPIClient(config.Spec.Clients.HyperfleetAPI, log) if err != nil { errCtx := logger.WithErrorField(ctx, err) log.Errorf(errCtx, "Failed to create HyperFleet API client") return fmt.Errorf("failed to create HyperFleet API client: %w", err) } - // Create Kubernetes client - log.Info(ctx, "Creating Kubernetes client...") - k8sClient, err := createK8sClient(ctx, config.Spec.Clients.Kubernetes, log) - if err != nil { - 
errCtx := logger.WithErrorField(ctx, err) - log.Errorf(errCtx, "Failed to create Kubernetes client") - return fmt.Errorf("failed to create Kubernetes client: %w", err) + // Create transport client — either Maestro or Kubernetes, based on config + var transportClient transport_client.TransportClient + if config.Spec.Clients.Maestro != nil { + log.Info(ctx, "Creating Maestro client as transport...") + maestroClient, err := client_factory.CreateMaestroClient(ctx, config.Spec.Clients.Maestro, log) + if err != nil { + errCtx := logger.WithErrorField(ctx, err) + log.Errorf(errCtx, "Failed to create Maestro client") + return fmt.Errorf("failed to create Maestro client: %w", err) + } + transportClient = maestroClient + log.Info(ctx, "Maestro client created as transport client") + } else { + log.Info(ctx, "Creating Kubernetes client as transport...") + k8sClient, err := client_factory.CreateK8sClient(ctx, config.Spec.Clients.Kubernetes, log) + if err != nil { + errCtx := logger.WithErrorField(ctx, err) + log.Errorf(errCtx, "Failed to create Kubernetes client") + return fmt.Errorf("failed to create Kubernetes client: %w", err) + } + transportClient = k8sClient + log.Info(ctx, "Kubernetes client created as transport client") } // Create the executor using the builder pattern @@ -290,7 +304,7 @@ func runServe() error { exec, err := executor.NewBuilder(). WithConfig(config). WithAPIClient(apiClient). - WithK8sClient(k8sClient). + WithTransportClient(transportClient). WithLogger(log). 
Build() if err != nil { @@ -437,60 +451,3 @@ func runServe() error { return nil } - -// createAPIClient creates a HyperFleet API client from the config -func createAPIClient(apiConfig config_loader.HyperfleetAPIConfig, log logger.Logger) (hyperfleet_api.Client, error) { - var opts []hyperfleet_api.ClientOption - - // Set base URL if configured (env fallback handled in NewClient) - if apiConfig.BaseURL != "" { - opts = append(opts, hyperfleet_api.WithBaseURL(apiConfig.BaseURL)) - } - - // Set timeout if configured (0 means use default) - if apiConfig.Timeout > 0 { - opts = append(opts, hyperfleet_api.WithTimeout(apiConfig.Timeout)) - } - - // Set retry attempts - if apiConfig.RetryAttempts > 0 { - opts = append(opts, hyperfleet_api.WithRetryAttempts(apiConfig.RetryAttempts)) - } - - // Set retry backoff strategy - if apiConfig.RetryBackoff != "" { - switch apiConfig.RetryBackoff { - case hyperfleet_api.BackoffExponential, hyperfleet_api.BackoffLinear, hyperfleet_api.BackoffConstant: - opts = append(opts, hyperfleet_api.WithRetryBackoff(apiConfig.RetryBackoff)) - default: - return nil, fmt.Errorf("invalid retry backoff strategy %q (supported: exponential, linear, constant)", apiConfig.RetryBackoff) - } - } - - // Set retry base delay - if apiConfig.BaseDelay > 0 { - opts = append(opts, hyperfleet_api.WithBaseDelay(apiConfig.BaseDelay)) - } - - // Set retry max delay - if apiConfig.MaxDelay > 0 { - opts = append(opts, hyperfleet_api.WithMaxDelay(apiConfig.MaxDelay)) - } - - // Set default headers - for key, value := range apiConfig.DefaultHeaders { - opts = append(opts, hyperfleet_api.WithDefaultHeader(key, value)) - } - - return hyperfleet_api.NewClient(log, opts...) 
-}
-
-// createK8sClient creates a Kubernetes client from the config
-func createK8sClient(ctx context.Context, k8sConfig config_loader.KubernetesConfig, log logger.Logger) (*k8s_client.Client, error) {
-	clientConfig := k8s_client.ClientConfig{
-		KubeConfigPath: k8sConfig.KubeConfigPath,
-		QPS:            k8sConfig.QPS,
-		Burst:          k8sConfig.Burst,
-	}
-	return k8s_client.NewClient(ctx, clientConfig, log)
-}
diff --git a/configs/README.md b/configs/README.md
deleted file mode 100644
index 0afaf3d..0000000
--- a/configs/README.md
+++ /dev/null
@@ -1,216 +0,0 @@
-# Broker Configuration
-
-This directory contains ConfigMap templates and examples for configuring the hyperfleet-adapter broker consumer.
-
-## Files
-
-- **`broker-configmap-pubsub-template.yaml`** - Comprehensive template with all options and documentation
-- **`broker-configmap-pubsub-example.yaml`** - Simple ready-to-use example for quick start
-
-## Quick Start
-
-### 1. Choose Your Broker
-
-Currently supported:
-- **Google Pub/Sub** - For GCP environments
-- RabbitMQ - (template can be added if needed)
-
-### 2. Configure for Google Pub/Sub
-
-Edit `broker-configmap-pubsub-example.yaml`:
-
-```yaml
-data:
-  # Broker configuration
-  BROKER_GOOGLEPUBSUB_PROJECT_ID: "your-gcp-project"
-```
-
-Also set the adapter broker settings in the deployment config:
-
-```yaml
-spec:
-  clients:
-    broker:
-      subscriptionId: "your-subscription-name"
-      topic: "your-topic-name"
-```
-
-### 3. Apply the ConfigMap
-
-```bash
-kubectl apply -f configs/broker-configmap-pubsub-example.yaml
-```
-
-### 4. Reference in Deployment
-
-The adapter deployment should reference this ConfigMap using `envFrom`:
-
-```yaml
-spec:
-  containers:
-  - name: adapter
-    envFrom:
-    - configMapRef:
-        name: hyperfleet-broker-pubsub
-```
-
-## Configuration Options
-
-### Environment Variables
-
-The hyperfleet-broker library reads configuration from environment variables:
-
-#### Required Variables
-
-| Variable | Description | Example |
-|----------|-------------|---------|
-| `BROKER_TYPE` | Broker type | `googlepubsub` |
-| `BROKER_GOOGLEPUBSUB_PROJECT_ID` | GCP project ID | `my-project` |
-
-#### Optional Variables
-
-| Variable | Description | Default |
-|----------|-------------|---------|
-| `BROKER_GOOGLEPUBSUB_TOPIC` | Topic name for publishing | - |
-| `BROKER_GOOGLEPUBSUB_MAX_OUTSTANDING_MESSAGES` | Max unacked messages | `1000` |
-| `BROKER_GOOGLEPUBSUB_NUM_GOROUTINES` | Pub/Sub client goroutines | `10` |
-| `SUBSCRIBER_PARALLELISM` | Concurrent handlers | `1` |
-| `LOG_CONFIG` | Log configuration at startup | `false` |
-
-### Configuration File (Alternative)
-
-Instead of environment variables, you can use a `broker.yaml` file:
-
-```yaml
-broker:
-  type: googlepubsub
-  googlepubsub:
-    project_id: "my-project"
-    subscription: "my-subscription"
-    max_outstanding_messages: 1000
-    num_goroutines: 10
-
-subscriber:
-  parallelism: 5
-```
-
-Mount this as a file and set `BROKER_CONFIG_FILE=/etc/broker/broker.yaml`.
-
-## GCP Authentication
-
-### Option 1: Workload Identity (Recommended for GKE)
-
-```yaml
-serviceAccountName: hyperfleet-adapter
-# GKE will automatically inject credentials
-```
-
-### Option 2: Service Account Key
-
-```yaml
-env:
-- name: GOOGLE_APPLICATION_CREDENTIALS
-  value: /var/secrets/google/key.json
-volumeMounts:
-- name: gcp-credentials
-  mountPath: /var/secrets/google
-  readOnly: true
-volumes:
-- name: gcp-credentials
-  secret:
-    secretName: gcp-service-account-key
-```
-
-### Option 3: Emulator (Development Only)
-
-```yaml
-env:
-- name: PUBSUB_EMULATOR_HOST
-  value: "localhost:8085"
-```
-
-## Performance Tuning
-
-### High Throughput
-
-For processing many messages:
-
-```yaml
-BROKER_GOOGLEPUBSUB_MAX_OUTSTANDING_MESSAGES: "5000"
-BROKER_GOOGLEPUBSUB_NUM_GOROUTINES: "20"
-SUBSCRIBER_PARALLELISM: "10"
-```
-
-### Low Latency
-
-For quick response times:
-
-```yaml
-BROKER_GOOGLEPUBSUB_MAX_OUTSTANDING_MESSAGES: "100"
-BROKER_GOOGLEPUBSUB_NUM_GOROUTINES: "5"
-SUBSCRIBER_PARALLELISM: "3"
-```
-
-### Memory Constrained
-
-For limited memory environments:
-
-```yaml
-BROKER_GOOGLEPUBSUB_MAX_OUTSTANDING_MESSAGES: "100"
-BROKER_GOOGLEPUBSUB_NUM_GOROUTINES: "2"
-SUBSCRIBER_PARALLELISM: "1"
-```
-
-## Troubleshooting
-
-### Enable Debug Logging
-
-```yaml
-LOG_CONFIG: "true"
-```
-
-This will log the complete broker configuration at startup.
-
-### Check Credentials
-
-```bash
-# Verify service account has required permissions
-kubectl exec -it <pod-name> -- sh
-# Inside pod:
-gcloud auth list
-gcloud pubsub subscriptions list --project=<project-id>
-```
-
-### Test Pub/Sub Connection
-
-```bash
-# From adapter pod
-gcloud pubsub subscriptions pull <subscription-id> \
-  --project=<project-id> \
-  --limit=1 \
-  --auto-ack
-```
-
-## Required GCP Permissions
-
-The service account needs these IAM roles:
-
-```
-roles/pubsub.subscriber  # To consume messages
-roles/pubsub.viewer      # To view subscriptions
-```
-
-Or these specific permissions:
-
-```
-pubsub.subscriptions.consume
-pubsub.subscriptions.get
-```
-
-## See Also
-
-- [hyperfleet-broker Library](https://github.com/openshift-hyperfleet/hyperfleet-broker)
-- [Internal broker_consumer Package](../internal/broker_consumer/README.md)
-- [Integration Tests](../test/integration/broker_consumer/README.md)
-- [CloudEvents Specification](https://github.com/cloudevents/spec)
-
diff --git a/configs/adapter-config-template.yaml b/configs/adapter-config-template.yaml
deleted file mode 100644
index 4f2ce1b..0000000
--- a/configs/adapter-config-template.yaml
+++ /dev/null
@@ -1,293 +0,0 @@
-# HyperFleet Adapter Task Configuration Template (MVP)
-#
-# This is a Configuration Template for configuring cloud provider adapters
-# using the HyperFleet Adapter Framework with CEL (Common Expression Language).
-#
-# TEMPLATE SYNTAX:
-# ================
-# 1. Go Templates ({{ .var }}) - Variable interpolation throughout
-# 2. field: "path" - Simple JSON path extraction (translated to CEL internally)
-# 3. expression: "cel" - Full CEL expressions for complex logic
-#
-# CONDITION SYNTAX (when:):
-# =========================
-# Option 1: Expression syntax (CEL)
-#   when:
-#     expression: |
-#       clusterPhase == "Terminating"
-#
-# Option 2: Structured conditions (field + operator + value)
-#   when:
-#     conditions:
-#       - field: "clusterPhase"
-#         operator: "equals"
-#         value: "Terminating"
-#
-# Supported operators: equals, notEquals, in, notIn, contains, greaterThan, lessThan, exists
-#
-# CEL OPTIONAL CHAINING:
-# ======================
-# Use optional chaining with orValue() to safely access potentially missing fields:
-#   resources.?clusterNamespace.?status.?phase.orValue("")
-#   adapter.?executionStatus.orValue("")
-#
-# Copy this file to your adapter repository and customize for your needs.
-
-apiVersion: hyperfleet.redhat.com/v1alpha1
-kind: AdapterTaskConfig
-metadata:
-  # Adapter name (used as resource name and in logs/metrics)
-  name: example-adapter
-  labels:
-    hyperfleet.io/adapter-type: example
-    hyperfleet.io/component: adapter
-
-# ============================================================================
-# Task Specification
-# ============================================================================
-spec:
-  # ==========================================================================
-  # Global params
-  # ==========================================================================
-  # params to extract from CloudEvent and environment variables
-  #
-  # SUPPORTED TYPES:
-  # ================
-  # - string: Default, any value converted to string
-  # - int/int64: Integer value (strings parsed, floats truncated)
-  # - float/float64: Floating point value
-  # - bool: Boolean (supports: true/false, yes/no, on/off, 1/0)
-
-  params:
-    # Environment variables from deployment
-    - name: "hyperfleetApiBaseUrl"
-      source: "env.HYPERFLEET_API_BASE_URL"
-      type: "string"
-      description: "Base URL for the HyperFleet API"
-      required: true
-
-    - name: "hyperfleetApiVersion"
-      source: "env.HYPERFLEET_API_VERSION"
-      type: "string"
-      default: "v1"
-      description: "API version to use"
-
-    # Extract from CloudEvent data
-    - name: "clusterId"
-      source: "event.id"
-      type: "string"
-      description: "Unique identifier for the target cluster"
-      required: true
-
-    # Example: Extract and convert to int
-    # - name: "nodeCount"
-    #   source: "event.spec.nodeCount"
-    #   type: "int"
-    #   default: 3
-    #   description: "Number of nodes in the cluster"
-
-    # Example: Extract and convert to bool
-    # - name: "enableFeature"
-    #   source: "env.ENABLE_FEATURE"
-    #   type: "bool"
-    #   default: false
-    #   description: "Enable experimental feature"
-
-  # ==========================================================================
-  # Global Preconditions
-  # ==========================================================================
-  # These preconditions run sequentially and validate cluster state before resource operations.
-  #
-  # DATA SCOPES:
-  # ============
-  # Capture scope (field/expression): API response data only
-  #   - Access: status.phase, items[0].name, etc.
-  #
-  # Conditions scope (conditions/expression): Full execution context
-  #   - params.*       : Original extracted params
-  #   - <name>.*       : Full API response (e.g., clusterStatus.status.phase)
-  #   - capturedField  : Explicitly captured values
-  #   - adapter.*      : Adapter metadata
-  #   - resources.*    : Created resources (empty during preconditions)
-
-  preconditions:
-    # ========================================================================
-    # Step 1: Get cluster status
-    # ========================================================================
-    - name: "clusterStatus"
-      apiCall:
-        method: "GET"
-        # NOTE: API path includes /api/hyperfleet/ prefix
-        url: "{{ .hyperfleetApiBaseUrl }}/api/hyperfleet/{{ .hyperfleetApiVersion }}/clusters/{{ .clusterId }}"
-        timeout: 10s
-        retryAttempts: 3
-        retryBackoff: "exponential"
-      # Capture fields from the API response. Captured values become variables for use in resources section.
-      # SCOPE: API response data only
-      # Supports two modes:
-      #   - field: Simple dot notation or JSONPath expression for extracting values
-      #   - expression: CEL expression for computed values
-      # Only one of 'field' or 'expression' can be set per capture.
-      capture:
-        # Simple dot notation
-        - name: "clusterName"
-          field: "name"
-        - name: "clusterPhase"
-          field: "status.phase"
-        - name: "generationId"
-          field: "generation"
-
-        # JSONPath for complex extraction (filter by field value)
-        # See: https://kubernetes.io/docs/reference/kubectl/jsonpath/
-        # - name: "lzNamespaceStatus"
-        #   field: "{.items[?(@.adapter=='landing-zone-adapter')].data.namespace.status}"
-
-        # CEL expression for computed values
-        # - name: "activeItemCount"
-        #   expression: "items.filter(i, i.status == 'active').size()"
-
-      # Conditions to check. SCOPE: Full execution context
-      # You can access:
-      #   - Captured values: clusterPhase, clusterName, etc.
-      #   - Full API response: clusterStatus.status.phase, clusterStatus.spec.nodeCount
-      #   - Params: clusterId, hyperfleetApiBaseUrl, etc.
-      conditions:
-        # Using captured value
-        - field: "clusterPhase"
-          operator: "equals"
-          value: "NotReady"
-
-        # Or dig directly into API response using precondition name
-        # - field: "clusterStatus.status.nodeCount"
-        #   operator: "greaterThan"
-        #   value: 0
-
-      # Alternative: CEL expression with full access
-      # expression: |
-      #   clusterStatus.status.phase == "Ready" &&
-      #   clusterStatus.spec.nodeCount > 0
-
-  # ==========================================================================
-  # Resources (Create/Update Resources)
-  # ==========================================================================
-  # All resources are created/updated sequentially in the order defined below
-  resources:
-    # ========================================================================
-    # Resource 1: Cluster Namespace
-    # ========================================================================
-    - name: "clusterNamespace"
-      manifest:
-        apiVersion: v1
-        kind: Namespace
-        metadata:
-          # Use | lower to ensure valid K8s resource name (lowercase RFC 1123)
-          name: "{{ .clusterId | lower }}"
-          labels:
-            hyperfleet.io/cluster-id: "{{ .clusterId }}"
-            hyperfleet.io/managed-by: "{{ .metadata.name }}"
-            hyperfleet.io/resource-type: "namespace"
-          annotations:
-            hyperfleet.io/created-by: "hyperfleet-adapter"
-            hyperfleet.io/generation: "{{ .generationId }}"
-      discovery:
-        # The "namespace" field within discovery is optional:
-        #   - For namespaced resources: set namespace to target the specific namespace
-        #   - For cluster-scoped resources (like Namespace, ClusterRole): omit or leave empty
-        # Here we omit it since Namespace is cluster-scoped
-        bySelectors:
-          labelSelector:
-            hyperfleet.io/resource-type: "namespace"
-            hyperfleet.io/cluster-id: "{{ .clusterId }}"
-            hyperfleet.io/managed-by: "{{ .metadata.name }}"
-
-  # ==========================================================================
-  # Post-Processing
-  # ==========================================================================
-  post:
-    payloads:
-      # Build status payload inline
-      - name: "clusterStatusPayload"
-        build:
-          # Adapter name for tracking which adapter reported this status
-          adapter: "{{ .metadata.name }}"
-
-          # Conditions array - each condition has type, status, reason, message
-          # Use CEL optional chaining ?.orValue() for safe field access
-          conditions:
-            # Applied: Resources successfully created
-            - type: "Applied"
-              status:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("") == "Active" ? "True" : "False"
-              reason:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("") == "Active"
-                  ? "NamespaceCreated"
-                  : "NamespacePending"
-              message:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("") == "Active"
                  ? "Namespace created successfully"
-                  : "Namespace creation in progress"
-
-            # Available: Resources are active and ready
-            - type: "Available"
-              status:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("") == "Active" ? "True" : "False"
-              reason:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("") == "Active" ? "NamespaceReady" : "NamespaceNotReady"
-              message:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("") == "Active" ? "Namespace is active and ready" : "Namespace is not active and ready"
-
-            # Health: Adapter execution status (runtime). No need to update this; it can be reused from the adapter config.
-            - type: "Health"
-              status:
-                expression: |
-                  adapter.?executionStatus.orValue("") == "success" ? "True" : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
-              reason:
-                expression: |
-                  adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy"
-              message:
-                expression: |
-                  adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations completed successfully"
-
-          # Use CEL expression for numeric fields to preserve type (not Go template which outputs strings)
-          observed_generation:
-            expression: "generationId"
-
-          # Use Go template with now and date functions for timestamps
-          observed_time: "{{ now | date \"2006-01-02T15:04:05Z07:00\" }}"
-
-          # Optional data field for adapter-specific metrics extracted from resources
-          data:
-            namespace:
-              name:
-                expression: |
-                  resources.?clusterNamespace.?metadata.?name.orValue("")
-              status:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("")
-
-  # ==========================================================================
-  # Post Actions
-  # ==========================================================================
-  # Post actions are executed after resources are created/updated
-  postActions:
-    # Report cluster status to HyperFleet API (always executed)
-    - name: "reportClusterStatus"
-      apiCall:
-        method: "POST"
-        # NOTE: API path includes /api/hyperfleet/ prefix and ends with /statuses
-        url: "{{ .hyperfleetApiBaseUrl }}/api/hyperfleet/{{ .hyperfleetApiVersion }}/clusters/{{ .clusterId }}/statuses"
-        body: "{{ .clusterStatusPayload }}"
-        timeout: 30s
-        retryAttempts: 3
-        retryBackoff: "exponential"
-        headers:
-          - name: "Content-Type"
-            value: "application/json"
diff --git a/configs/adapter-deployment-config.yaml b/configs/adapter-deployment-config.yaml
deleted file mode 100644
index f14c5ac..0000000
--- a/configs/adapter-deployment-config.yaml
+++ /dev/null
@@ -1,120 +0,0 @@
-# HyperFleet Adapter Deployment Configuration
-#
-# This file contains ONLY infrastructure and deployment-related settings:
-#   - Client connections (Maestro, HyperFleet API, Kubernetes)
-#   - Authentication and TLS configuration
-#   - Connection timeouts and retry policies
-#
-# NOTE: This is a SAMPLE configuration file for reference and local development.
-# It is NOT automatically packaged with the container image (see Dockerfile).
-#
-# In production, provide configuration via one of these methods:
-#   1. ADAPTER_CONFIG_PATH environment variable pointing to a config file (highest priority)
-#   2. ConfigMap mounted at /etc/adapter/config/adapter-deployment-config.yaml
-#
-# Example Kubernetes deployment:
-#   env:
-#     - name: ADAPTER_CONFIG_PATH
-#       value: /etc/adapter/config/adapter-deployment-config.yaml
-#   volumeMounts:
-#     - name: config
-#       mountPath: /etc/adapter/config
-#
-# For business logic configuration (params, preconditions, resources, post-actions),
-# use a separate business config file. See configs/adapter-config-template.yaml
-
-apiVersion: hyperfleet.redhat.com/v1alpha1
-kind: AdapterConfig
-metadata:
-  name: hyperfleet-adapter
-  labels:
-    hyperfleet.io/component: adapter
-
-spec:
-  adapter:
-    version: "0.1.0"
-
-    # Log the full merged configuration after load (default: false)
-    # Environment variable: HYPERFLEET_DEBUG_CONFIG
-    # Flag: --debug-config
-    debugConfig: false
-
-  # Client configurations for external services
-  clients:
-    # Maestro transport client configuration
-    maestro:
-      # gRPC server address
-      # Environment variable: HYPERFLEET_MAESTRO_GRPC_SERVER_ADDRESS
-      # Flag: --maestro-grpc-server-address
-      grpcServerAddress: "maestro-grpc.maestro.svc.cluster.local:8090"
-
-      # HTTPS server address for REST API operations (optional)
-      # Environment variable: HYPERFLEET_MAESTRO_HTTP_SERVER_ADDRESS
-      httpServerAddress: "https://maestro-api.maestro.svc.cluster.local"
-
-      # Source identifier for CloudEvents routing (must be unique across adapters)
-      # Environment variable: HYPERFLEET_MAESTRO_SOURCE_ID
-      sourceId: "hyperfleet-adapter"
-
-      # Client identifier (defaults to sourceId if not specified)
-      # Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID
-      clientId: "hyperfleet-adapter-client"
-
-      # Authentication configuration
-      auth:
-        type: "tls"  # TLS certificate-based mTLS
-
-        tlsConfig:
-          # gRPC TLS configuration
-          # Certificate paths (mounted from Kubernetes secrets)
-          # Environment variable: HYPERFLEET_MAESTRO_CA_FILE
-          caFile: "/etc/maestro/certs/grpc/ca.crt"
-
-          # Environment variable: HYPERFLEET_MAESTRO_CERT_FILE
-          certFile: "/etc/maestro/certs/grpc/client.crt"
-
-          # Environment variable: HYPERFLEET_MAESTRO_KEY_FILE
-          keyFile: "/etc/maestro/certs/grpc/client.key"
-
-          # Server name for TLS verification
-          # Environment variable: HYPERFLEET_MAESTRO_SERVER_NAME
-          serverName: "maestro-grpc.maestro.svc.cluster.local"
-
-          # HTTP API TLS configuration (may use different CA than gRPC)
-          # If not set, falls back to caFile for backwards compatibility
-          # Environment variable: HYPERFLEET_MAESTRO_HTTP_CA_FILE
-          httpCaFile: "/etc/maestro/certs/https/ca.crt"
-
-      # Connection settings
-      timeout: "30s"
-      retryAttempts: 3
-      retryBackoff: "exponential"
-
-      # Keep-alive for long-lived gRPC connections
-      keepalive:
-        time: "30s"
-        timeout: "10s"
-        permitWithoutStream: true
-
-    # HyperFleet HTTP API client
-    hyperfleetApi:
-      baseUrl: http://hyperfleet-api:8000
-      version: v1
-      timeout: 2s
-      retryAttempts: 3
-      retryBackoff: exponential
-
-    # Broker consumer configuration (adapter-level)
-    broker:
-      subscriptionId: "amarin-ns1-clusters-validation-gcp-adapter"
-      topic: "amarin-ns1-clusters"
-
-    # Kubernetes client (for direct K8s resources)
-    kubernetes:
-      apiVersion: "v1"
-      # Uses in-cluster service account by default
-      # Set kubeConfigPath for out-of-cluster access
-      kubeConfigPath: PATH_TO_KUBECONFIG_FILE
-      # Optional rate limits (0 uses defaults)
-      qps: 100
-      burst: 200
diff --git a/configs/adapter-task-config-template.yaml b/configs/adapter-task-config-template.yaml
deleted file mode 100644
index eb0e6e3..0000000
--- a/configs/adapter-task-config-template.yaml
+++ /dev/null
@@ -1,296 +0,0 @@
-# HyperFleet Adapter Task Configuration Template (MVP)
-#
-# This is a Configuration Template for configuring cloud provider adapters
-# using the HyperFleet Adapter Framework with CEL (Common Expression Language).
-#
-# TEMPLATE SYNTAX:
-# ================
-# 1. Go Templates ({{ .var }}) - Variable interpolation throughout
-# 2. field: "path" - Simple JSON path extraction (translated to CEL internally)
-# 3. expression: "cel" - Full CEL expressions for complex logic
-#
-# CONDITION SYNTAX (when:):
-# =========================
-# Option 1: Expression syntax (CEL)
-#   when:
-#     expression: |
-#       readyConditionStatus == "False"
-#
-# Option 2: Structured conditions (field + operator + value)
-#   when:
-#     conditions:
-#       - field: "readyConditionStatus"
-#         operator: "equals"
-#         value: "Terminating"
-#
-# Supported operators: equals, notEquals, in, notIn, contains, greaterThan, lessThan, exists
-#
-# CEL OPTIONAL CHAINING:
-# ======================
-# Use optional chaining with orValue() to safely access potentially missing fields:
-#   resources.?clusterNamespace.?status.?phase.orValue("")
-#   adapter.?executionStatus.orValue("")
-#
-# Copy this file to your adapter repository and customize for your needs.
-
-apiVersion: hyperfleet.redhat.com/v1alpha1
-kind: AdapterTaskConfig
-metadata:
-  # Adapter name (used as resource name and in logs/metrics)
-  name: example-adapter
-  labels:
-    hyperfleet.io/adapter-type: example
-    hyperfleet.io/component: adapter
-
-# ============================================================================
-# Task Specification
-# ============================================================================
-spec:
-  # ==========================================================================
-  # Global params
-  # ==========================================================================
-  # params to extract from CloudEvent and environment variables
-  #
-  # SUPPORTED TYPES:
-  # ================
-  # - string: Default, any value converted to string
-  # - int/int64: Integer value (strings parsed, floats truncated)
-  # - float/float64: Floating point value
-  # - bool: Boolean (supports: true/false, yes/no, on/off, 1/0)
-
-  params:
-    # Environment variables from deployment
-    - name: "hyperfleetApiBaseUrl"
-      source: "env.HYPERFLEET_API_BASE_URL"
-      type: "string"
-      description: "Base URL for the HyperFleet API"
-      required: true
-
-    - name: "hyperfleetApiVersion"
-      source: "env.HYPERFLEET_API_VERSION"
-      type: "string"
-      default: "v1"
-      description: "API version to use"
-
-    # Extract from CloudEvent data
-    - name: "clusterId"
-      source: "event.id"
-      type: "string"
-      description: "Unique identifier for the target cluster"
-      required: true
-
-    # Example: Extract and convert to int
-    # - name: "nodeCount"
-    #   source: "event.spec.nodeCount"
-    #   type: "int"
-    #   default: 3
-    #   description: "Number of nodes in the cluster"
-
-    # Example: Extract and convert to bool
-    # - name: "enableFeature"
-    #   source: "env.ENABLE_FEATURE"
-    #   type: "bool"
-    #   default: false
-    #   description: "Enable experimental feature"
-
-  # ==========================================================================
-  # Global Preconditions
-  # ==========================================================================
-  # These preconditions run sequentially and validate cluster state before resource operations.
-  #
-  # DATA SCOPES:
-  # ============
-  # Capture scope (field/expression): API response data only
-  #   - Access: status.conditions, items[0].name, etc.
-  #
-  # Conditions scope (conditions/expression): Full execution context
-  #   - params.*       : Original extracted params
-  #   - <name>.*       : Full API response (e.g., clusterStatus.status.conditions)
-  #   - capturedField  : Explicitly captured values
-  #   - adapter.*      : Adapter metadata
-  #   - resources.*    : Created resources (empty during preconditions)
-
-  preconditions:
-    # ========================================================================
-    # Step 1: Get cluster status
-    # ========================================================================
-    - name: "clusterStatus"
-      apiCall:
-        method: "GET"
-        # NOTE: API path includes /api/hyperfleet/ prefix
-        url: "{{ .hyperfleetApiBaseUrl }}/api/hyperfleet/{{ .hyperfleetApiVersion }}/clusters/{{ .clusterId }}"
-        timeout: 10s
-        retryAttempts: 3
-        retryBackoff: "exponential"
-      # Capture fields from the API response. Captured values become variables for use in resources section.
-      # SCOPE: API response data only
-      # Supports two modes:
-      #   - field: Simple dot notation or JSONPath expression for extracting values
-      #   - expression: CEL expression for computed values
-      # Only one of 'field' or 'expression' can be set per capture.
-      capture:
-        # Simple dot notation
-        - name: "clusterName"
-          field: "name"
-        - name: "readyConditionStatus"
-          expression: |
-            status.conditions.filter(c, c.type == "Ready").size() > 0
-            ? status.conditions.filter(c, c.type == "Ready")[0].status
-            : "False"
-        - name: "generationId"
-          field: "generation"
-
-        # JSONPath for complex extraction (filter by field value)
-        # See: https://kubernetes.io/docs/reference/kubectl/jsonpath/
-        # - name: "lzNamespaceStatus"
-        #   field: "{.items[?(@.adapter=='landing-zone-adapter')].data.namespace.status}"
-
-        # CEL expression for computed values
-        # - name: "activeItemCount"
-        #   expression: "items.filter(i, i.status == 'active').size()"
-
-      # Conditions to check. SCOPE: Full execution context
-      # You can access:
-      #   - Captured values: readyConditionStatus, clusterName, etc.
-      #   - Full API response: clusterStatus.status.conditions, clusterStatus.spec.nodeCount
-      #   - Params: clusterId, hyperfleetApiBaseUrl, etc.
-      conditions:
-        # Using captured value
-        - field: "readyConditionStatus"
-          operator: "equals"
-          value: "True"
-
-        # Or dig directly into API response using precondition name
-        # - field: "clusterStatus.status.nodeCount"
-        #   operator: "greaterThan"
-        #   value: 0
-
-      # Alternative: CEL expression with full access
-      # expression: |
-      #   clusterStatus.status.conditions.filter(c, c.type == "Ready")[0].status == "True" &&
-      #   clusterStatus.spec.nodeCount > 0
-
-  # ==========================================================================
-  # Resources (Create/Update Resources)
-  # ==========================================================================
-  # All resources are created/updated sequentially in the order defined below
-  resources:
-    # ========================================================================
-    # Resource 1: Cluster Namespace
-    # ========================================================================
-    - name: "clusterNamespace"
-      manifest:
-        apiVersion: v1
-        kind: Namespace
-        metadata:
-          # Use | lower to ensure valid K8s resource name (lowercase RFC 1123)
-          name: "{{ .clusterId | lower }}"
-          labels:
-            hyperfleet.io/cluster-id: "{{ .clusterId }}"
-            hyperfleet.io/managed-by: "{{ .metadata.name }}"
-            hyperfleet.io/resource-type: "namespace"
-          annotations:
-            hyperfleet.io/created-by: "hyperfleet-adapter"
-            hyperfleet.io/generation: "{{ .generationId }}"
-      discovery:
-        # The "namespace" field within discovery is optional:
-        #   - For namespaced resources: set namespace to target the specific namespace
-        #   - For cluster-scoped resources (like Namespace, ClusterRole): omit or leave empty
-        # Here we omit it since Namespace is cluster-scoped
-        bySelectors:
-          labelSelector:
-            hyperfleet.io/resource-type: "namespace"
-            hyperfleet.io/cluster-id: "{{ .clusterId }}"
-            hyperfleet.io/managed-by: "{{ .metadata.name }}"
-
-  # ==========================================================================
-  # Post-Processing
-  # ==========================================================================
-  post:
-    payloads:
-      # Build status payload inline
-      - name: "clusterStatusPayload"
-        build:
-          # Adapter name for tracking which adapter reported this status
-          adapter: "{{ .metadata.name }}"
-
-          # Conditions array - each condition has type, status, reason, message
-          # Use CEL optional chaining ?.orValue() for safe field access
-          conditions:
-            # Applied: Resources successfully created
-            - type: "Applied"
-              status:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("") == "Active" ? "True" : "False"
-              reason:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("") == "Active"
-                  ? "NamespaceCreated"
-                  : "NamespacePending"
-              message:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("") == "Active"
-                  ? "Namespace created successfully"
-                  : "Namespace creation in progress"
-
-            # Available: Resources are active and ready
-            - type: "Available"
-              status:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("") == "Active" ? "True" : "False"
-              reason:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("") == "Active" ? "NamespaceReady" : "NamespaceNotReady"
-              message:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("") == "Active" ? "Namespace is active and ready" : "Namespace is not active and ready"
-
-            # Health: Adapter execution status (runtime). No need to update this; it can be reused from the adapter config.
-            - type: "Health"
-              status:
-                expression: |
-                  adapter.?executionStatus.orValue("") == "success" ? "True" : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
-              reason:
-                expression: |
-                  adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy"
-              message:
-                expression: |
-                  adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations completed successfully"
-
-          # Use CEL expression for numeric fields to preserve type (not Go template which outputs strings)
-          observed_generation:
-            expression: "generationId"
-
-          # Use Go template with now and date functions for timestamps
-          observed_time: "{{ now | date \"2006-01-02T15:04:05Z07:00\" }}"
-
-          # Optional data field for adapter-specific metrics extracted from resources
-          data:
-            namespace:
-              name:
-                expression: |
-                  resources.?clusterNamespace.?metadata.?name.orValue("")
-              status:
-                expression: |
-                  resources.?clusterNamespace.?status.?phase.orValue("")
-
-  # ==========================================================================
-  # Post Actions
-  # ==========================================================================
-  # Post actions are executed after resources are created/updated
-  postActions:
-    # Report cluster status to HyperFleet API (always executed)
-    - name: "reportClusterStatus"
-      apiCall:
-        method: "POST"
-        # NOTE: API path includes /api/hyperfleet/ prefix and ends with /statuses
-        url: "{{ .hyperfleetApiBaseUrl }}/api/hyperfleet/{{ .hyperfleetApiVersion }}/clusters/{{ .clusterId }}/statuses"
-        body: "{{ .clusterStatusPayload }}"
-        timeout: 30s
-        retryAttempts: 3
-        retryBackoff: "exponential"
-        headers:
-          - name: "Content-Type"
-            value: "application/json"
diff --git a/configs/broker-configmap-pubsub-template.yaml b/configs/broker-configmap-pubsub-template.yaml
deleted file mode 100644
index 1fedc6d..0000000
--- a/configs/broker-configmap-pubsub-template.yaml
+++ /dev/null
@@ -1,187 +0,0 @@
-# Broker ConfigMap Template for Google Pub/Sub
-# This ConfigMap provides broker configuration for the hyperfleet-adapter to consume CloudEvents from Google Pub/Sub
-#
-# Usage:
-#   1. Copy this template and customize values for your environment
-#   2. Apply to your Kubernetes cluster: kubectl apply -f broker-configmap-pubsub.yaml
-#   3. Mount as a file in your adapter deployment (see example below)
-
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: hyperfleet-broker-config
-  namespace: hyperfleet-system
-  labels:
-    app.kubernetes.io/name: hyperfleet-adapter
-    app.kubernetes.io/component: broker-config
-    hyperfleet.io/broker-type: googlepubsub
-data:
-  # ==========================================================================
-  # Broker Configuration (broker.yaml)
-  # ==========================================================================
-  # Note: Adapter broker topic/subscription are configured in the adapter
-  #       deployment config under spec.clients.broker.
- # This is the standard configuration format for hyperfleet-broker library - # Mount this file at /etc/broker/broker.yaml (or set BROKER_CONFIG_FILE) - # - # Note: You can override any setting using environment variables: - # BROKER_TYPE=googlepubsub - # BROKER_GOOGLEPUBSUB_PROJECT_ID=my-project - # SUBSCRIBER_PARALLELISM=5 - broker.yaml: | - # Set to true to log the loaded configuration on startup (useful for debugging) - log_config: false - - broker: - # Broker type: "rabbitmq" or "googlepubsub" - type: googlepubsub - - # Google Pub/Sub Configuration - googlepubsub: - # ==== Connection Settings (required) ==== - # GCP Project ID - project_id: "my-gcp-project" - - # ==== Subscription Settings ==== - # Time for subscriber to acknowledge message (10-600 seconds, default: 10) - ack_deadline_seconds: 60 - - # How long to retain unacknowledged messages (10m to 31d, default: 7d) - # Format: "Ns" (seconds), "Nm" (minutes), "Nh" (hours), "Nd" (days) - message_retention_duration: "604800s" # 7 days - - # Time of inactivity before subscription is deleted (min 1d, or 0 = never expire) - expiration_ttl: "2678400s" # 31 days - - # Enable ordered message delivery by ordering key - enable_message_ordering: false - - # ==== Retry Policy ==== - # Retry policy for failed message delivery (0s to 600s) - retry_min_backoff: "10s" - retry_max_backoff: "600s" - - # ==== Dead Letter Settings ==== - # Dead letter topic for messages that fail repeatedly - # If create_topic_if_missing is true, a dead letter topic named "{subscription_id}-dlq" - # will be created automatically - # dead_letter_topic: "my-dead-letter-topic" # Optional: customize dead letter topic name - dead_letter_max_attempts: 5 # 5-100, default: 5 - - # ==== Topic Settings ==== - # How long the topic retains messages for replay scenarios (0 = disabled) - # topic_retention_duration: "86400s" # 1 day - - # ==== Receive Settings (client-side flow control) ==== - max_outstanding_messages: 1000 - max_outstanding_bytes: 
104857600 # 100MB - num_goroutines: 10 - - # ==== Behavior Flags ==== - # Default: false - infrastructure must exist (recommended for production) - # Set to true to automatically create topics/subscriptions if they don't exist - create_topic_if_missing: true - create_subscription_if_missing: true - - # Subscriber Configuration - subscriber: - # Number of parallel workers for processing messages (default: 1) - parallelism: 10 - ---- -# ============================================================================ -# Example Deployment -# ============================================================================ -# apiVersion: apps/v1 -# kind: Deployment -# metadata: -# name: hyperfleet-adapter -# namespace: hyperfleet-system -# spec: -# replicas: 1 -# selector: -# matchLabels: -# app: hyperfleet-adapter -# template: -# metadata: -# labels: -# app: hyperfleet-adapter -# spec: -# serviceAccountName: hyperfleet-adapter -# containers: -# - name: adapter -# image: quay.io/openshift-hyperfleet/hyperfleet-adapter:latest -# imagePullPolicy: Always -# env: -# # Adapter-specific configuration -# - name: BROKER_SUBSCRIPTION_ID -# valueFrom: -# configMapKeyRef: -# name: hyperfleet-broker-config -# key: BROKER_SUBSCRIPTION_ID -# - name: BROKER_TOPIC -# valueFrom: -# configMapKeyRef: -# name: hyperfleet-broker-config -# key: BROKER_TOPIC -# # Point to broker config file -# - name: BROKER_CONFIG_FILE -# value: /etc/broker/broker.yaml -# # Optional: Override broker.yaml settings with environment variables -# # - name: BROKER_GOOGLEPUBSUB_PROJECT_ID -# # value: "my-other-project" -# # - name: SUBSCRIBER_PARALLELISM -# # value: "5" -# volumeMounts: -# - name: broker-config -# mountPath: /etc/broker -# readOnly: true -# volumes: -# - name: broker-config -# configMap: -# name: hyperfleet-broker-config -# items: -# - key: broker.yaml -# path: broker.yaml - ---- -# ============================================================================ -# Optional: GCP Service Account Secret (if not 
using Workload Identity) -# ============================================================================ -# If running outside GKE or not using Workload Identity, create a secret -# with your GCP service account key: -# -# apiVersion: v1 -# kind: Secret -# metadata: -# name: gcp-service-account-key -# namespace: hyperfleet-system -# type: Opaque -# stringData: -# key.json: | -# { -# "type": "service_account", -# "project_id": "my-gcp-project", -# "private_key_id": "...", -# "private_key": "...", -# "client_email": "...", -# ... -# } -# -# Then mount it in your deployment: -# spec: -# template: -# spec: -# containers: -# - name: adapter -# env: -# - name: GOOGLE_APPLICATION_CREDENTIALS -# value: /var/secrets/google/key.json -# volumeMounts: -# - name: gcp-credentials -# mountPath: /var/secrets/google -# readOnly: true -# volumes: -# - name: gcp-credentials -# secret: -# secretName: gcp-service-account-key diff --git a/configs/templates/cluster-status-payload.yaml b/configs/templates/cluster-status-payload.yaml deleted file mode 100644 index c6b5f90..0000000 --- a/configs/templates/cluster-status-payload.yaml +++ /dev/null @@ -1,16 +0,0 @@ -# Cluster Status Payload Template -# Used for reporting cluster status back to HyperFleet API -status: "{{ .status }}" -message: "{{ .message }}" -observedGeneration: "{{ .generationSpec }}" -lastUpdated: "{{ now | date \"2006-01-02T15:04:05Z07:00\" }}" -conditions: - - type: "Ready" - status: "{{ .readyStatus | default \"Unknown\" }}" - reason: "{{ .readyReason | default \"Pending\" }}" - message: "{{ .readyMessage | default \"Cluster status is being determined\" }}" - - type: "Configured" - status: "{{ .configuredStatus | default \"Unknown\" }}" - reason: "{{ .configuredReason | default \"Pending\" }}" - message: "{{ .configuredMessage | default \"Configuration is being applied\" }}" - diff --git a/configs/templates/deployment.yaml b/configs/templates/deployment.yaml deleted file mode 100644 index 4e76962..0000000 --- 
a/configs/templates/deployment.yaml +++ /dev/null @@ -1,37 +0,0 @@ -# Cluster Controller Deployment Template -apiVersion: apps/v1 -kind: Deployment -metadata: - name: "cluster-controller-{{ .clusterId }}" - namespace: "cluster-{{ .clusterId }}" - labels: - hyperfleet.io/cluster-id: "{{ .clusterId }}" - hyperfleet.io/component: "controller" -spec: - replicas: 1 - selector: - matchLabels: - hyperfleet.io/cluster-id: "{{ .clusterId }}" - hyperfleet.io/component: "controller" - template: - metadata: - labels: - hyperfleet.io/cluster-id: "{{ .clusterId }}" - hyperfleet.io/component: "controller" - spec: - containers: - - name: controller - image: "quay.io/hyperfleet/controller:{{ .imageTag }}" - env: - - name: CLUSTER_ID - value: "{{ .clusterId }}" - - name: RESOURCE_ID - value: "{{ .resourceId }}" - resources: - requests: - cpu: "100m" - memory: "128Mi" - limits: - cpu: "500m" - memory: "512Mi" - diff --git a/configs/templates/job.yaml b/configs/templates/job.yaml deleted file mode 100644 index 196dcfe..0000000 --- a/configs/templates/job.yaml +++ /dev/null @@ -1,29 +0,0 @@ -# Validation Job Template -# This job is used to validate cluster configuration -apiVersion: batch/v1 -kind: Job -metadata: - name: "validation-{{ .clusterId }}" - namespace: "cluster-{{ .clusterId }}" - labels: - hyperfleet.io/cluster-id: "{{ .clusterId }}" - hyperfleet.io/job-type: "validation" - hyperfleet.io/resource-type: "job" - hyperfleet.io/managed-by: "{{ .metadata.name }}" -spec: - template: - metadata: - labels: - hyperfleet.io/cluster-id: "{{ .clusterId }}" - hyperfleet.io/job-type: "validation" - spec: - restartPolicy: Never - containers: - - name: validator - image: "quay.io/hyperfleet/validator:v1.0.0" - env: - - name: CLUSTER_ID - value: "{{ .clusterId }}" - - name: GENERATION_ID - value: "{{ .generationSpec }}" - diff --git a/internal/client_factory/api_client.go b/internal/client_factory/api_client.go new file mode 100644 index 0000000..9352a6e --- /dev/null +++ 
b/internal/client_factory/api_client.go @@ -0,0 +1,56 @@ +package client_factory + +import ( + "fmt" + + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/config_loader" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/hyperfleet_api" + "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger" +) + +// CreateAPIClient creates a HyperFleet API client from the config +func CreateAPIClient(apiConfig config_loader.HyperfleetAPIConfig, log logger.Logger) (hyperfleet_api.Client, error) { + var opts []hyperfleet_api.ClientOption + + // Set base URL if configured (env fallback handled in NewClient) + if apiConfig.BaseURL != "" { + opts = append(opts, hyperfleet_api.WithBaseURL(apiConfig.BaseURL)) + } + + // Set timeout if configured (0 means use default) + if apiConfig.Timeout > 0 { + opts = append(opts, hyperfleet_api.WithTimeout(apiConfig.Timeout)) + } + + // Set retry attempts + if apiConfig.RetryAttempts > 0 { + opts = append(opts, hyperfleet_api.WithRetryAttempts(apiConfig.RetryAttempts)) + } + + // Set retry backoff strategy + if apiConfig.RetryBackoff != "" { + switch apiConfig.RetryBackoff { + case hyperfleet_api.BackoffExponential, hyperfleet_api.BackoffLinear, hyperfleet_api.BackoffConstant: + opts = append(opts, hyperfleet_api.WithRetryBackoff(apiConfig.RetryBackoff)) + default: + return nil, fmt.Errorf("invalid retry backoff strategy %q (supported: exponential, linear, constant)", apiConfig.RetryBackoff) + } + } + + // Set retry base delay + if apiConfig.BaseDelay > 0 { + opts = append(opts, hyperfleet_api.WithBaseDelay(apiConfig.BaseDelay)) + } + + // Set retry max delay + if apiConfig.MaxDelay > 0 { + opts = append(opts, hyperfleet_api.WithMaxDelay(apiConfig.MaxDelay)) + } + + // Set default headers + for key, value := range apiConfig.DefaultHeaders { + opts = append(opts, hyperfleet_api.WithDefaultHeader(key, value)) + } + + return hyperfleet_api.NewClient(log, opts...) 
+} diff --git a/internal/client_factory/k8s_client.go b/internal/client_factory/k8s_client.go new file mode 100644 index 0000000..1ae14b9 --- /dev/null +++ b/internal/client_factory/k8s_client.go @@ -0,0 +1,19 @@ +package client_factory + +import ( + "context" + + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/config_loader" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/k8s_client" + "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger" +) + +// CreateK8sClient creates a Kubernetes client from the config +func CreateK8sClient(ctx context.Context, k8sConfig config_loader.KubernetesConfig, log logger.Logger) (*k8s_client.Client, error) { + clientConfig := k8s_client.ClientConfig{ + KubeConfigPath: k8sConfig.KubeConfigPath, + QPS: k8sConfig.QPS, + Burst: k8sConfig.Burst, + } + return k8s_client.NewClient(ctx, clientConfig, log) +} diff --git a/internal/client_factory/maestro_client.go b/internal/client_factory/maestro_client.go new file mode 100644 index 0000000..6eecc3a --- /dev/null +++ b/internal/client_factory/maestro_client.go @@ -0,0 +1,42 @@ +package client_factory + +import ( + "context" + "fmt" + "time" + + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/config_loader" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/maestro_client" + "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger" +) + +// CreateMaestroClient creates a Maestro client from the config +func CreateMaestroClient(ctx context.Context, maestroConfig *config_loader.MaestroClientConfig, log logger.Logger) (*maestro_client.Client, error) { + clientConfig := &maestro_client.Config{ + MaestroServerAddr: maestroConfig.HTTPServerAddress, + GRPCServerAddr: maestroConfig.GRPCServerAddress, + SourceID: maestroConfig.SourceID, + Insecure: maestroConfig.Insecure, + } + + // Parse timeout if specified + if maestroConfig.Timeout != "" { + timeout, err := time.ParseDuration(maestroConfig.Timeout) + if err != nil { + return nil, 
fmt.Errorf("invalid maestro timeout %q: %w", maestroConfig.Timeout, err) + } + clientConfig.HTTPTimeout = timeout + } + + // Configure TLS if auth type is "tls" + if maestroConfig.Auth.Type == "tls" { + if maestroConfig.Auth.TLSConfig == nil { + return nil, fmt.Errorf("maestro auth type is 'tls' but tlsConfig is not provided") + } + clientConfig.CAFile = maestroConfig.Auth.TLSConfig.CAFile + clientConfig.ClientCertFile = maestroConfig.Auth.TLSConfig.CertFile + clientConfig.ClientKeyFile = maestroConfig.Auth.TLSConfig.KeyFile + } + + return maestro_client.NewMaestroClient(ctx, clientConfig, log) +} diff --git a/internal/config_loader/accessors.go b/internal/config_loader/accessors.go index b17b2bc..e26e872 100644 --- a/internal/config_loader/accessors.go +++ b/internal/config_loader/accessors.go @@ -167,6 +167,29 @@ func (c *Config) ResourceNames() []string { return names } +// ----------------------------------------------------------------------------- +// NamedManifest Accessors +// ----------------------------------------------------------------------------- + +// GetManifestContent returns manifest content, preferring loaded ref content +func (nm *NamedManifest) GetManifestContent() interface{} { + if nm == nil { + return nil + } + if nm.ManifestRefContent != nil { + return nm.ManifestRefContent + } + return nm.Manifest +} + +// HasManifestRef returns true if using file reference +func (nm *NamedManifest) HasManifestRef() bool { + if nm == nil { + return false + } + return nm.ManifestRef != "" +} + // ----------------------------------------------------------------------------- // Resource Accessors // ----------------------------------------------------------------------------- diff --git a/internal/config_loader/constants.go b/internal/config_loader/constants.go index 4e3aa31..379063b 100644 --- a/internal/config_loader/constants.go +++ b/internal/config_loader/constants.go @@ -76,8 +76,19 @@ const ( // Resource field names const ( FieldManifest = "manifest" 
+ FieldManifests = "manifests" + FieldManifestRef = "manifestRef" FieldRecreateOnChange = "recreateOnChange" FieldDiscovery = "discovery" + FieldTransport = "transport" +) + +// Transport field names +const ( + FieldClient = "client" + FieldMaestro = "maestro" + FieldTargetCluster = "targetCluster" + FieldManifestWork = "manifestWork" ) // Manifest reference field names diff --git a/internal/config_loader/loader.go b/internal/config_loader/loader.go index 30d3650..d3801e9 100644 --- a/internal/config_loader/loader.go +++ b/internal/config_loader/loader.go @@ -3,9 +3,8 @@ package config_loader import ( "fmt" "os" - "path/filepath" - "strings" + "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/utils" "gopkg.in/yaml.v3" ) @@ -198,17 +197,39 @@ func loadTaskConfigFileReferences(config *AdapterTaskConfig, baseDir string) err for i := range config.Spec.Resources { resource := &config.Spec.Resources[i] ref := resource.GetManifestRef() - if ref == "" { - continue + if ref != "" { + content, err := loadYAMLFile(baseDir, ref) + if err != nil { + return fmt.Errorf("%s.%s[%d].%s.%s: %w", FieldSpec, FieldResources, i, FieldManifest, FieldRef, err) + } + + // Replace manifest with loaded content + resource.Manifest = content } - content, err := loadYAMLFile(baseDir, ref) - if err != nil { - return fmt.Errorf("%s.%s[%d].%s.%s: %w", FieldSpec, FieldResources, i, FieldManifest, FieldRef, err) + // Load manifestRef in manifests array (Maestro transport) + for j := range resource.Manifests { + namedManifest := &resource.Manifests[j] + if namedManifest.ManifestRef != "" { + content, err := loadYAMLFile(baseDir, namedManifest.ManifestRef) + if err != nil { + return fmt.Errorf("%s.%s[%d].%s[%d].%s: %w", + FieldSpec, FieldResources, i, FieldManifests, j, FieldManifestRef, err) + } + namedManifest.ManifestRefContent = content + } } - // Replace manifest with loaded content - resource.Manifest = content + // Load transport.maestro.manifestWork.ref + if resource.Transport != nil && 
resource.Transport.Maestro != nil && resource.Transport.Maestro.ManifestWork != nil { + if resource.Transport.Maestro.ManifestWork.Ref != "" { + content, err := loadYAMLFile(baseDir, resource.Transport.Maestro.ManifestWork.Ref) + if err != nil { + return fmt.Errorf("%s.%s[%d].%s.%s.%s.%s: %w", FieldSpec, FieldResources, i, FieldTransport, FieldMaestro, FieldManifestWork, FieldRef, err) + } + resource.Transport.Maestro.ManifestWork.RefContent = content + } + } } // Load buildRef in spec.post.payloads @@ -248,31 +269,9 @@ func loadYAMLFile(baseDir, refPath string) (map[string]interface{}, error) { return content, nil } -// resolvePath resolves a relative path against the base directory and validates -// that the resolved path does not escape the base directory. +// resolvePath resolves a path against the base directory. +// - Absolute paths are returned as-is (allows mounting files from /etc/adapter, etc.) +// - Relative paths are resolved against the base directory and validated to not escape it func resolvePath(baseDir, refPath string) (string, error) { - baseAbs, err := filepath.Abs(baseDir) - if err != nil { - return "", fmt.Errorf("failed to resolve base directory: %w", err) - } - baseClean := filepath.Clean(baseAbs) - - var targetPath string - if filepath.IsAbs(refPath) { - targetPath = filepath.Clean(refPath) - } else { - targetPath = filepath.Clean(filepath.Join(baseClean, refPath)) - } - - // Check if target path is within base directory - rel, err := filepath.Rel(baseClean, targetPath) - if err != nil { - return "", fmt.Errorf("path %q escapes base directory", refPath) - } - - if strings.HasPrefix(rel, "..") { - return "", fmt.Errorf("path %q escapes base directory", refPath) - } - - return targetPath, nil + return utils.ResolveSecurePath(baseDir, refPath) } diff --git a/internal/config_loader/loader_test.go b/internal/config_loader/loader_test.go index 624113c..632e825 100644 --- a/internal/config_loader/loader_test.go +++ 
b/internal/config_loader/loader_test.go @@ -18,10 +18,10 @@ func createTestConfigFiles(t *testing.T, tmpDir string, adapterYAML, taskYAML st adapterPath = filepath.Join(tmpDir, "adapter-config.yaml") taskPath = filepath.Join(tmpDir, "task-config.yaml") - err := os.WriteFile(adapterPath, []byte(adapterYAML), 0644) + err := os.WriteFile(adapterPath, []byte(adapterYAML), 0o644) require.NoError(t, err) - err = os.WriteFile(taskPath, []byte(taskYAML), 0644) + err = os.WriteFile(taskPath, []byte(taskYAML), 0o644) require.NoError(t, err) return adapterPath, taskPath @@ -95,7 +95,7 @@ spec: assert.Equal(t, "hyperfleet.redhat.com/v1alpha1", config.APIVersion) assert.Equal(t, "Config", config.Kind) // Metadata comes from task config - assert.Equal(t, "test-adapter", config.Metadata.Name) + assert.Equal(t, "deployment-config", config.Metadata.Name) // Adapter info comes from adapter config assert.Equal(t, "0.1.0", config.Spec.Adapter.Version) // Clients config comes from adapter config @@ -119,7 +119,7 @@ kind: AdapterTaskConfig metadata: name: test-adapter spec: {} -`), 0644) +`), 0o644) require.NoError(t, err) config, err := LoadConfig( @@ -147,7 +147,7 @@ spec: timeout: 5s kubernetes: apiVersion: v1 -`), 0644) +`), 0o644) require.NoError(t, err) config, err := LoadConfig( @@ -546,7 +546,7 @@ spec: - name: "testNamespace" `, wantError: true, - errorMsg: "manifest is required", + errorMsg: "discovery is required", // manifest validation moved to semantic phase }, } @@ -620,7 +620,7 @@ func TestMergeConfigs(t *testing.T) { assert.Equal(t, "hyperfleet.redhat.com/v1alpha1", merged.APIVersion) assert.Equal(t, "Config", merged.Kind) // Metadata comes from task config - assert.Equal(t, "task-processor", merged.Metadata.Name) + assert.Equal(t, "adapter-deployment", merged.Metadata.Name) // Adapter info from adapter config assert.Equal(t, "1.0.0", merged.Spec.Adapter.Version) // Clients from adapter config @@ -732,9 +732,9 @@ func TestValidateFileReferencesInTaskConfig(t *testing.T) 
{ // Create a test template file templatePath := filepath.Join(tmpDir, "templates") - require.NoError(t, os.MkdirAll(templatePath, 0755)) + require.NoError(t, os.MkdirAll(templatePath, 0o755)) templateFile := filepath.Join(templatePath, "test-template.yaml") - require.NoError(t, os.WriteFile(templateFile, []byte("test: value"), 0644)) + require.NoError(t, os.WriteFile(templateFile, []byte("test: value"), 0o644)) tests := []struct { name string @@ -892,11 +892,11 @@ func TestLoadConfigWithFileReferences(t *testing.T) { // Create a template file templateDir := filepath.Join(tmpDir, "templates") - require.NoError(t, os.MkdirAll(templateDir, 0755)) + require.NoError(t, os.MkdirAll(templateDir, 0o755)) templateFile := filepath.Join(templateDir, "status-payload.yaml") require.NoError(t, os.WriteFile(templateFile, []byte(` status: "{{ .status }}" -`), 0644)) +`), 0o644)) // Create adapter config file adapterYAML := ` @@ -915,7 +915,7 @@ spec: apiVersion: "v1" ` adapterPath := filepath.Join(tmpDir, "adapter-config.yaml") - require.NoError(t, os.WriteFile(adapterPath, []byte(adapterYAML), 0644)) + require.NoError(t, os.WriteFile(adapterPath, []byte(adapterYAML), 0o644)) // Create task config file with buildRef taskYAML := ` @@ -943,7 +943,7 @@ spec: buildRef: "templates/status-payload.yaml" ` taskPath := filepath.Join(tmpDir, "task-config.yaml") - require.NoError(t, os.WriteFile(taskPath, []byte(taskYAML), 0644)) + require.NoError(t, os.WriteFile(taskPath, []byte(taskYAML), 0o644)) // Load should succeed because template file exists config, err := LoadConfig( @@ -986,7 +986,7 @@ spec: buildRef: "templates/nonexistent.yaml" ` taskPathBad := filepath.Join(tmpDir, "task-config-bad.yaml") - require.NoError(t, os.WriteFile(taskPathBad, []byte(taskYAMLBad), 0644)) + require.NoError(t, os.WriteFile(taskPathBad, []byte(taskYAMLBad), 0o644)) // Load should fail because template file doesn't exist config, err = LoadConfig( @@ -1003,14 +1003,14 @@ func TestLoadFileReferencesContent(t 
*testing.T) { // Create temporary directory tmpDir := t.TempDir() templateDir := filepath.Join(tmpDir, "templates") - require.NoError(t, os.MkdirAll(templateDir, 0755)) + require.NoError(t, os.MkdirAll(templateDir, 0o755)) // Create a buildRef template file buildRefFile := filepath.Join(templateDir, "status-payload.yaml") require.NoError(t, os.WriteFile(buildRefFile, []byte(` status: "{{ .status }}" message: "Operation completed" -`), 0644)) +`), 0o644)) // Create a manifest.ref template file manifestRefFile := filepath.Join(templateDir, "deployment.yaml") @@ -1022,7 +1022,7 @@ metadata: namespace: "{{ .namespace }}" spec: replicas: 1 -`), 0644)) +`), 0o644)) // Create adapter config adapterYAML := ` @@ -1041,7 +1041,7 @@ spec: apiVersion: "v1" ` adapterPath := filepath.Join(tmpDir, "adapter-config.yaml") - require.NoError(t, os.WriteFile(adapterPath, []byte(adapterYAML), 0644)) + require.NoError(t, os.WriteFile(adapterPath, []byte(adapterYAML), 0o644)) // Create task config file with both buildRef and manifest.ref taskYAML := ` @@ -1068,7 +1068,7 @@ spec: buildRef: "templates/status-payload.yaml" ` taskPath := filepath.Join(tmpDir, "task-config.yaml") - require.NoError(t, os.WriteFile(taskPath, []byte(taskYAML), 0644)) + require.NoError(t, os.WriteFile(taskPath, []byte(taskYAML), 0o644)) // Load config config, err := LoadConfig( diff --git a/internal/config_loader/struct_validator.go b/internal/config_loader/struct_validator.go index f1bb3ae..4554dd5 100644 --- a/internal/config_loader/struct_validator.go +++ b/internal/config_loader/struct_validator.go @@ -97,11 +97,27 @@ func validateOperator(fl validator.FieldLevel) bool { } // validateParameterEnvRequired is a struct-level validator for Parameter. -// Checks that required env params have their environment variables set. +// Checks that required params have their source starting with "event.", "env.", or "config.". 
func validateParameterEnvRequired(sl validator.StructLevel) { param := sl.Current().Interface().(Parameter) //nolint:errcheck // type is guaranteed by RegisterStructValidation - // Only validate if Required=true and Source starts with "env." + if !param.Required { + return + } + + validPrefixes := []string{"event.", "env.", "config."} + hasValidPrefix := false + for _, prefix := range validPrefixes { + if strings.HasPrefix(param.Source, prefix) { + hasValidPrefix = true + break + } + } + + if !hasValidPrefix { + sl.ReportError(param.Source, "source", "Source", "invalidsourceprefix", param.Source) + } + if !param.Required || !strings.HasPrefix(param.Source, "env.") { return } diff --git a/internal/config_loader/types.go b/internal/config_loader/types.go index 09c541f..540932c 100644 --- a/internal/config_loader/types.go +++ b/internal/config_loader/types.go @@ -48,7 +48,7 @@ func (c *Config) GetMetadata() Metadata { } // Merge combines AdapterConfig (deployment) and AdapterTaskConfig (task) into a unified Config. -// The metadata is taken from the task config since it contains the adapter task name. +// The metadata is taken from the config since it contains the adapter name. // The adapter info and clients come from the deployment config. // The params, preconditions, resources, and post-processing come from the task config. 
func Merge(adapterCfg *AdapterConfig, taskCfg *AdapterTaskConfig) *Config { @@ -59,7 +59,7 @@ func Merge(adapterCfg *AdapterConfig, taskCfg *AdapterTaskConfig) *Config { return &Config{ APIVersion: adapterCfg.APIVersion, Kind: ExpectedKindConfig, - Metadata: taskCfg.Metadata, // Use task metadata for adapter name + Metadata: adapterCfg.Metadata, // Use config metadata for adapter name Spec: ConfigSpec{ // From deployment config Adapter: adapterCfg.Spec.Adapter, @@ -293,14 +293,63 @@ func (c *Condition) UnmarshalYAML(unmarshal func(interface{}) error) error { return nil } +// NamedManifest represents a manifest with an identifying name within a Maestro resource. +// Used for bundling multiple manifests into a single ManifestWork. +type NamedManifest struct { + Name string `yaml:"name" validate:"required,resourcename"` + Manifest interface{} `yaml:"manifest,omitempty"` + ManifestRef string `yaml:"manifestRef,omitempty"` + // ManifestRefContent is populated by loader from ManifestRef file + ManifestRefContent map[string]interface{} `yaml:"-"` +} + // Resource represents a Kubernetes resource configuration type Resource struct { Name string `yaml:"name" validate:"required,resourcename"` - Manifest interface{} `yaml:"manifest,omitempty" validate:"required"` + Transport *TransportConfig `yaml:"transport,omitempty"` + Manifest interface{} `yaml:"manifest,omitempty"` // For Kubernetes transport + Manifests []NamedManifest `yaml:"manifests,omitempty"` // For Maestro transport (multiple manifests bundled in one ManifestWork) RecreateOnChange bool `yaml:"recreateOnChange,omitempty"` Discovery *DiscoveryConfig `yaml:"discovery,omitempty" validate:"required"` } +// TransportClientType represents the transport client type +type TransportClientType string + +const ( + // TransportClientKubernetes indicates direct Kubernetes API transport + TransportClientKubernetes TransportClientType = "kubernetes" + // TransportClientMaestro indicates Maestro ManifestWork transport + 
TransportClientMaestro TransportClientType = "maestro" +) + +// TransportConfig defines transport configuration for a resource +type TransportConfig struct { + Client TransportClientType `yaml:"client,omitempty" validate:"omitempty,oneof=kubernetes maestro"` + Maestro *MaestroTransportConfig `yaml:"maestro,omitempty"` +} + +// GetClientType returns the transport client type, defaulting to "kubernetes" +func (t *TransportConfig) GetClientType() TransportClientType { + if t == nil || t.Client == "" { + return TransportClientKubernetes + } + return t.Client +} + +// MaestroTransportConfig contains Maestro-specific transport settings +type MaestroTransportConfig struct { + TargetCluster string `yaml:"targetCluster" validate:"required"` + ManifestWork *ManifestWorkConfig `yaml:"manifestWork,omitempty"` +} + +// ManifestWorkConfig contains ManifestWork-specific settings +type ManifestWorkConfig struct { + Ref string `yaml:"ref,omitempty"` + RefContent map[string]interface{} `yaml:"-"` + Name string `yaml:"name,omitempty"` +} + // DiscoveryConfig represents resource discovery configuration type DiscoveryConfig struct { Namespace string `yaml:"namespace,omitempty"` diff --git a/internal/config_loader/validator.go b/internal/config_loader/validator.go index b2b7fca..8d99427 100644 --- a/internal/config_loader/validator.go +++ b/internal/config_loader/validator.go @@ -122,6 +122,26 @@ func (v *TaskConfigValidator) ValidateFileReferences() error { errors = append(errors, err.Error()) } } + + // Validate manifestRef in manifests array (Maestro transport) + for j, nm := range resource.Manifests { + if nm.ManifestRef != "" { + path := fmt.Sprintf("%s.%s[%d].%s[%d].%s", FieldSpec, FieldResources, i, FieldManifests, j, FieldManifestRef) + if err := v.validateFileExists(nm.ManifestRef, path); err != nil { + errors = append(errors, err.Error()) + } + } + } + + // Validate transport.maestro.manifestWork.ref in spec.resources + if resource.Transport != nil && 
resource.Transport.Maestro != nil && resource.Transport.Maestro.ManifestWork != nil {
+			if resource.Transport.Maestro.ManifestWork.Ref != "" {
+				path := fmt.Sprintf("%s.%s[%d].%s.%s.%s.%s", FieldSpec, FieldResources, i, FieldTransport, FieldMaestro, FieldManifestWork, FieldRef)
+				if err := v.validateFileExists(resource.Transport.Maestro.ManifestWork.Ref, path); err != nil {
+					errors = append(errors, err.Error())
+				}
+			}
+		}
 	}
 
 	if len(errors) > 0 {
@@ -172,7 +192,9 @@ func (v *TaskConfigValidator) ValidateSemantic() error {
 	v.validateCaptureFieldExpressions()
 	v.validateTemplateVariables()
 	v.validateCELExpressions()
+	v.validateManifestFields()
 	v.validateK8sManifests()
+	v.validateTransportConfig()
 
 	if v.errors.HasErrors() {
 		return v.errors
@@ -327,9 +349,21 @@ func (v *TaskConfigValidator) validateTemplateVariables() {
 	// Validate resource manifests
 	for i, resource := range v.config.Spec.Resources {
 		resourcePath := fmt.Sprintf("%s.%s[%d]", FieldSpec, FieldResources, i)
+
+		// Validate single manifest (Kubernetes transport)
 		if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
 			v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
 		}
+
+		// Validate manifests array (Maestro transport)
+		for j, nm := range resource.Manifests {
+			content := nm.GetManifestContent()
+			if manifest, ok := content.(map[string]interface{}); ok {
+				manifestPath := fmt.Sprintf("%s.%s[%d].%s", resourcePath, FieldManifests, j, FieldManifest)
+				v.validateTemplateMap(manifest, manifestPath)
+			}
+		}
+
 		if resource.Discovery != nil {
 			discoveryPath := resourcePath + "." + FieldDiscovery
 			v.validateTemplateString(resource.Discovery.Namespace, discoveryPath+"."+FieldNamespace)
@@ -486,17 +520,68 @@ func (v *TaskConfigValidator) validateBuildExpressions(m map[string]interface{},
 	}
 }
 
+// validateManifestFields validates that manifest/manifests fields are used correctly per transport type
+func (v *TaskConfigValidator) validateManifestFields() {
+	for i, resource := range v.config.Spec.Resources {
+		path := fmt.Sprintf("%s.%s[%d]", FieldSpec, FieldResources, i)
+		clientType := resource.Transport.GetClientType()
+
+		switch clientType {
+		case TransportClientKubernetes:
+			if resource.Manifest == nil {
+				v.errors.Add(path, "kubernetes transport requires 'manifest' field")
+			}
+			if len(resource.Manifests) > 0 {
+				v.errors.Add(path, "kubernetes transport does not support 'manifests' array")
+			}
+		case TransportClientMaestro:
+			if len(resource.Manifests) == 0 {
+				v.errors.Add(path, "maestro transport requires 'manifests' array")
+			}
+			if resource.Manifest != nil {
+				v.errors.Add(path, "maestro transport uses 'manifests' array, not 'manifest'")
+			}
+			// Validate each named manifest has either manifest or manifestRef
+			for j, nm := range resource.Manifests {
+				nmPath := fmt.Sprintf("%s.%s[%d]", path, FieldManifests, j)
+				if nm.Manifest == nil && nm.ManifestRef == "" {
+					v.errors.Add(nmPath, "named manifest requires either 'manifest' or 'manifestRef'")
+				}
+				if nm.Manifest != nil && nm.ManifestRef != "" {
+					v.errors.Add(nmPath, "'manifest' and 'manifestRef' are mutually exclusive")
+				}
+			}
+		}
+	}
+}
+
 func (v *TaskConfigValidator) validateK8sManifests() {
 	for i, resource := range v.config.Spec.Resources {
-		path := fmt.Sprintf("%s.%s[%d].%s", FieldSpec, FieldResources, i, FieldManifest)
+		clientType := resource.Transport.GetClientType()
+
+		// For Kubernetes transport, validate the single manifest field
+		if clientType == TransportClientKubernetes {
+			path := fmt.Sprintf("%s.%s[%d].%s", FieldSpec, FieldResources, i, FieldManifest)
+
+			if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
+				if ref, hasRef := manifest[FieldRef].(string); hasRef {
+					if ref == "" {
+						v.errors.Add(path+"."+FieldRef, "manifest ref cannot be empty")
+					}
+				} else {
+					v.validateK8sManifest(manifest, path)
+				}
+			}
+		}
 
-		if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
-			if ref, hasRef := manifest[FieldRef].(string); hasRef {
-				if ref == "" {
-					v.errors.Add(path+"."+FieldRef, "manifest ref cannot be empty")
+		// For Maestro transport, validate each manifest in the manifests array
+		if clientType == TransportClientMaestro {
+			for j, nm := range resource.Manifests {
+				path := fmt.Sprintf("%s.%s[%d].%s[%d].%s", FieldSpec, FieldResources, i, FieldManifests, j, FieldManifest)
+				content := nm.GetManifestContent()
+				if manifest, ok := content.(map[string]interface{}); ok {
+					v.validateK8sManifest(manifest, path)
 				}
-			} else {
-				v.validateK8sManifest(manifest, path)
 			}
 		}
 	}
@@ -530,6 +615,39 @@ func (v *TaskConfigValidator) validateK8sManifest(manifest map[string]interface{
 	}
 }
 
+// validateTransportConfig validates transport configuration for all resources
+func (v *TaskConfigValidator) validateTransportConfig() {
+	for i, resource := range v.config.Spec.Resources {
+		if resource.Transport == nil {
+			continue
+		}
+
+		basePath := fmt.Sprintf("%s.%s[%d].%s", FieldSpec, FieldResources, i, FieldTransport)
+
+		// Validate maestro config is present when client=maestro
+		if resource.Transport.Client == TransportClientMaestro {
+			if resource.Transport.Maestro == nil {
+				v.errors.Add(basePath, "maestro configuration is required when client is 'maestro'")
+				continue
+			}
+
+			// Validate targetCluster is present and validate its template variables
+			maestroPath := basePath + "." + FieldMaestro
+			if resource.Transport.Maestro.TargetCluster == "" {
+				v.errors.Add(maestroPath, "targetCluster is required")
+			} else {
+				// Validate template variables in targetCluster
+				v.validateTemplateString(resource.Transport.Maestro.TargetCluster, maestroPath+"."+FieldTargetCluster)
+			}
+
+			// Validate manifestWork.name template if present
+			if resource.Transport.Maestro.ManifestWork != nil && resource.Transport.Maestro.ManifestWork.Name != "" {
+				v.validateTemplateString(resource.Transport.Maestro.ManifestWork.Name, maestroPath+"."+FieldManifestWork+"."+FieldName)
+			}
+		}
+	}
+}
+
 // =============================================================================
 // HELPER FUNCTIONS
 // =============================================================================
diff --git a/internal/config_loader/validator_test.go b/internal/config_loader/validator_test.go
index 0cabf31..01c7058 100644
--- a/internal/config_loader/validator_test.go
+++ b/internal/config_loader/validator_test.go
@@ -559,3 +559,351 @@ func TestFieldNameCachePopulated(t *testing.T) {
 		})
 	}
 }
+
+func TestTransportConfigGetClientType(t *testing.T) {
+	t.Run("nil transport returns kubernetes", func(t *testing.T) {
+		var tc *TransportConfig
+		assert.Equal(t, TransportClientKubernetes, tc.GetClientType())
+	})
+
+	t.Run("empty client returns kubernetes", func(t *testing.T) {
+		tc := &TransportConfig{}
+		assert.Equal(t, TransportClientKubernetes, tc.GetClientType())
+	})
+
+	t.Run("kubernetes client returns kubernetes", func(t *testing.T) {
+		tc := &TransportConfig{Client: TransportClientKubernetes}
+		assert.Equal(t, TransportClientKubernetes, tc.GetClientType())
+	})
+
+	t.Run("maestro client returns maestro", func(t *testing.T) {
+		tc := &TransportConfig{Client: TransportClientMaestro}
+		assert.Equal(t, TransportClientMaestro, tc.GetClientType())
+	})
+}
+
+func TestValidateTransportConfig(t *testing.T) {
+	validManifest := map[string]interface{}{
+		"apiVersion": "v1",
+		"kind":       "ConfigMap",
+		"metadata": map[string]interface{}{
+			"name":      "test-cm",
+			"namespace": "default",
+		},
+	}
+
+	// Helper to create resource with transport config for Kubernetes transport
+	withK8sTransport := func(transport *TransportConfig) *AdapterTaskConfig {
+		cfg := baseTaskConfig()
+		cfg.Spec.Params = []Parameter{
+			{Name: "targetCluster", Source: "event.targetCluster"},
+		}
+		cfg.Spec.Resources = []Resource{{
+			Name:      "testResource",
+			Transport: transport,
+			Manifest:  validManifest,
+			Discovery: &DiscoveryConfig{
+				Namespace: "default",
+				ByName:    "test-cm",
+			},
+		}}
+		return cfg
+	}
+
+	// Helper to create resource with transport config for Maestro transport
+	withMaestroTransport := func(transport *TransportConfig) *AdapterTaskConfig {
+		cfg := baseTaskConfig()
+		cfg.Spec.Params = []Parameter{
+			{Name: "targetCluster", Source: "event.targetCluster"},
+		}
+		cfg.Spec.Resources = []Resource{{
+			Name:      "testResource",
+			Transport: transport,
+			Manifests: []NamedManifest{
+				{Name: "configmap", Manifest: validManifest},
+			},
+			Discovery: &DiscoveryConfig{
+				Namespace: "default",
+				ByName:    "test-cm",
+			},
+		}}
+		return cfg
+	}
+
+	t.Run("valid kubernetes transport", func(t *testing.T) {
+		cfg := withK8sTransport(&TransportConfig{Client: TransportClientKubernetes})
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		require.NoError(t, v.ValidateSemantic())
+	})
+
+	t.Run("valid nil transport defaults to kubernetes", func(t *testing.T) {
+		cfg := withK8sTransport(nil)
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		require.NoError(t, v.ValidateSemantic())
+	})
+
+	t.Run("valid maestro transport", func(t *testing.T) {
+		cfg := withMaestroTransport(&TransportConfig{
+			Client: TransportClientMaestro,
+			Maestro: &MaestroTransportConfig{
+				TargetCluster: "{{ .targetCluster }}",
+			},
+		})
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		require.NoError(t, v.ValidateSemantic())
+	})
+
+	t.Run("valid maestro transport with manifestWork name", func(t *testing.T) {
+		cfg := withMaestroTransport(&TransportConfig{
+			Client: TransportClientMaestro,
+			Maestro: &MaestroTransportConfig{
+				TargetCluster: "{{ .targetCluster }}",
+				ManifestWork: &ManifestWorkConfig{
+					Name: "work-{{ .targetCluster }}",
+				},
+			},
+		})
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		require.NoError(t, v.ValidateSemantic())
+	})
+
+	t.Run("invalid maestro transport missing maestro config", func(t *testing.T) {
+		cfg := withMaestroTransport(&TransportConfig{
+			Client: TransportClientMaestro,
+		})
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		// Semantic validation catches missing maestro config
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "maestro configuration is required")
+	})
+
+	t.Run("invalid maestro transport missing targetCluster", func(t *testing.T) {
+		cfg := withMaestroTransport(&TransportConfig{
+			Client:  TransportClientMaestro,
+			Maestro: &MaestroTransportConfig{},
+		})
+		v := newTaskValidator(cfg)
+		// Struct validation catches this via required tag on TargetCluster
+		err := v.ValidateStructure()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "targetCluster")
+	})
+
+	t.Run("invalid maestro transport undefined template variable", func(t *testing.T) {
+		cfg := withMaestroTransport(&TransportConfig{
+			Client: TransportClientMaestro,
+			Maestro: &MaestroTransportConfig{
+				TargetCluster: "{{ .undefinedVar }}",
+			},
+		})
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "undefined template variable")
+	})
+}
+
+func TestValidateManifestFields(t *testing.T) {
+	validManifest := map[string]interface{}{
+		"apiVersion": "v1",
+		"kind":       "Namespace",
+		"metadata":   map[string]interface{}{"name": "test-namespace"},
+	}
+
+	t.Run("kubernetes transport requires manifest field", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Params = []Parameter{
+			{Name: "targetCluster", Source: "event.targetCluster"},
+		}
+		cfg.Spec.Resources = []Resource{{
+			Name:      "testResource",
+			Transport: &TransportConfig{Client: TransportClientKubernetes},
+			// Missing Manifest field
+			Discovery: &DiscoveryConfig{Namespace: "*", ByName: "test"},
+		}}
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "kubernetes transport requires 'manifest' field")
+	})
+
+	t.Run("kubernetes transport does not support manifests array", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Resources = []Resource{{
+			Name:      "testResource",
+			Transport: &TransportConfig{Client: TransportClientKubernetes},
+			Manifest:  validManifest,
+			Manifests: []NamedManifest{
+				{Name: "ns", Manifest: validManifest},
+			},
+			Discovery: &DiscoveryConfig{Namespace: "*", ByName: "test"},
+		}}
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "kubernetes transport does not support 'manifests' array")
+	})
+
+	t.Run("maestro transport requires manifests array", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Params = []Parameter{
+			{Name: "targetCluster", Source: "event.targetCluster"},
+		}
+		cfg.Spec.Resources = []Resource{{
+			Name: "testResource",
+			Transport: &TransportConfig{
+				Client:  TransportClientMaestro,
+				Maestro: &MaestroTransportConfig{TargetCluster: "{{ .targetCluster }}"},
+			},
+			// Missing Manifests array
+			Discovery: &DiscoveryConfig{Namespace: "*", ByName: "test"},
+		}}
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "maestro transport requires 'manifests' array")
+	})
+
+	t.Run("maestro transport does not support manifest field", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Params = []Parameter{
+			{Name: "targetCluster", Source: "event.targetCluster"},
+		}
+		cfg.Spec.Resources = []Resource{{
+			Name: "testResource",
+			Transport: &TransportConfig{
+				Client:  TransportClientMaestro,
+				Maestro: &MaestroTransportConfig{TargetCluster: "{{ .targetCluster }}"},
+			},
+			Manifest: validManifest, // Should not be used with maestro
+			Manifests: []NamedManifest{
+				{Name: "ns", Manifest: validManifest},
+			},
+			Discovery: &DiscoveryConfig{Namespace: "*", ByName: "test"},
+		}}
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "maestro transport uses 'manifests' array, not 'manifest'")
+	})
+
+	t.Run("valid maestro transport with manifests array", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Params = []Parameter{
+			{Name: "targetCluster", Source: "event.targetCluster"},
+		}
+		cfg.Spec.Resources = []Resource{{
+			Name: "testResource",
+			Transport: &TransportConfig{
+				Client:  TransportClientMaestro,
+				Maestro: &MaestroTransportConfig{TargetCluster: "{{ .targetCluster }}"},
+			},
+			Manifests: []NamedManifest{
+				{Name: "namespace", Manifest: validManifest},
+				{Name: "configmap", Manifest: map[string]interface{}{
+					"apiVersion": "v1",
+					"kind":       "ConfigMap",
+					"metadata":   map[string]interface{}{"name": "test-cm", "namespace": "default"},
+				}},
+			},
+			Discovery: &DiscoveryConfig{Namespace: "*", ByName: "test"},
+		}}
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		require.NoError(t, v.ValidateSemantic())
+	})
+
+	t.Run("named manifest requires manifest or manifestRef", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Params = []Parameter{
+			{Name: "targetCluster", Source: "event.targetCluster"},
+		}
+		cfg.Spec.Resources = []Resource{{
+			Name: "testResource",
+			Transport: &TransportConfig{
+				Client:  TransportClientMaestro,
+				Maestro: &MaestroTransportConfig{TargetCluster: "{{ .targetCluster }}"},
+			},
+			Manifests: []NamedManifest{
+				{Name: "empty"}, // Missing both manifest and manifestRef
+			},
+			Discovery: &DiscoveryConfig{Namespace: "*", ByName: "test"},
+		}}
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "named manifest requires either 'manifest' or 'manifestRef'")
+	})
+
+	t.Run("named manifest cannot have both manifest and manifestRef", func(t *testing.T) {
+		cfg := baseTaskConfig()
+		cfg.Spec.Params = []Parameter{
+			{Name: "targetCluster", Source: "event.targetCluster"},
+		}
+		cfg.Spec.Resources = []Resource{{
+			Name: "testResource",
+			Transport: &TransportConfig{
+				Client:  TransportClientMaestro,
+				Maestro: &MaestroTransportConfig{TargetCluster: "{{ .targetCluster }}"},
+			},
+			Manifests: []NamedManifest{
+				{Name: "conflicting", Manifest: validManifest, ManifestRef: "templates/manifest.yaml"},
+			},
+			Discovery: &DiscoveryConfig{Namespace: "*", ByName: "test"},
+		}}
+		v := newTaskValidator(cfg)
+		require.NoError(t, v.ValidateStructure())
+		err := v.ValidateSemantic()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "'manifest' and 'manifestRef' are mutually exclusive")
+	})
+}
+
+func TestNamedManifestAccessors(t *testing.T) {
+	t.Run("GetManifestContent returns manifest when set", func(t *testing.T) {
+		manifest := map[string]interface{}{"apiVersion": "v1", "kind": "Namespace"}
+		nm := &NamedManifest{Name: "test", Manifest: manifest}
+		content := nm.GetManifestContent()
+		assert.Equal(t, manifest, content)
+	})
+
+	t.Run("GetManifestContent returns refContent when set", func(t *testing.T) {
+		manifest := map[string]interface{}{"apiVersion": "v1", "kind": "Namespace"}
+		refContent := map[string]interface{}{"apiVersion": "v1", "kind": "ConfigMap"}
+		nm := &NamedManifest{Name: "test", Manifest: manifest, ManifestRefContent: refContent}
+		content := nm.GetManifestContent()
+		assert.Equal(t, refContent, content, "should prefer refContent over manifest")
+	})
+
+	t.Run("GetManifestContent returns nil for nil receiver", func(t *testing.T) {
+		var nm *NamedManifest
+		assert.Nil(t, nm.GetManifestContent())
+	})
+
+	t.Run("HasManifestRef returns true when manifestRef is set", func(t *testing.T) {
+		nm := &NamedManifest{Name: "test", ManifestRef: "templates/manifest.yaml"}
+		assert.True(t, nm.HasManifestRef())
+	})
+
+	t.Run("HasManifestRef returns false when manifestRef is empty", func(t *testing.T) {
+		nm := &NamedManifest{Name: "test", Manifest: map[string]interface{}{}}
+		assert.False(t, nm.HasManifestRef())
+	})
+
+	t.Run("HasManifestRef returns false for nil receiver", func(t *testing.T) {
+		var nm *NamedManifest
+		assert.False(t, nm.HasManifestRef())
+	})
+}
diff --git a/internal/executor/executor.go b/internal/executor/executor.go
index f5b4134..724f848 100644
--- a/internal/executor/executor.go
+++ b/internal/executor/executor.go
@@ -10,7 +10,7 @@ import (
 	"github.com/cloudevents/sdk-go/v2/event"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/config_loader"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/hyperfleet_api"
-	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/k8s_client"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger"
 	pkgotel "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/otel"
 	"go.opentelemetry.io/otel"
@@ -44,7 +44,7 @@ func validateExecutorConfig(config *ExecutorConfig) error {
 	requiredFields := []string{
 		"APIClient",
 		"Logger",
-		"K8sClient"}
+		"TransportClient"}
 
 	for _, field := range requiredFields {
 		if reflect.ValueOf(config).Elem().FieldByName(field).IsNil() {
@@ -208,7 +208,7 @@ func (e *Executor) Execute(ctx context.Context, data interface{}) *ExecutionResu
 // executeParamExtraction extracts parameters from the event and environment
 func (e *Executor) executeParamExtraction(execCtx *ExecutionContext) error {
 	// Extract configured parameters
-	if err := extractConfigParams(e.config.Config, execCtx, e.config.K8sClient); err != nil {
+	if err := extractConfigParams(e.config.Config, execCtx); err != nil {
 		return err
 	}
 
@@ -334,9 +334,9 @@ func (b *ExecutorBuilder) WithAPIClient(client hyperfleet_api.Client) *ExecutorB
 	return b
 }
 
-// WithK8sClient sets the Kubernetes client
-func (b *ExecutorBuilder) WithK8sClient(client k8s_client.K8sClient) *ExecutorBuilder {
-	b.config.K8sClient = client
+// WithTransportClient sets the transport client for resource operations
+func (b *ExecutorBuilder) WithTransportClient(client transport_client.TransportClient) *ExecutorBuilder {
+	b.config.TransportClient = client
 	return b
 }
diff --git a/internal/executor/executor_test.go b/internal/executor/executor_test.go
index 38f668b..f136ebc 100644
--- a/internal/executor/executor_test.go
+++ b/internal/executor/executor_test.go
@@ -58,10 +58,10 @@ func TestNewExecutor(t *testing.T) {
 		{
 			name: "valid config",
 			config: &ExecutorConfig{
-				Config:    &config_loader.Config{},
-				APIClient: newMockAPIClient(),
-				K8sClient: k8s_client.NewMockK8sClient(),
-				Logger:    logger.NewTestLogger(),
+				Config:          &config_loader.Config{},
+				APIClient:       newMockAPIClient(),
+				TransportClient: k8s_client.NewMockK8sClient(),
+				Logger:          logger.NewTestLogger(),
 			},
 			expectError: false,
 		},
@@ -89,7 +89,7 @@ func TestExecutorBuilder(t *testing.T) {
 	exec, err := NewBuilder().
 		WithConfig(config).
 		WithAPIClient(newMockAPIClient()).
-		WithK8sClient(k8s_client.NewMockK8sClient()).
+		WithTransportClient(k8s_client.NewMockK8sClient()).
 		WithLogger(logger.NewTestLogger()).
 		Build()
 
@@ -257,7 +257,7 @@ func TestExecute_ParamExtraction(t *testing.T) {
 			exec, err := NewBuilder().
 				WithConfig(config).
 				WithAPIClient(newMockAPIClient()).
-				WithK8sClient(k8s_client.NewMockK8sClient()).
+				WithTransportClient(k8s_client.NewMockK8sClient()).
 				WithLogger(logger.NewTestLogger()).
 				Build()
@@ -362,7 +362,7 @@ func TestParamExtractor(t *testing.T) {
 			}
 
 			// Extract params using pure function
-			err := extractConfigParams(config, execCtx, nil)
+			err := extractConfigParams(config, execCtx)
 
 			if tt.expectError {
 				assert.Error(t, err)
@@ -508,7 +508,7 @@ func TestSequentialExecution_Preconditions(t *testing.T) {
 			exec, err := NewBuilder().
 				WithConfig(config).
 				WithAPIClient(newMockAPIClient()).
-				WithK8sClient(k8s_client.NewMockK8sClient()).
+				WithTransportClient(k8s_client.NewMockK8sClient()).
 				WithLogger(logger.NewTestLogger()).
 				Build()
@@ -610,7 +610,7 @@ func TestSequentialExecution_Resources(t *testing.T) {
 			exec, err := NewBuilder().
 				WithConfig(config).
 				WithAPIClient(newMockAPIClient()).
-				WithK8sClient(k8s_client.NewMockK8sClient()).
+				WithTransportClient(k8s_client.NewMockK8sClient()).
 				WithLogger(logger.NewTestLogger()).
 				Build()
@@ -681,7 +681,7 @@ func TestSequentialExecution_PostActions(t *testing.T) {
 			exec, err := NewBuilder().
 				WithConfig(config).
 				WithAPIClient(mockClient).
-				WithK8sClient(k8s_client.NewMockK8sClient()).
+				WithTransportClient(k8s_client.NewMockK8sClient()).
 				WithLogger(logger.NewTestLogger()).
 				Build()
@@ -751,7 +751,7 @@ func TestSequentialExecution_SkipReasonCapture(t *testing.T) {
 			exec, err := NewBuilder().
 				WithConfig(config).
 				WithAPIClient(newMockAPIClient()).
-				WithK8sClient(k8s_client.NewMockK8sClient()).
+				WithTransportClient(k8s_client.NewMockK8sClient()).
 				WithLogger(logger.NewTestLogger()).
 				Build()
diff --git a/internal/executor/param_extractor.go b/internal/executor/param_extractor.go
index 87728a3..08763d9 100644
--- a/internal/executor/param_extractor.go
+++ b/internal/executor/param_extractor.go
@@ -1,7 +1,6 @@
 package executor
 
 import (
-	"context"
 	"fmt"
 	"math"
 	"os"
@@ -9,7 +8,6 @@ import (
 	"strings"
 
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/config_loader"
-	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/k8s_client"
 )
 
 // ParamConfig interface allows extractConfigParams to work with both AdapterConfig and Config
@@ -20,9 +18,9 @@ type ParamConfig interface {
 
 // extractConfigParams extracts all configured parameters and populates execCtx.Params
 // This is a pure function that directly modifies execCtx for simplicity
-func extractConfigParams(config ParamConfig, execCtx *ExecutionContext, k8sClient k8s_client.K8sClient) error {
+func extractConfigParams(config ParamConfig, execCtx *ExecutionContext) error {
 	for _, param := range config.GetParams() {
-		value, err := extractParam(execCtx.Ctx, param, execCtx.EventData, k8sClient)
+		value, err := extractParam(param, execCtx.EventData)
 		if err != nil {
 			if param.Required {
 				return NewExecutorError(PhaseParamExtraction, param.Name,
@@ -70,7 +68,7 @@ func extractConfigParams(config ParamConfig, execCtx *ExecutionContext, k8sClien
 }
 
 // extractParam extracts a single parameter based on its source
-func extractParam(ctx context.Context, param config_loader.Parameter, eventData map[string]interface{}, k8sClient k8s_client.K8sClient) (interface{}, error) {
+func extractParam(param config_loader.Parameter, eventData map[string]interface{}) (interface{}, error) {
 	source := param.Source
 
 	// Handle different source types
@@ -79,10 +77,6 @@ func extractParam(ctx context.Context, param config_loader.Parameter, eventData
 		return extractFromEnv(source[4:])
 	case strings.HasPrefix(source, "event."):
 		return extractFromEvent(source[6:], eventData)
-	case strings.HasPrefix(source, "secret."):
-		return extractFromSecret(ctx, source[7:], k8sClient)
-	case strings.HasPrefix(source, "configmap."):
-		return extractFromConfigMap(ctx, source[10:], k8sClient)
 	case source == "":
 		// No source specified, return default or nil
 		return param.Default, nil
@@ -128,36 +122,6 @@ func extractFromEvent(path string, eventData map[string]interface{}) (interface{
 	return current, nil
 }
 
-// extractFromSecret extracts a value from a Kubernetes Secret
-// Format: secret... (namespace is required)
-func extractFromSecret(ctx context.Context, path string, k8sClient k8s_client.K8sClient) (interface{}, error) {
-	if k8sClient == nil {
-		return nil, fmt.Errorf("kubernetes client not configured, cannot extract from secret")
-	}
-
-	value, err := k8sClient.ExtractFromSecret(ctx, path)
-	if err != nil {
-		return nil, err
-	}
-
-	return value, nil
-}
-
-// extractFromConfigMap extracts a value from a Kubernetes ConfigMap
-// Format: configmap... (namespace is required)
-func extractFromConfigMap(ctx context.Context, path string, k8sClient k8s_client.K8sClient) (interface{}, error) {
-	if k8sClient == nil {
-		return nil, fmt.Errorf("kubernetes client not configured, cannot extract from configmap")
-	}
-
-	value, err := k8sClient.ExtractFromConfigMap(ctx, path)
-	if err != nil {
-		return nil, err
-	}
-
-	return value, nil
-}
-
 // addMetadataParams adds adapter and event metadata to execCtx.Params
 func addMetadataParams(config ParamConfig, execCtx *ExecutionContext) {
 	metadata := config.GetMetadata()
diff --git a/internal/executor/resource_executor.go b/internal/executor/resource_executor.go
index 4e398ca..e36e972 100644
--- a/internal/executor/resource_executor.go
+++ b/internal/executor/resource_executor.go
@@ -3,32 +3,31 @@ package executor
 import (
 	"context"
 	"fmt"
-	"strings"
-	"time"
 
 	"github.com/mitchellh/copystructure"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/config_loader"
-	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/generation"
-	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/k8s_client"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/manifest"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/constants"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger"
 	apierrors "k8s.io/apimachinery/pkg/api/errors"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
 	"k8s.io/apimachinery/pkg/runtime/schema"
+	"sigs.k8s.io/yaml"
 )
 
 // ResourceExecutor creates and updates Kubernetes resources
 type ResourceExecutor struct {
-	k8sClient k8s_client.K8sClient
-	log       logger.Logger
+	client transport_client.TransportClient
+	log    logger.Logger
 }
 
 // newResourceExecutor creates a new resource executor
 // NOTE: Caller (NewExecutor) is responsible for config validation
 func newResourceExecutor(config *ExecutorConfig) *ResourceExecutor {
 	return &ResourceExecutor{
-		k8sClient: config.K8sClient,
-		log:       config.Logger,
+		client: config.TransportClient,
+		log:    config.Logger,
 	}
 }
 
@@ -52,128 +51,177 @@ func (re *ResourceExecutor) ExecuteAll(ctx context.Context, resources []config_l
 	return results, nil
 }
 
-// executeResource creates or updates a single Kubernetes resource
+// executeResource creates or updates a single resource.
+// The executor no longer needs to know which transport it's talking to.
+// For K8s transport: builds a single manifest, discovers existing, calls ApplyResources.
+// For Maestro transport: builds multiple manifests, populates TransportConfig in ApplyOptions,
+// calls the same ApplyResources — the Maestro client internally builds the ManifestWork.
 func (re *ResourceExecutor) executeResource(ctx context.Context, resource config_loader.Resource, execCtx *ExecutionContext) (ResourceResult, error) {
 	result := ResourceResult{
 		Name:   resource.Name,
 		Status: StatusSuccess,
 	}
 
-	// Step 1: Build the manifest
-	re.log.Debugf(ctx, "Building manifest from config")
-	manifest, err := re.buildManifest(ctx, resource, execCtx)
-	if err != nil {
+	if re.client == nil {
 		result.Status = StatusFailed
-		result.Error = err
-		return result, NewExecutorError(PhaseResources, resource.Name, "failed to build manifest", err)
+		result.Error = fmt.Errorf("transport client not configured")
+		return result, NewExecutorError(PhaseResources, resource.Name, "transport client not configured", result.Error)
 	}
 
-	// Extract resource info
-	gvk := manifest.GroupVersionKind()
-	result.Kind = gvk.Kind
-	result.Namespace = manifest.GetNamespace()
-	result.ResourceName = manifest.GetName()
-
-	// Add K8s resource context fields for logging (separate from event resource_type/resource_id)
-	ctx = logger.WithK8sKind(ctx, result.Kind)
-	ctx = logger.WithK8sName(ctx, result.ResourceName)
-	ctx = logger.WithK8sNamespace(ctx, result.Namespace)
+	clientType := resource.Transport.GetClientType()
 
-	re.log.Debugf(ctx, "Resource[%s] manifest built: namespace=%s", resource.Name, manifest.GetNamespace())
+	var manifests []*unstructured.Unstructured
 
-	// Step 2: Delegate to applyResource which handles discovery, generation comparison, and operations
-	return re.applyResource(ctx, resource, manifest, execCtx)
-}
+	switch clientType {
+	case config_loader.TransportClientKubernetes:
+		// Build single manifest for Kubernetes transport
+		re.log.Debugf(ctx, "Building manifest from config for Kubernetes transport")
+		m, err := re.buildManifestK8s(ctx, resource, execCtx)
+		if err != nil {
+			result.Status = StatusFailed
+			result.Error = err
+			return result, NewExecutorError(PhaseResources, resource.Name, "failed to build manifest", err)
+		}
+		manifests = []*unstructured.Unstructured{m}
 
-// applyResource handles resource discovery, generation comparison, and execution of operations.
-// It discovers existing resources (via Discovery config or by name), compares generations,
-// and performs the appropriate operation (create, update, recreate, or skip).
-func (re *ResourceExecutor) applyResource(ctx context.Context, resource config_loader.Resource, manifest *unstructured.Unstructured, execCtx *ExecutionContext) (ResourceResult, error) {
-	result := ResourceResult{
-		Name:         resource.Name,
-		Kind:         manifest.GetKind(),
-		Namespace:    manifest.GetNamespace(),
-		ResourceName: manifest.GetName(),
-		Status:       StatusSuccess,
-	}
+	case config_loader.TransportClientMaestro:
+		// Build multiple manifests for Maestro transport
+		re.log.Debugf(ctx, "Building manifests from config for Maestro transport")
+		built, err := re.buildManifestsMaestro(ctx, resource, execCtx)
+		if err != nil {
+			result.Status = StatusFailed
+			result.Error = err
+			return result, NewExecutorError(PhaseResources, resource.Name, "failed to build manifests", err)
+		}
+		manifests = built
+		re.log.Debugf(ctx, "Resource[%s] built %d manifests for ManifestWork", resource.Name, len(manifests))
 
-	if re.k8sClient == nil {
+	default:
 		result.Status = StatusFailed
-		result.Error = fmt.Errorf("kubernetes client not configured")
-		return result, NewExecutorError(PhaseResources, resource.Name, "kubernetes client not configured", result.Error)
-	}
+		result.Error = fmt.Errorf("unsupported transport client: %s", clientType)
+		return result, NewExecutorError(PhaseResources, resource.Name, "unsupported transport client", result.Error)
+	}
+
+	// Set result info from first manifest
+	if len(manifests) > 0 {
+		firstManifest := manifests[0]
+		gvk := firstManifest.GroupVersionKind()
+		result.Kind = gvk.Kind
+		result.Namespace = firstManifest.GetNamespace()
+		result.ResourceName = firstManifest.GetName()
+
+		// Add K8s resource context fields for logging
+		ctx = logger.WithK8sKind(ctx, result.Kind)
+		ctx = logger.WithK8sName(ctx, result.ResourceName)
+		ctx = logger.WithK8sNamespace(ctx, result.Namespace)
+	}
+
+	// Add observed_generation to context
+	if len(manifests) > 0 {
+		manifestGen := manifest.GetGenerationFromUnstructured(manifests[0])
+		ctx = logger.WithObservedGeneration(ctx, manifestGen)
+	}
+
+	// Build resources to apply with discovery
+	resourcesToApply := make([]transport_client.ResourceToApply, 0, len(manifests))
+
+	if clientType == config_loader.TransportClientKubernetes {
+		// For K8s: discover existing resource for generation comparison
+		m := manifests[0]
+		gvk := m.GroupVersionKind()
+
+		var existingResource *unstructured.Unstructured
+		var err error
+		if resource.Discovery != nil {
+			re.log.Debugf(ctx, "Discovering existing resource using discovery config...")
+			existingResource, err = re.discoverExistingResource(ctx, gvk, resource.Discovery, execCtx)
+		} else {
+			re.log.Debugf(ctx, "Looking up existing resource by name...")
+			existingResource, err = re.client.GetResource(ctx, gvk, m.GetNamespace(), m.GetName())
+		}
+
+		if err != nil && !apierrors.IsNotFound(err) {
+			result.Status = StatusFailed
+			result.Error = err
+			return result, NewExecutorError(PhaseResources, resource.Name, "failed to find existing resource", err)
+		}
 
-	gvk := manifest.GroupVersionKind()
+		if existingResource != nil {
+			re.log.Debugf(ctx, "Existing resource found: %s/%s", existingResource.GetNamespace(), existingResource.GetName())
+		} else {
+			re.log.Debugf(ctx, "No existing resource found, will create")
+		}
 
-	// Discover existing resource
-	var existingResource *unstructured.Unstructured
-	var err error
-	if resource.Discovery != nil {
-		// Use Discovery config to find existing resource (e.g., by label selector)
-		re.log.Debugf(ctx, "Discovering existing resource using discovery config...")
-		existingResource, err = re.discoverExistingResource(ctx, gvk, resource.Discovery, execCtx)
+		resourcesToApply = append(resourcesToApply, transport_client.ResourceToApply{
+			Manifest:         m,
+			ExistingResource: existingResource,
+		})
 	} else {
-		// No Discovery config - lookup by name from manifest
-		re.log.Debugf(ctx, "Looking up existing resource by name...")
-		existingResource, err = re.k8sClient.GetResource(ctx, gvk, manifest.GetNamespace(), manifest.GetName())
+		// For Maestro: no individual resource discovery, just bundle all manifests
+		for _, m := range manifests {
+			resourcesToApply = append(resourcesToApply, transport_client.ResourceToApply{
+				Manifest:         m,
+				ExistingResource: nil,
+			})
+		}
 	}
 
-	// Fail fast on any error except NotFound (which means resource doesn't exist yet)
-	if err != nil && !apierrors.IsNotFound(err) {
-		result.Status = StatusFailed
-		result.Error = err
-		return result, NewExecutorError(PhaseResources, resource.Name, "failed to find existing resource", err)
+	// Build ApplyOptions
+	opts := transport_client.ApplyOptions{
+		RecreateOnChange: resource.RecreateOnChange,
 	}
 
-	if existingResource != nil {
-		re.log.Debugf(ctx, "Existing resource found: %s/%s", existingResource.GetNamespace(), existingResource.GetName())
-	} else {
-		re.log.Debugf(ctx, "No existing resource found, will create")
-	}
+	// For Maestro transport, populate TransportConfig
+	if clientType == config_loader.TransportClientMaestro {
+		if resource.Transport == nil || resource.Transport.Maestro == nil {
+			result.Status = StatusFailed
+			result.Error = fmt.Errorf("maestro transport configuration missing")
+			return result, NewExecutorError(PhaseResources, resource.Name, "maestro transport configuration missing", result.Error)
+		}
 
-	// Extract manifest generation once for use in comparison and logging
-	manifestGen := generation.GetGenerationFromUnstructured(manifest)
+		maestroConfig := resource.Transport.Maestro
 
-	// Add observed_generation to context early so it appears in all subsequent logs
-	ctx = logger.WithObservedGeneration(ctx, manifestGen)
+		// Render targetCluster template
+		targetCluster, err := renderTemplate(maestroConfig.TargetCluster, execCtx.Params)
+		if err != nil {
+			result.Status = StatusFailed
+			result.Error = fmt.Errorf("failed to render targetCluster template: %w", err)
+			return result, NewExecutorError(PhaseResources, resource.Name, "failed to render targetCluster template", err)
+		}
 
-	// Get existing generation (0 if not found)
-	var existingGen int64
-	if existingResource != nil {
-		existingGen = generation.GetGenerationFromUnstructured(existingResource)
-	}
+		re.log.Debugf(ctx, "Resource[%s] using Maestro transport to cluster=%s with %d manifests",
+			resource.Name, targetCluster, len(manifests))
 
-	// Compare generations to determine operation
-	decision := generation.CompareGenerations(manifestGen, existingGen, existingResource != nil)
+		tc := map[string]interface{}{
+			"targetCluster": targetCluster,
+			"resourceName":  resource.Name,
+			"params":        execCtx.Params,
+		}
 
-	// Handle recreateOnChange override
-	result.Operation = decision.Operation
-	result.OperationReason = decision.Reason
-	if decision.Operation == generation.OperationUpdate && resource.RecreateOnChange {
-		result.Operation = generation.OperationRecreate
-		result.OperationReason = fmt.Sprintf("%s, recreateOnChange=true", decision.Reason)
-	}
+		// Render manifestWork name if configured
+		if maestroConfig.ManifestWork != nil && maestroConfig.ManifestWork.Name != "" {
+			workName, err := renderTemplate(maestroConfig.ManifestWork.Name, execCtx.Params)
+			if err != nil {
+				result.Status = StatusFailed
+				result.Error = fmt.Errorf("failed to render manifestWork.name template: %w", err)
+				return result, NewExecutorError(PhaseResources, resource.Name, "failed to render manifestWork.name template", err)
+			}
+			tc["manifestWorkName"] = workName
+		}
 
-	// Log the operation decision
-	re.log.Infof(ctx, "Resource[%s] is processing: operation=%s reason=%s",
-		resource.Name, strings.ToUpper(string(result.Operation)), result.OperationReason)
+		// Pass refContent if present
+		if maestroConfig.ManifestWork != nil && maestroConfig.ManifestWork.RefContent != nil {
+			tc["manifestWorkRefContent"] = maestroConfig.ManifestWork.RefContent
+		}
 
-	// Execute the operation
-	switch result.Operation {
-	case generation.OperationCreate:
-		result.Resource, err = re.createResource(ctx, manifest)
-	case generation.OperationUpdate:
-		result.Resource, err = re.updateResource(ctx, existingResource, manifest)
-	case generation.OperationRecreate:
-		result.Resource, err = re.recreateResource(ctx, existingResource, manifest)
-	case generation.OperationSkip:
-		result.Resource = existingResource
+		opts.TransportConfig = tc
 	}
 
+	// Call ApplyResources uniformly
+	applyResults, err := re.client.ApplyResources(ctx, resourcesToApply, opts)
 	if err != nil {
 		result.Status = StatusFailed
 		result.Error = err
-		// Set ExecutionError for K8s operation failure
 		execCtx.Adapter.ExecutionError = &ExecutionError{
 			Phase: string(PhaseResources),
 			Step:  resource.Name,
@@ -181,26 +229,80 @@ func (re *ResourceExecutor) applyResource(ctx context.Context, resource config_l
 		}
 		errCtx := logger.WithK8sResult(ctx, "FAILED")
 		errCtx = logger.WithErrorField(errCtx, err)
+		re.log.Errorf(errCtx, "Resource[%s] apply failed", resource.Name)
+		// Log manifests for debugging
+		for i, m := range manifests {
+			if manifestYAML, marshalErr := yaml.Marshal(m.Object); marshalErr == nil {
+				re.log.Debugf(errCtx, "Resource[%s] failed manifest[%d]:\n%s", resource.Name, i, string(manifestYAML))
+			}
+		}
+		return result, NewExecutorError(PhaseResources, resource.Name, "failed to apply resource", err)
+	}
+
+	// Check for per-resource errors
+	if applyResults != nil && len(applyResults.Results) > 0 && applyResults.Results[0].Error != nil {
+		applyErr := applyResults.Results[0].Error
+		result.Status = StatusFailed
+		result.Error = applyErr
+		result.Operation = manifest.Operation(applyResults.Results[0].Operation)
+		result.OperationReason = applyResults.Results[0].Reason
+		execCtx.Adapter.ExecutionError = 
&ExecutionError{ + Phase: string(PhaseResources), + Step: resource.Name, + Message: applyErr.Error(), + } + errCtx := logger.WithK8sResult(ctx, "FAILED") + errCtx = logger.WithErrorField(errCtx, applyErr) re.log.Errorf(errCtx, "Resource[%s] processed: operation=%s reason=%s", resource.Name, result.Operation, result.OperationReason) + if manifestYAML, marshalErr := yaml.Marshal(manifests[0].Object); marshalErr == nil { + re.log.Debugf(errCtx, "Resource[%s] failed manifest:\n%s", resource.Name, string(manifestYAML)) + } return result, NewExecutorError(PhaseResources, resource.Name, - fmt.Sprintf("failed to %s resource", result.Operation), err) + fmt.Sprintf("failed to %s resource", result.Operation), applyErr) } + + // Extract result from ApplyResources + if applyResults != nil && len(applyResults.Results) > 0 { + applyResult := applyResults.Results[0] + result.Operation = manifest.Operation(applyResult.Operation) + result.OperationReason = applyResult.Reason + result.Resource = applyResult.Resource + } + successCtx := logger.WithK8sResult(ctx, "SUCCESS") re.log.Infof(successCtx, "Resource[%s] processed: operation=%s reason=%s", resource.Name, result.Operation, result.OperationReason) - // Store resource in execution context - if result.Resource != nil { - execCtx.Resources[resource.Name] = result.Resource - re.log.Debugf(ctx, "Resource stored in context as '%s'", resource.Name) + // Store resources in execution context + if clientType == config_loader.TransportClientMaestro { + // Store each manifest by compound name (resource.manifestName) + for i, m := range manifests { + if i < len(resource.Manifests) { + manifestName := resource.Manifests[i].Name + key := resource.Name + "." 
+ manifestName + execCtx.Resources[key] = m + re.log.Debugf(ctx, "Resource stored in context as '%s'", key) + } + } + // Store first manifest under resource name for convenience + if len(manifests) > 0 { + execCtx.Resources[resource.Name] = manifests[0] + re.log.Debugf(ctx, "First manifest also stored in context as '%s'", resource.Name) + } + } else { + // K8s transport: store single resource + if result.Resource != nil { + execCtx.Resources[resource.Name] = result.Resource + re.log.Debugf(ctx, "Resource stored in context as '%s'", resource.Name) + } } return result, nil } -// buildManifest builds an unstructured manifest from the resource configuration -func (re *ResourceExecutor) buildManifest(ctx context.Context, resource config_loader.Resource, execCtx *ExecutionContext) (*unstructured.Unstructured, error) { +// buildManifestK8s builds an unstructured manifest from the resource configuration for Kubernetes transport +func (re *ResourceExecutor) buildManifestK8s(ctx context.Context, resource config_loader.Resource, execCtx *ExecutionContext) (*unstructured.Unstructured, error) { var manifestData map[string]interface{} // Get manifest (inline or loaded from ref) @@ -237,6 +339,49 @@ func (re *ResourceExecutor) buildManifest(ctx context.Context, resource config_l return obj, nil } +// buildManifestsMaestro builds unstructured manifests from the resource.Manifests array for Maestro transport +func (re *ResourceExecutor) buildManifestsMaestro(ctx context.Context, resource config_loader.Resource, execCtx *ExecutionContext) ([]*unstructured.Unstructured, error) { + results := make([]*unstructured.Unstructured, 0, len(resource.Manifests)) + + for i, nm := range resource.Manifests { + content := nm.GetManifestContent() + if content == nil { + return nil, fmt.Errorf("manifest[%d] (%s) has no content", i, nm.Name) + } + + var manifestData map[string]interface{} + switch m := content.(type) { + case map[string]interface{}: + manifestData = m + case 
map[interface{}]interface{}: + manifestData = convertToStringKeyMap(m) + default: + return nil, fmt.Errorf("manifest[%d] (%s): unsupported manifest type: %T", i, nm.Name, content) + } + + // Deep copy to avoid modifying the original + manifestData = deepCopyMap(ctx, manifestData, re.log) + + // Render all template strings in the manifest + renderedData, err := renderManifestTemplates(manifestData, execCtx.Params) + if err != nil { + return nil, fmt.Errorf("manifest[%d] (%s): failed to render templates: %w", i, nm.Name, err) + } + + // Convert to unstructured + obj := &unstructured.Unstructured{Object: renderedData} + + // Validate manifest + if err := validateManifest(obj); err != nil { + return nil, fmt.Errorf("manifest[%d] (%s): %w", i, nm.Name, err) + } + + results = append(results, obj) + } + + return results, nil +} + // validateManifest validates a Kubernetes manifest has all required fields and annotations func validateManifest(obj *unstructured.Unstructured) error { // Validate required Kubernetes fields @@ -251,7 +396,7 @@ func validateManifest(obj *unstructured.Unstructured) error { } // Validate required generation annotation - if generation.GetGenerationFromUnstructured(obj) == 0 { + if manifest.GetGenerationFromUnstructured(obj) == 0 { return fmt.Errorf("manifest missing required annotation %q", constants.AnnotationGeneration) } @@ -260,8 +405,8 @@ func validateManifest(obj *unstructured.Unstructured) error { // discoverExistingResource discovers an existing resource using the discovery config func (re *ResourceExecutor) discoverExistingResource(ctx context.Context, gvk schema.GroupVersionKind, discovery *config_loader.DiscoveryConfig, execCtx *ExecutionContext) (*unstructured.Unstructured, error) { - if re.k8sClient == nil { - return nil, fmt.Errorf("kubernetes client not configured") + if re.client == nil { + return nil, fmt.Errorf("transport client not configured") } // Render discovery namespace template @@ -277,7 +422,7 @@ func (re 
*ResourceExecutor) discoverExistingResource(ctx context.Context, gvk sc if err != nil { return nil, fmt.Errorf("failed to render byName template: %w", err) } - return re.k8sClient.GetResource(ctx, gvk, namespace, name) + return re.client.GetResource(ctx, gvk, namespace, name) } // Discover by label selector @@ -296,14 +441,14 @@ func (re *ResourceExecutor) discoverExistingResource(ctx context.Context, gvk sc renderedLabels[renderedK] = renderedV } - labelSelector := k8s_client.BuildLabelSelector(renderedLabels) + labelSelector := manifest.BuildLabelSelector(renderedLabels) - discoveryConfig := &k8s_client.DiscoveryConfig{ + discoveryConfig := &manifest.DiscoveryConfig{ Namespace: namespace, LabelSelector: labelSelector, } - list, err := re.k8sClient.DiscoverResources(ctx, gvk, discoveryConfig) + list, err := re.client.DiscoverResources(ctx, gvk, discoveryConfig) if err != nil { return nil, err } @@ -312,95 +457,12 @@ func (re *ResourceExecutor) discoverExistingResource(ctx context.Context, gvk sc return nil, apierrors.NewNotFound(schema.GroupResource{Group: gvk.Group, Resource: gvk.Kind}, "") } - return generation.GetLatestGenerationFromList(list), nil + return manifest.GetLatestGenerationFromList(list), nil } return nil, fmt.Errorf("discovery config must specify byName or bySelectors") } -// createResource creates a new Kubernetes resource -func (re *ResourceExecutor) createResource(ctx context.Context, manifest *unstructured.Unstructured) (*unstructured.Unstructured, error) { - if re.k8sClient == nil { - return nil, fmt.Errorf("kubernetes client not configured") - } - - return re.k8sClient.CreateResource(ctx, manifest) -} - -// updateResource updates an existing Kubernetes resource -func (re *ResourceExecutor) updateResource(ctx context.Context, existing, manifest *unstructured.Unstructured) (*unstructured.Unstructured, error) { - if re.k8sClient == nil { - return nil, fmt.Errorf("kubernetes client not configured") - } - - // Preserve resourceVersion from 
existing for update - manifest.SetResourceVersion(existing.GetResourceVersion()) - manifest.SetUID(existing.GetUID()) - - return re.k8sClient.UpdateResource(ctx, manifest) -} - -// recreateResource deletes and recreates a Kubernetes resource -// It waits for the resource to be fully deleted before creating the new one -// to avoid race conditions with Kubernetes asynchronous deletion -func (re *ResourceExecutor) recreateResource(ctx context.Context, existing, manifest *unstructured.Unstructured) (*unstructured.Unstructured, error) { - if re.k8sClient == nil { - return nil, fmt.Errorf("kubernetes client not configured") - } - - gvk := existing.GroupVersionKind() - namespace := existing.GetNamespace() - name := existing.GetName() - - // Delete the existing resource - re.log.Debugf(ctx, "Deleting resource for recreation") - if err := re.k8sClient.DeleteResource(ctx, gvk, namespace, name); err != nil { - return nil, fmt.Errorf("failed to delete resource for recreation: %w", err) - } - - // Wait for the resource to be fully deleted - re.log.Debugf(ctx, "Waiting for resource deletion to complete") - if err := re.waitForDeletion(ctx, gvk, namespace, name); err != nil { - return nil, fmt.Errorf("failed waiting for resource deletion: %w", err) - } - - // Create the new resource - re.log.Debugf(ctx, "Creating new resource after deletion confirmed") - return re.k8sClient.CreateResource(ctx, manifest) -} - -// waitForDeletion polls until the resource is confirmed deleted or context times out -// Returns nil when the resource is confirmed gone (NotFound), or an error otherwise -func (re *ResourceExecutor) waitForDeletion(ctx context.Context, gvk schema.GroupVersionKind, namespace, name string) error { - const pollInterval = 100 * time.Millisecond - - ticker := time.NewTicker(pollInterval) - defer ticker.Stop() - - for { - select { - case <-ctx.Done(): - re.log.Warnf(ctx, "Context cancelled/timed out while waiting for deletion") - return fmt.Errorf("context cancelled while 
waiting for resource deletion: %w", ctx.Err()) - case <-ticker.C: - _, err := re.k8sClient.GetResource(ctx, gvk, namespace, name) - if err != nil { - // NotFound means the resource is deleted - this is success - if apierrors.IsNotFound(err) { - re.log.Debugf(ctx, "Resource deletion confirmed") - return nil - } - // Any other error is unexpected - errCtx := logger.WithErrorField(ctx, err) - re.log.Errorf(errCtx, "Error checking resource deletion status") - return fmt.Errorf("error checking deletion status: %w", err) - } - // Resource still exists, continue polling - re.log.Debugf(ctx, "Resource still exists, waiting for deletion...") - } - } -} - // convertToStringKeyMap converts map[interface{}]interface{} to map[string]interface{} func convertToStringKeyMap(m map[interface{}]interface{}) map[string]interface{} { result := make(map[string]interface{}) diff --git a/internal/executor/resource_executor_test.go b/internal/executor/resource_executor_test.go index 47d2d99..1bea19d 100644 --- a/internal/executor/resource_executor_test.go +++ b/internal/executor/resource_executor_test.go @@ -2,10 +2,17 @@ package executor import ( "context" + "errors" + "fmt" "testing" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/config_loader" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/maestro_client" + "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/constants" "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger" "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" ) func TestDeepCopyMap_BasicTypes(t *testing.T) { @@ -236,3 +243,414 @@ func TestDeepCopyMap_RealWorldContext(t *testing.T) { originalMetadata := manifest["metadata"].(map[string]interface{}) assert.Equal(t, "{{ .namespace }}", originalMetadata["name"]) } + +// ============================================================================= +// Maestro Transport Tests +// 
============================================================================= + +// createTestNamespaceManifest creates a Namespace manifest for testing +func createTestNamespaceManifest(name string, generation int64) map[string]interface{} { + return map[string]interface{}{ + "apiVersion": "v1", + "kind": "Namespace", + "metadata": map[string]interface{}{ + "name": name, + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: fmt.Sprintf("%d", generation), + }, + }, + } +} + +// createTestConfigMapManifest creates a ConfigMap manifest for testing +func createTestConfigMapManifest(name, namespace string, generation int64) map[string]interface{} { + return map[string]interface{}{ + "apiVersion": "v1", + "kind": "ConfigMap", + "metadata": map[string]interface{}{ + "name": name, + "namespace": namespace, + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: fmt.Sprintf("%d", generation), + }, + }, + "data": map[string]interface{}{ + "key": "value", + }, + } +} + +func TestBuildManifestsMaestro(t *testing.T) { + t.Run("builds multiple manifests with template rendering", func(t *testing.T) { + re := &ResourceExecutor{ + log: logger.NewTestLogger(), + } + + resource := config_loader.Resource{ + Name: "testResource", + Manifests: []config_loader.NamedManifest{ + { + Name: "ns", + Manifest: map[string]interface{}{ + "apiVersion": "v1", + "kind": "Namespace", + "metadata": map[string]interface{}{ + "name": "{{ .namespace }}", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "1", + }, + }, + }, + }, + { + Name: "cm", + Manifest: map[string]interface{}{ + "apiVersion": "v1", + "kind": "ConfigMap", + "metadata": map[string]interface{}{ + "name": "{{ .configName }}", + "namespace": "{{ .namespace }}", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "1", + }, + }, + "data": map[string]interface{}{ + "clusterId": "{{ .clusterId }}", + }, + }, + }, + }, + } + + execCtx := 
&ExecutionContext{ + Params: map[string]interface{}{ + "namespace": "test-ns", + "configName": "my-config", + "clusterId": "cluster-123", + }, + } + + manifests, err := re.buildManifestsMaestro(context.Background(), resource, execCtx) + require.NoError(t, err) + require.Len(t, manifests, 2) + + // Verify first manifest (Namespace) + assert.Equal(t, "Namespace", manifests[0].GetKind()) + assert.Equal(t, "test-ns", manifests[0].GetName()) + + // Verify second manifest (ConfigMap) + assert.Equal(t, "ConfigMap", manifests[1].GetKind()) + assert.Equal(t, "my-config", manifests[1].GetName()) + assert.Equal(t, "test-ns", manifests[1].GetNamespace()) + + // Verify data was rendered + data, found := manifests[1].UnstructuredContent()["data"].(map[string]interface{}) + require.True(t, found, "ConfigMap should have data") + assert.Equal(t, "cluster-123", data["clusterId"]) + }) + + t.Run("returns error when manifest has no content", func(t *testing.T) { + re := &ResourceExecutor{ + log: logger.NewTestLogger(), + } + + resource := config_loader.Resource{ + Name: "testResource", + Manifests: []config_loader.NamedManifest{ + { + Name: "empty", + Manifest: nil, // No content + }, + }, + } + + execCtx := &ExecutionContext{ + Params: map[string]interface{}{}, + } + + manifests, err := re.buildManifestsMaestro(context.Background(), resource, execCtx) + require.Error(t, err) + assert.Contains(t, err.Error(), "has no content") + assert.Nil(t, manifests) + }) + + t.Run("handles manifestRefContent over manifest", func(t *testing.T) { + re := &ResourceExecutor{ + log: logger.NewTestLogger(), + } + + resource := config_loader.Resource{ + Name: "testResource", + Manifests: []config_loader.NamedManifest{ + { + Name: "fromRef", + Manifest: map[string]interface{}{ + "apiVersion": "v1", + "kind": "Secret", + }, + // ManifestRefContent takes precedence + ManifestRefContent: map[string]interface{}{ + "apiVersion": "v1", + "kind": "ConfigMap", + "metadata": map[string]interface{}{ + "name": 
"from-ref", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "1", + }, + }, + }, + }, + }, + } + + execCtx := &ExecutionContext{ + Params: map[string]interface{}{}, + } + + manifests, err := re.buildManifestsMaestro(context.Background(), resource, execCtx) + require.NoError(t, err) + require.Len(t, manifests, 1) + + // Should use ManifestRefContent (ConfigMap), not Manifest (Secret) + assert.Equal(t, "ConfigMap", manifests[0].GetKind()) + assert.Equal(t, "from-ref", manifests[0].GetName()) + }) + + t.Run("validates each manifest", func(t *testing.T) { + re := &ResourceExecutor{ + log: logger.NewTestLogger(), + } + + resource := config_loader.Resource{ + Name: "testResource", + Manifests: []config_loader.NamedManifest{ + { + Name: "valid", + Manifest: map[string]interface{}{ + "apiVersion": "v1", + "kind": "Namespace", + "metadata": map[string]interface{}{ + "name": "valid-ns", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "1", + }, + }, + }, + }, + { + Name: "invalid", + Manifest: map[string]interface{}{ + "apiVersion": "v1", + "kind": "ConfigMap", + // Missing metadata.name - invalid! 
+ "metadata": map[string]interface{}{ + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "1", + }, + }, + }, + }, + }, + } + + execCtx := &ExecutionContext{ + Params: map[string]interface{}{}, + } + + manifests, err := re.buildManifestsMaestro(context.Background(), resource, execCtx) + require.Error(t, err) + assert.Contains(t, err.Error(), "invalid") + assert.Contains(t, err.Error(), "missing metadata.name") + assert.Nil(t, manifests) + }) +} + +func TestExecuteResourceMaestro(t *testing.T) { + t.Run("applies ManifestWork via transport client", func(t *testing.T) { + mockClient := maestro_client.NewMockMaestroClient() + + re := &ResourceExecutor{ + client: mockClient, + log: logger.NewTestLogger(), + } + + resource := config_loader.Resource{ + Name: "testResource", + Transport: &config_loader.TransportConfig{ + Client: config_loader.TransportClientMaestro, + Maestro: &config_loader.MaestroTransportConfig{ + TargetCluster: "{{ .targetCluster }}", + }, + }, + Manifests: []config_loader.NamedManifest{ + {Name: "ns", Manifest: createTestNamespaceManifest("test-ns", 1)}, + {Name: "cm", Manifest: createTestConfigMapManifest("test-cm", "test-ns", 1)}, + }, + } + + execCtx := &ExecutionContext{ + Params: map[string]interface{}{ + "targetCluster": "my-cluster-123", + }, + Resources: make(map[string]*unstructured.Unstructured), + Adapter: AdapterMetadata{}, + } + + result, err := re.executeResource(context.Background(), resource, execCtx) + require.NoError(t, err) + + assert.Equal(t, StatusSuccess, result.Status) + assert.Equal(t, "testResource", result.Name) + + // Verify maestro client was called + appliedWorks := mockClient.GetAppliedWorks() + require.Len(t, appliedWorks, 1) + assert.Len(t, appliedWorks[0].Spec.Workload.Manifests, 2) + + // Verify correct consumer was used + consumers := mockClient.GetApplyConsumers() + require.Len(t, consumers, 1) + assert.Equal(t, "my-cluster-123", consumers[0]) + }) + + t.Run("stores manifests in execution 
context by compound name", func(t *testing.T) { + mockClient := maestro_client.NewMockMaestroClient() + + re := &ResourceExecutor{ + client: mockClient, + log: logger.NewTestLogger(), + } + + resource := config_loader.Resource{ + Name: "clusterSetup", + Transport: &config_loader.TransportConfig{ + Client: config_loader.TransportClientMaestro, + Maestro: &config_loader.MaestroTransportConfig{ + TargetCluster: "test-cluster", + }, + }, + Manifests: []config_loader.NamedManifest{ + {Name: "namespace", Manifest: createTestNamespaceManifest("ns1", 1)}, + {Name: "config", Manifest: createTestConfigMapManifest("cm1", "ns1", 1)}, + }, + } + + execCtx := &ExecutionContext{ + Params: map[string]interface{}{}, + Resources: make(map[string]*unstructured.Unstructured), + Adapter: AdapterMetadata{}, + } + + _, err := re.executeResource(context.Background(), resource, execCtx) + require.NoError(t, err) + + // Verify manifests stored by compound name (resource.manifestName) + assert.NotNil(t, execCtx.Resources["clusterSetup.namespace"], "First manifest should be stored as clusterSetup.namespace") + assert.NotNil(t, execCtx.Resources["clusterSetup.config"], "Second manifest should be stored as clusterSetup.config") + + // First manifest should also be stored under resource name for convenience + assert.NotNil(t, execCtx.Resources["clusterSetup"], "First manifest should also be stored as clusterSetup") + assert.Equal(t, execCtx.Resources["clusterSetup.namespace"], execCtx.Resources["clusterSetup"]) + }) + + t.Run("returns error when transport client not configured", func(t *testing.T) { + re := &ResourceExecutor{ + client: nil, // Not configured + log: logger.NewTestLogger(), + } + + resource := config_loader.Resource{ + Name: "testResource", + Transport: &config_loader.TransportConfig{ + Client: config_loader.TransportClientMaestro, + Maestro: &config_loader.MaestroTransportConfig{ + TargetCluster: "test-cluster", + }, + }, + Manifests: []config_loader.NamedManifest{ + {Name: "ns", 
Manifest: createTestNamespaceManifest("test-ns", 1)}, + }, + } + + execCtx := &ExecutionContext{ + Params: map[string]interface{}{}, + Resources: make(map[string]*unstructured.Unstructured), + Adapter: AdapterMetadata{}, + } + + result, err := re.executeResource(context.Background(), resource, execCtx) + require.Error(t, err) + assert.Contains(t, err.Error(), "transport client not configured") + assert.Equal(t, StatusFailed, result.Status) + }) + + t.Run("returns error when maestro config missing", func(t *testing.T) { + mockClient := maestro_client.NewMockMaestroClient() + + re := &ResourceExecutor{ + client: mockClient, + log: logger.NewTestLogger(), + } + + resource := config_loader.Resource{ + Name: "testResource", + Transport: &config_loader.TransportConfig{ + Client: config_loader.TransportClientMaestro, + Maestro: nil, // Missing config + }, + Manifests: []config_loader.NamedManifest{ + {Name: "ns", Manifest: createTestNamespaceManifest("test-ns", 1)}, + }, + } + + execCtx := &ExecutionContext{ + Params: map[string]interface{}{}, + Resources: make(map[string]*unstructured.Unstructured), + Adapter: AdapterMetadata{}, + } + + result, err := re.executeResource(context.Background(), resource, execCtx) + require.Error(t, err) + assert.Contains(t, err.Error(), "maestro transport configuration missing") + assert.Equal(t, StatusFailed, result.Status) + }) + + t.Run("returns error when maestro client returns error", func(t *testing.T) { + mockClient := maestro_client.NewMockMaestroClient() + mockClient.ApplyManifestWorkError = errors.New("connection refused") + + re := &ResourceExecutor{ + client: mockClient, + log: logger.NewTestLogger(), + } + + resource := config_loader.Resource{ + Name: "testResource", + Transport: &config_loader.TransportConfig{ + Client: config_loader.TransportClientMaestro, + Maestro: &config_loader.MaestroTransportConfig{ + TargetCluster: "test-cluster", + }, + }, + Manifests: []config_loader.NamedManifest{ + {Name: "ns", Manifest: 
createTestNamespaceManifest("test-ns", 1)}, + }, + } + + execCtx := &ExecutionContext{ + Params: map[string]interface{}{}, + Resources: make(map[string]*unstructured.Unstructured), + Adapter: AdapterMetadata{}, + } + + result, err := re.executeResource(context.Background(), resource, execCtx) + require.Error(t, err) + assert.Contains(t, err.Error(), "failed to apply ManifestWork") + assert.Equal(t, StatusFailed, result.Status) + + // Verify error was set in execution context + assert.NotNil(t, execCtx.Adapter.ExecutionError) + assert.Equal(t, "resources", execCtx.Adapter.ExecutionError.Phase) + assert.Equal(t, "testResource", execCtx.Adapter.ExecutionError.Step) + }) +} diff --git a/internal/executor/types.go b/internal/executor/types.go index 72f1f94..c1142ac 100644 --- a/internal/executor/types.go +++ b/internal/executor/types.go @@ -7,9 +7,9 @@ import ( "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/config_loader" "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/criteria" - "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/generation" "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/hyperfleet_api" - "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/k8s_client" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/manifest" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client" "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger" "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" ) @@ -60,8 +60,8 @@ type ExecutorConfig struct { Config *config_loader.Config // APIClient is the HyperFleet API client APIClient hyperfleet_api.Client - // K8sClient is the Kubernetes client - K8sClient k8s_client.K8sClient + // TransportClient is the transport client for resource operations (K8s or Maestro) + TransportClient transport_client.TransportClient // Logger is the logger instance Logger logger.Logger } @@ -134,7 +134,7 @@ type ResourceResult struct { // Status is the 
result status Status ExecutionStatus // Operation is the operation performed (create, update, recreate, skip) - Operation generation.Operation + Operation manifest.Operation // Resource is the created/updated resource (if successful) Resource *unstructured.Unstructured // OperationReason explains why this operation was performed diff --git a/internal/executor/utils.go b/internal/executor/utils.go index 05ce96f..834cab1 100644 --- a/internal/executor/utils.go +++ b/internal/executor/utils.go @@ -5,6 +5,7 @@ import ( "context" "fmt" "net/http" + "net/url" "strconv" "strings" "text/template" @@ -217,6 +218,10 @@ func ExecuteAPICall(ctx context.Context, apiCall *config_loader.APICall, execCtx // buildHyperfleetAPICallURL builds a full HyperFleet API URL when a relative path is provided. // It uses hyperfleet API client settings from execution context config. // If the URL already includes base/version templates or is absolute, it is returned as-is. +// +// NOTE: The URL may contain Go template expressions (e.g. "{{ .clusterId }}") that are +// rendered AFTER this function returns. We use string concatenation (not url.URL.String()) +// to avoid URL-encoding the template delimiters, which would prevent rendering. 
func buildHyperfleetAPICallURL(apiCallURL string, execCtx *ExecutionContext) string { if apiCallURL == "" { return apiCallURL @@ -225,22 +230,34 @@ func buildHyperfleetAPICallURL(apiCallURL string, execCtx *ExecutionContext) str return apiCallURL } - lowerURL := strings.ToLower(apiCallURL) - if strings.HasPrefix(lowerURL, "http://") || strings.HasPrefix(lowerURL, "https://") { + // If the URL is already absolute, return as-is + parsed, err := url.Parse(apiCallURL) + if err == nil && parsed.IsAbs() { return apiCallURL } - baseURL := strings.TrimRight(execCtx.Config.Spec.Clients.HyperfleetAPI.BaseURL, "/") + baseURL := execCtx.Config.Spec.Clients.HyperfleetAPI.BaseURL if baseURL == "" { return apiCallURL } + // Parse base URL to extract scheme+host, then build the path via string + // concatenation to preserve Go template expressions like {{ .clusterId }}. + base, err := url.Parse(baseURL) + if err != nil { + return apiCallURL + } + + // Reconstruct origin (scheme + host) without path encoding + origin := base.Scheme + "://" + base.Host + basePath := strings.TrimRight(base.Path, "/") + relative := strings.TrimLeft(apiCallURL, "/") if strings.HasPrefix(relative, "api/") { - return baseURL + "/" + relative + return origin + basePath + "/" + relative } - return fmt.Sprintf("%s/api/hyperfleet/%s/%s", baseURL, execCtx.Config.Spec.Clients.HyperfleetAPI.Version, relative) + return fmt.Sprintf("%s%s/api/hyperfleet/%s/%s", origin, basePath, execCtx.Config.Spec.Clients.HyperfleetAPI.Version, relative) } // ValidateAPIResponse checks if an API response is valid and successful diff --git a/internal/executor/utils_test.go b/internal/executor/utils_test.go index af2ec75..4e27b06 100644 --- a/internal/executor/utils_test.go +++ b/internal/executor/utils_test.go @@ -1025,6 +1025,214 @@ func TestBuildResourcesMap(t *testing.T) { } } +// TestBuildHyperfleetAPICallURL tests URL building for HyperFleet API calls +func TestBuildHyperfleetAPICallURL(t *testing.T) { + tests := []struct { + 
+		name     string
+		url      string
+		execCtx  *ExecutionContext
+		expected string
+	}{
+		{
+			name:     "empty URL returns empty",
+			url:      "",
+			execCtx:  &ExecutionContext{},
+			expected: "",
+		},
+		{
+			name:     "nil execCtx returns URL as-is",
+			url:      "clusters/123",
+			execCtx:  nil,
+			expected: "clusters/123",
+		},
+		{
+			name: "nil config returns URL as-is",
+			url:  "clusters/123",
+			execCtx: &ExecutionContext{
+				Config: nil,
+			},
+			expected: "clusters/123",
+		},
+		{
+			name: "absolute HTTP URL returned as-is",
+			url:  "http://other-service.example.com/api/v1/clusters",
+			execCtx: &ExecutionContext{
+				Config: &config_loader.Config{
+					Spec: config_loader.ConfigSpec{
+						Clients: config_loader.ClientsConfig{
+							HyperfleetAPI: config_loader.HyperfleetAPIConfig{
+								BaseURL: "http://hyperfleet-api:8080",
+								Version: "v1",
+							},
+						},
+					},
+				},
+			},
+			expected: "http://other-service.example.com/api/v1/clusters",
+		},
+		{
+			name: "absolute HTTPS URL returned as-is",
+			url:  "https://secure.example.com/api/resources",
+			execCtx: &ExecutionContext{
+				Config: &config_loader.Config{
+					Spec: config_loader.ConfigSpec{
+						Clients: config_loader.ClientsConfig{
+							HyperfleetAPI: config_loader.HyperfleetAPIConfig{
+								BaseURL: "http://hyperfleet-api:8080",
+								Version: "v1",
+							},
+						},
+					},
+				},
+			},
+			expected: "https://secure.example.com/api/resources",
+		},
+		{
+			name: "empty baseURL returns URL as-is",
+			url:  "clusters/123",
+			execCtx: &ExecutionContext{
+				Config: &config_loader.Config{
+					Spec: config_loader.ConfigSpec{
+						Clients: config_loader.ClientsConfig{
+							HyperfleetAPI: config_loader.HyperfleetAPIConfig{
+								BaseURL: "",
+								Version: "v1",
+							},
+						},
+					},
+				},
+			},
+			expected: "clusters/123",
+		},
+		{
+			name: "relative path gets full URL with version",
+			url:  "clusters/123",
+			execCtx: &ExecutionContext{
+				Config: &config_loader.Config{
+					Spec: config_loader.ConfigSpec{
+						Clients: config_loader.ClientsConfig{
+							HyperfleetAPI: config_loader.HyperfleetAPIConfig{
+								BaseURL: "http://hyperfleet-api:8080",
+								Version: "v1",
+							},
+						},
+					},
+				},
+			},
+			expected: "http://hyperfleet-api:8080/api/hyperfleet/v1/clusters/123",
+		},
+		{
+			name: "relative path with leading slash",
+			url:  "/clusters/123",
+			execCtx: &ExecutionContext{
+				Config: &config_loader.Config{
+					Spec: config_loader.ConfigSpec{
+						Clients: config_loader.ClientsConfig{
+							HyperfleetAPI: config_loader.HyperfleetAPIConfig{
+								BaseURL: "http://hyperfleet-api:8080",
+								Version: "v1",
+							},
+						},
+					},
+				},
+			},
+			expected: "http://hyperfleet-api:8080/api/hyperfleet/v1/clusters/123",
+		},
+		{
+			name: "path starting with api/ skips version prefix",
+			url:  "api/hyperfleet/v2/clusters/123",
+			execCtx: &ExecutionContext{
+				Config: &config_loader.Config{
+					Spec: config_loader.ConfigSpec{
+						Clients: config_loader.ClientsConfig{
+							HyperfleetAPI: config_loader.HyperfleetAPIConfig{
+								BaseURL: "http://hyperfleet-api:8080",
+								Version: "v1",
+							},
+						},
+					},
+				},
+			},
+			expected: "http://hyperfleet-api:8080/api/hyperfleet/v2/clusters/123",
+		},
+		{
+			name: "path starting with /api/ skips version prefix",
+			url:  "/api/hyperfleet/v2/clusters/123",
+			execCtx: &ExecutionContext{
+				Config: &config_loader.Config{
+					Spec: config_loader.ConfigSpec{
+						Clients: config_loader.ClientsConfig{
+							HyperfleetAPI: config_loader.HyperfleetAPIConfig{
+								BaseURL: "http://hyperfleet-api:8080",
+								Version: "v1",
+							},
+						},
+					},
+				},
+			},
+			expected: "http://hyperfleet-api:8080/api/hyperfleet/v2/clusters/123",
+		},
+		{
+			name: "baseURL with trailing slash",
+			url:  "clusters/123",
+			execCtx: &ExecutionContext{
+				Config: &config_loader.Config{
+					Spec: config_loader.ConfigSpec{
+						Clients: config_loader.ClientsConfig{
+							HyperfleetAPI: config_loader.HyperfleetAPIConfig{
+								BaseURL: "http://hyperfleet-api:8080/",
+								Version: "v1",
+							},
+						},
+					},
+				},
+			},
+			expected: "http://hyperfleet-api:8080/api/hyperfleet/v1/clusters/123",
+		},
+		{
+			name: "baseURL with existing path",
+			url:  "clusters/123",
+			execCtx: &ExecutionContext{
+				Config: &config_loader.Config{
+					Spec: config_loader.ConfigSpec{
+						Clients: config_loader.ClientsConfig{
+							HyperfleetAPI: config_loader.HyperfleetAPIConfig{
+								BaseURL: "http://gateway:8080/hyperfleet-api",
+								Version: "v1",
+							},
+						},
+					},
+				},
+			},
+			expected: "http://gateway:8080/hyperfleet-api/api/hyperfleet/v1/clusters/123",
+		},
+		{
+			name: "template expressions are preserved unencoded",
+			url:  "clusters/{{ .clusterId }}/nodepools/{{ .nodepoolId }}",
+			execCtx: &ExecutionContext{
+				Config: &config_loader.Config{
+					Spec: config_loader.ConfigSpec{
+						Clients: config_loader.ClientsConfig{
+							HyperfleetAPI: config_loader.HyperfleetAPIConfig{
+								BaseURL: "http://hyperfleet-api:8000",
+								Version: "v1",
+							},
+						},
+					},
+				},
+			},
+			expected: "http://hyperfleet-api:8000/api/hyperfleet/v1/clusters/{{ .clusterId }}/nodepools/{{ .nodepoolId }}",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result := buildHyperfleetAPICallURL(tt.url, tt.execCtx)
+			assert.Equal(t, tt.expected, result)
+		})
+	}
+}
+
 // TestGetResourceAsMap tests resource to map conversion
 func TestGetResourceAsMap(t *testing.T) {
 	tests := []struct {
diff --git a/internal/k8s_client/apply.go b/internal/k8s_client/apply.go
new file mode 100644
index 0000000..8158c92
--- /dev/null
+++ b/internal/k8s_client/apply.go
@@ -0,0 +1,131 @@
+package k8s_client
+
+import (
+	"context"
+	"fmt"
+	"strings"
+	"time"
+
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/manifest"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger"
+	apierrors "k8s.io/apimachinery/pkg/api/errors"
+	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+	"k8s.io/apimachinery/pkg/runtime/schema"
+)
+
+// ApplyResource applies a single resource, handling generation comparison and create/update/recreate logic.
+func (c *Client) ApplyResource(ctx context.Context, resource transport_client.ResourceToApply, opts transport_client.ApplyOptions) (*transport_client.ApplyResult, error) {
+	m := resource.Manifest
+	existing := resource.ExistingResource
+
+	manifestGen := manifest.GetGenerationFromUnstructured(m)
+	var existingGen int64
+	if existing != nil {
+		existingGen = manifest.GetGenerationFromUnstructured(existing)
+	}
+
+	decision := manifest.CompareGenerations(manifestGen, existingGen, existing != nil)
+
+	result := &transport_client.ApplyResult{
+		Operation: string(decision.Operation),
+		Reason:    decision.Reason,
+	}
+
+	// Handle recreateOnChange override
+	if decision.Operation == manifest.OperationUpdate && opts.RecreateOnChange {
+		result.Operation = string(manifest.OperationRecreate)
+		result.Reason = fmt.Sprintf("%s, recreateOnChange=true", decision.Reason)
+	}
+
+	c.log.Infof(ctx, "Resource applying: operation=%s reason=%s",
+		strings.ToUpper(result.Operation), result.Reason)
+
+	var err error
+	switch manifest.Operation(result.Operation) {
+	case manifest.OperationCreate:
+		result.Resource, err = c.CreateResource(ctx, m)
+	case manifest.OperationUpdate:
+		m.SetResourceVersion(existing.GetResourceVersion())
+		m.SetUID(existing.GetUID())
+		result.Resource, err = c.UpdateResource(ctx, m)
+	case manifest.OperationRecreate:
+		result.Resource, err = c.recreateResource(ctx, existing, m)
+	case manifest.OperationSkip:
+		result.Resource = existing
+	}
+
+	if err != nil {
+		result.Error = err
+		return result, err
+	}
+
+	return result, nil
+}
+
+// ApplyResources applies a set of resources according to the given options.
+func (c *Client) ApplyResources(ctx context.Context, resources []transport_client.ResourceToApply, opts transport_client.ApplyOptions) (*transport_client.ApplyResourcesResult, error) {
+	results := &transport_client.ApplyResourcesResult{
+		Results: make([]transport_client.ApplyResult, 0, len(resources)),
+	}
+
+	for _, res := range resources {
+		result, err := c.ApplyResource(ctx, res, opts)
+		if result != nil {
+			results.Results = append(results.Results, *result)
+		}
+		if err != nil {
+			return results, err
+		}
+	}
+
+	return results, nil
+}
+
+// recreateResource deletes and recreates a Kubernetes resource.
+func (c *Client) recreateResource(ctx context.Context, existing, newManifest *unstructured.Unstructured) (*unstructured.Unstructured, error) {
+	gvk := existing.GroupVersionKind()
+	namespace := existing.GetNamespace()
+	name := existing.GetName()
+
+	c.log.Debugf(ctx, "Deleting resource for recreation")
+	if err := c.DeleteResource(ctx, gvk, namespace, name); err != nil {
+		return nil, fmt.Errorf("failed to delete resource for recreation: %w", err)
+	}
+
+	c.log.Debugf(ctx, "Waiting for resource deletion to complete")
+	if err := c.waitForDeletion(ctx, gvk, namespace, name); err != nil {
+		return nil, fmt.Errorf("failed waiting for resource deletion: %w", err)
+	}
+
+	c.log.Debugf(ctx, "Creating new resource after deletion confirmed")
+	return c.CreateResource(ctx, newManifest)
+}
+
+// waitForDeletion polls until the resource is confirmed deleted or context times out.
+func (c *Client) waitForDeletion(ctx context.Context, gvk schema.GroupVersionKind, namespace, name string) error {
+	const pollInterval = 100 * time.Millisecond
+
+	ticker := time.NewTicker(pollInterval)
+	defer ticker.Stop()
+
+	for {
+		select {
+		case <-ctx.Done():
+			c.log.Warnf(ctx, "Context cancelled/timed out while waiting for deletion")
+			return fmt.Errorf("context cancelled while waiting for resource deletion: %w", ctx.Err())
+		case <-ticker.C:
+			_, err := c.GetResource(ctx, gvk, namespace, name)
+			if err != nil {
+				if apierrors.IsNotFound(err) {
+					c.log.Debugf(ctx, "Resource deletion confirmed")
+					return nil
+				}
+				errCtx := logger.WithErrorField(ctx, err)
+				c.log.Errorf(errCtx, "Error checking resource deletion status")
+				return fmt.Errorf("error checking deletion status: %w", err)
+			}
+			c.log.Debugf(ctx, "Resource still exists, waiting for deletion...")
+		}
+	}
+}
diff --git a/internal/k8s_client/data_extractor.go b/internal/k8s_client/data_extractor.go
deleted file mode 100644
index 2c5f1ef..0000000
--- a/internal/k8s_client/data_extractor.go
+++ /dev/null
@@ -1,113 +0,0 @@
-package k8s_client
-
-import (
-	"context"
-	"encoding/base64"
-	"fmt"
-	"strings"
-
-	apperrors "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/errors"
-	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
-	"k8s.io/apimachinery/pkg/runtime/schema"
-)
-
-// ResourcePath represents a parsed Kubernetes resource path
-type ResourcePath struct {
-	Namespace    string
-	ResourceName string
-	Key          string
-}
-
-// ParseResourcePath parses a path in the format: namespace.name.key
-func ParseResourcePath(path, resourceType string) (*ResourcePath, error) {
-	parts := strings.Split(path, ".")
-	if len(parts) < 3 {
-		return nil, apperrors.NewK8sInvalidPathError(resourceType, path, "namespace.name.key")
-	}
-
-	return &ResourcePath{
-		Namespace:    parts[0],
-		ResourceName: parts[1],
-		Key:          strings.Join(parts[2:], "."), // Allow dots in key name
-	}, nil
-}
-
-// GetResourceData retrieves data from a Kubernetes resource (Secret or ConfigMap)
-func (c *Client) GetResourceData(ctx context.Context, gvk schema.GroupVersionKind, namespace, name, resourceType string) (map[string]interface{}, error) {
-	resource, err := c.GetResource(ctx, gvk, namespace, name)
-	if err != nil {
-		return nil, apperrors.NewK8sResourceDataError(resourceType, namespace, name, "failed to get resource", err)
-	}
-
-	data, found, err := unstructured.NestedMap(resource.Object, "data")
-	if err != nil {
-		return nil, apperrors.NewK8sResourceDataError(resourceType, namespace, name, "failed to access data field", err)
-	}
-	if !found {
-		return nil, apperrors.NewK8sResourceDataError(resourceType, namespace, name, "no data field found", nil)
-	}
-
-	return data, nil
-}
-
-// ExtractFromSecret extracts a value from a Kubernetes Secret
-// Format: namespace.name.key (namespace is required)
-func (c *Client) ExtractFromSecret(ctx context.Context, path string) (string, error) {
-	resourcePath, err := ParseResourcePath(path, "secret")
-	if err != nil {
-		return "", err
-	}
-
-	secretGVK := schema.GroupVersionKind{Group: "", Version: "v1", Kind: "Secret"}
-	data, err := c.GetResourceData(ctx, secretGVK, resourcePath.Namespace, resourcePath.ResourceName, "Secret")
-	if err != nil {
-		return "", err
-	}
-
-	encodedValue, ok := data[resourcePath.Key]
-	if !ok {
-		return "", apperrors.NewK8sResourceKeyNotFoundError("Secret", resourcePath.Namespace, resourcePath.ResourceName, resourcePath.Key)
-	}
-
-	encodedStr, ok := encodedValue.(string)
-	if !ok {
-		return "", apperrors.NewK8sResourceDataError("Secret", resourcePath.Namespace, resourcePath.ResourceName,
-			fmt.Sprintf("data for key '%s' is not a string", resourcePath.Key), nil)
-	}
-
-	decodedBytes, err := base64.StdEncoding.DecodeString(encodedStr)
-	if err != nil {
-		return "", apperrors.NewK8sResourceDataError("Secret", resourcePath.Namespace, resourcePath.ResourceName,
-			fmt.Sprintf("failed to decode data for key '%s'", resourcePath.Key), err)
-	}
-
-	return string(decodedBytes), nil
-}
-
-// ExtractFromConfigMap extracts a value from a Kubernetes ConfigMap
-// Format: namespace.name.key (namespace is required)
-func (c *Client) ExtractFromConfigMap(ctx context.Context, path string) (string, error) {
-	resourcePath, err := ParseResourcePath(path, "configmap")
-	if err != nil {
-		return "", err
-	}
-
-	configMapGVK := schema.GroupVersionKind{Group: "", Version: "v1", Kind: "ConfigMap"}
-	data, err := c.GetResourceData(ctx, configMapGVK, resourcePath.Namespace, resourcePath.ResourceName, "ConfigMap")
-	if err != nil {
-		return "", err
-	}
-
-	value, ok := data[resourcePath.Key]
-	if !ok {
-		return "", apperrors.NewK8sResourceKeyNotFoundError("ConfigMap", resourcePath.Namespace, resourcePath.ResourceName, resourcePath.Key)
-	}
-
-	valueStr, ok := value.(string)
-	if !ok {
-		return "", apperrors.NewK8sResourceDataError("ConfigMap", resourcePath.Namespace, resourcePath.ResourceName,
-			fmt.Sprintf("data for key '%s' is not a string", resourcePath.Key), nil)
-	}
-
-	return valueStr, nil
-}
diff --git a/internal/k8s_client/discovery.go b/internal/k8s_client/discovery.go
index 00ff9cd..5a99d48 100644
--- a/internal/k8s_client/discovery.go
+++ b/internal/k8s_client/discovery.go
@@ -2,77 +2,23 @@ package k8s_client
 
 import (
 	"context"
-	"sort"
-	"strings"
 
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/manifest"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
 	"k8s.io/apimachinery/pkg/runtime/schema"
)
 
-// Discovery defines the interface for resource discovery configuration.
-// Any type implementing this interface can be used with Client.DiscoverResources().
-type Discovery interface {
-	// GetNamespace returns the namespace to search in.
-	// Empty string means cluster-scoped or all namespaces.
-	GetNamespace() string
+// Discovery is an alias for the transport_client.Discovery interface.
+type Discovery = transport_client.Discovery
 
-	// GetName returns the resource name for single-resource discovery.
-	// Empty string means use selector-based discovery.
-	GetName() string
-
-	// GetLabelSelector returns the label selector string (e.g., "app=myapp,env=prod").
-	// Empty string means no label filtering.
-	GetLabelSelector() string
-
-	// IsSingleResource returns true if discovering by name (single resource).
-	IsSingleResource() bool
-}
-
-// DiscoveryConfig is the default implementation of the Discovery interface.
-type DiscoveryConfig struct {
-	// Namespace to search in (empty for cluster-scoped or all namespaces)
-	Namespace string
-
-	// ByName specifies the resource name for single-resource discovery.
-	// If set, GetResource is used instead of ListResources.
-	ByName string
-
-	// LabelSelector is the label selector string (e.g., "app=myapp,env=prod")
-	LabelSelector string
-}
-
-// GetNamespace implements Discovery.GetNamespace
-func (d *DiscoveryConfig) GetNamespace() string {
-	return d.Namespace
-}
-
-// GetName implements Discovery.GetName
-func (d *DiscoveryConfig) GetName() string {
-	return d.ByName
-}
-
-// GetLabelSelector implements Discovery.GetLabelSelector
-func (d *DiscoveryConfig) GetLabelSelector() string {
-	return d.LabelSelector
-}
-
-// IsSingleResource implements Discovery.IsSingleResource
-func (d *DiscoveryConfig) IsSingleResource() bool {
-	return d.ByName != ""
-}
+// DiscoveryConfig is an alias for manifest.DiscoveryConfig.
+type DiscoveryConfig = manifest.DiscoveryConfig
 
 // DiscoverResources discovers Kubernetes resources based on the Discovery configuration.
 //
 // If Discovery.IsSingleResource() is true, it fetches a single resource by name.
 // Otherwise, it lists resources matching the label selector.
-//
-// Example:
-//
-//	discovery := &k8s_client.DiscoveryConfig{
-//		Namespace:     "default",
-//		LabelSelector: "app=myapp",
-//	}
-//	list, err := client.DiscoverResources(ctx, gvk, discovery)
 func (c *Client) DiscoverResources(ctx context.Context, gvk schema.GroupVersionKind, discovery Discovery) (*unstructured.UnstructuredList, error) {
 	list := &unstructured.UnstructuredList{}
 	list.SetGroupVersionKind(gvk)
@@ -81,7 +27,6 @@ func (c *Client) DiscoverResources(ctx context.Context, gvk schema.GroupVersionK
 	}
 
 	if discovery.IsSingleResource() {
-		// Single resource by name
 		c.log.Infof(ctx, "Discovering single resource: %s/%s (namespace: %s)",
 			gvk.Kind, discovery.GetName(), discovery.GetNamespace())
 
@@ -90,12 +35,10 @@ func (c *Client) DiscoverResources(ctx context.Context, gvk schema.GroupVersionK
 			return list, err
 		}
 
-		// Wrap single resource in a list for consistent return type
 		list.Items = []unstructured.Unstructured{*obj}
 		return list, nil
 	}
 
-	// List resources by selector
 	return c.ListResources(ctx, gvk, discovery.GetNamespace(), discovery.GetLabelSelector())
 }
 
@@ -103,20 +46,5 @@ func (c *Client) DiscoverResources(ctx context.Context, gvk schema.GroupVersionK
 // Keys are sorted alphabetically for deterministic output.
 // Example: {"env": "prod", "app": "myapp"} -> "app=myapp,env=prod"
 func BuildLabelSelector(labels map[string]string) string {
-	if len(labels) == 0 {
-		return ""
-	}
-
-	// Sort keys for deterministic output
-	keys := make([]string, 0, len(labels))
-	for k := range labels {
-		keys = append(keys, k)
-	}
-	sort.Strings(keys)
-
-	pairs := make([]string, 0, len(labels))
-	for _, k := range keys {
-		pairs = append(pairs, k+"="+labels[k])
-	}
-	return strings.Join(pairs, ",")
+	return manifest.BuildLabelSelector(labels)
 }
diff --git a/internal/k8s_client/interface.go b/internal/k8s_client/interface.go
index a8d0a04..3cc12b7 100644
--- a/internal/k8s_client/interface.go
+++ b/internal/k8s_client/interface.go
@@ -3,19 +3,22 @@ package k8s_client
 
 import (
 	"context"
 
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
 	"k8s.io/apimachinery/pkg/runtime/schema"
 )
 
 // K8sClient defines the interface for Kubernetes operations.
-// This interface allows for easy mocking in unit tests without requiring
-// a real Kubernetes cluster or DryRun mode.
+// It embeds TransportClient for the standard transport abstraction layer,
+// and adds Kubernetes-specific CRUD operations.
 type K8sClient interface {
-	// Resource CRUD operations
+	// TransportClient provides ApplyResources, GetResource, and DiscoverResources
+	transport_client.TransportClient
 
-	// GetResource retrieves a single Kubernetes resource by GVK, namespace, and name.
-	// Returns the resource or an error if not found.
-	GetResource(ctx context.Context, gvk schema.GroupVersionKind, namespace, name string) (*unstructured.Unstructured, error)
+	// Kubernetes-specific resource CRUD operations
+
+	// ApplyResource applies a single resource with generation-based comparison.
+	ApplyResource(ctx context.Context, resource transport_client.ResourceToApply, opts transport_client.ApplyOptions) (*transport_client.ApplyResult, error)
 
 	// CreateResource creates a new Kubernetes resource.
 	// Returns the created resource with server-generated fields populated.
@@ -27,24 +30,10 @@ type K8sClient interface {
 
 	// DeleteResource deletes a Kubernetes resource by GVK, namespace, and name.
 	DeleteResource(ctx context.Context, gvk schema.GroupVersionKind, namespace, name string) error
-
-	// Discovery operations
-
-	// DiscoverResources discovers Kubernetes resources based on the Discovery configuration.
-	// If Discovery.IsSingleResource() is true, it fetches a single resource by name.
-	// Otherwise, it lists resources matching the label selector.
-	DiscoverResources(ctx context.Context, gvk schema.GroupVersionKind, discovery Discovery) (*unstructured.UnstructuredList, error)
-
-	// Data extraction operations
-
-	// ExtractFromSecret extracts a value from a Kubernetes Secret.
-	// Format: namespace.name.key (namespace is required)
-	ExtractFromSecret(ctx context.Context, path string) (string, error)
-
-	// ExtractFromConfigMap extracts a value from a Kubernetes ConfigMap.
-	// Format: namespace.name.key (namespace is required)
-	ExtractFromConfigMap(ctx context.Context, path string) (string, error)
 }
 
 // Ensure Client implements K8sClient interface
 var _ K8sClient = (*Client)(nil)
+
+// Ensure Client implements TransportClient interface
+var _ transport_client.TransportClient = (*Client)(nil)
diff --git a/internal/k8s_client/mock.go b/internal/k8s_client/mock.go
index f148b2a..0ff01d8 100644
--- a/internal/k8s_client/mock.go
+++ b/internal/k8s_client/mock.go
@@ -3,6 +3,8 @@ package k8s_client
 import (
 	"context"
 
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/manifest"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client"
 	apierrors "k8s.io/apimachinery/pkg/api/errors"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
 	"k8s.io/apimachinery/pkg/runtime/schema"
@@ -24,10 +26,10 @@ type MockK8sClient struct {
 	DeleteResourceError error
 	DiscoverResult      *unstructured.UnstructuredList
 	DiscoverError       error
-	ExtractSecretResult string
-	ExtractSecretError  error
-	ExtractConfigResult string
-	ExtractConfigError  error
+	ApplyResult          *transport_client.ApplyResult
+	ApplyError           error
+	ApplyResourcesResult *transport_client.ApplyResourcesResult
+	ApplyResourcesError  error
 }
 
 // NewMockK8sClient creates a new mock K8s client for testing
@@ -38,22 +40,17 @@ func NewMockK8sClient() *MockK8sClient {
 }
 
 // GetResource implements K8sClient.GetResource
-// Returns a NotFound error when the resource doesn't exist, matching real K8s client behavior.
 func (m *MockK8sClient) GetResource(ctx context.Context, gvk schema.GroupVersionKind, namespace, name string) (*unstructured.Unstructured, error) {
-	// Check explicit error override first
 	if m.GetResourceError != nil {
 		return nil, m.GetResourceError
 	}
-	// Check explicit result override
 	if m.GetResourceResult != nil {
 		return m.GetResourceResult, nil
 	}
-	// Check stored resources
 	key := namespace + "/" + name
 	if res, ok := m.Resources[key]; ok {
 		return res, nil
 	}
-	// Resource not found - return proper K8s NotFound error (matches real client behavior)
 	gr := schema.GroupResource{Group: gvk.Group, Resource: gvk.Kind + "s"}
 	return nil, apierrors.NewNotFound(gr, name)
 }
@@ -66,7 +63,6 @@ func (m *MockK8sClient) CreateResource(ctx context.Context, obj *unstructured.Un
 	if m.CreateResourceResult != nil {
 		return m.CreateResourceResult, nil
 	}
-	// Store the resource
 	key := obj.GetNamespace() + "/" + obj.GetName()
 	m.Resources[key] = obj.DeepCopy()
 	return obj, nil
@@ -80,7 +76,6 @@ func (m *MockK8sClient) UpdateResource(ctx context.Context, obj *unstructured.Un
 	if m.UpdateResourceResult != nil {
 		return m.UpdateResourceResult, nil
 	}
-	// Store the resource
 	key := obj.GetNamespace() + "/" + obj.GetName()
 	m.Resources[key] = obj.DeepCopy()
 	return obj, nil
@@ -91,7 +86,6 @@ func (m *MockK8sClient) DeleteResource(ctx context.Context, gvk schema.GroupVers
 	if m.DeleteResourceError != nil {
 		return m.DeleteResourceError
 	}
-	// Remove from stored resources
 	key := namespace + "/" + name
 	delete(m.Resources, key)
 	return nil
@@ -108,20 +102,78 @@ func (m *MockK8sClient) DiscoverResources(ctx context.Context, gvk schema.GroupV
 	return &unstructured.UnstructuredList{}, nil
 }
 
-// ExtractFromSecret implements K8sClient.ExtractFromSecret
-func (m *MockK8sClient) ExtractFromSecret(ctx context.Context, path string) (string, error) {
-	if m.ExtractSecretError != nil {
-		return "", m.ExtractSecretError
+// ApplyResource implements K8sClient.ApplyResource
+func (m *MockK8sClient) ApplyResource(ctx context.Context, resource transport_client.ResourceToApply, opts transport_client.ApplyOptions) (*transport_client.ApplyResult, error) {
+	if m.ApplyError != nil {
+		return nil, m.ApplyError
 	}
-	return m.ExtractSecretResult, nil
+	if m.ApplyResult != nil {
+		return m.ApplyResult, nil
+	}
+
+	// Default behavior: determine operation from generation comparison
+	existing := resource.ExistingResource
+	newManifest := resource.Manifest
+
+	manifestGen := manifest.GetGenerationFromUnstructured(newManifest)
+	var existingGen int64
+	if existing != nil {
+		existingGen = manifest.GetGenerationFromUnstructured(existing)
+	}
+
+	decision := manifest.CompareGenerations(manifestGen, existingGen, existing != nil)
+	operation := string(decision.Operation)
+	if decision.Operation == manifest.OperationUpdate && opts.RecreateOnChange {
+		operation = string(manifest.OperationRecreate)
+	}
+
+	switch manifest.Operation(operation) {
+	case manifest.OperationCreate:
+		obj, err := m.CreateResource(ctx, newManifest)
+		return &transport_client.ApplyResult{Operation: operation, Reason: decision.Reason, Resource: obj, Error: err}, err
+	case manifest.OperationUpdate:
+		if existing != nil {
+			newManifest.SetResourceVersion(existing.GetResourceVersion())
+			newManifest.SetUID(existing.GetUID())
+		}
+		obj, err := m.UpdateResource(ctx, newManifest)
+		return &transport_client.ApplyResult{Operation: operation, Reason: decision.Reason, Resource: obj, Error: err}, err
+	case manifest.OperationRecreate:
+		if existing != nil {
+			gvk := existing.GroupVersionKind()
+			_ = m.DeleteResource(ctx, gvk, existing.GetNamespace(), existing.GetName()) //nolint:errcheck // mock: best-effort delete before recreate
+		}
+		obj, err := m.CreateResource(ctx, newManifest)
+		return &transport_client.ApplyResult{Operation: operation, Reason: decision.Reason, Resource: obj, Error: err}, err
+	case manifest.OperationSkip:
+		return &transport_client.ApplyResult{Operation: operation, Reason: decision.Reason, Resource: existing}, nil
+	}
+
+	return &transport_client.ApplyResult{Operation: operation, Reason: decision.Reason, Resource: newManifest}, nil
 }
 
-// ExtractFromConfigMap implements K8sClient.ExtractFromConfigMap
-func (m *MockK8sClient) ExtractFromConfigMap(ctx context.Context, path string) (string, error) {
-	if m.ExtractConfigError != nil {
-		return "", m.ExtractConfigError
+// ApplyResources implements TransportClient.ApplyResources
+func (m *MockK8sClient) ApplyResources(ctx context.Context, resources []transport_client.ResourceToApply, opts transport_client.ApplyOptions) (*transport_client.ApplyResourcesResult, error) {
+	if m.ApplyResourcesError != nil {
+		return nil, m.ApplyResourcesError
+	}
+	if m.ApplyResourcesResult != nil {
+		return m.ApplyResourcesResult, nil
+	}
+
+	results := &transport_client.ApplyResourcesResult{
+		Results: make([]transport_client.ApplyResult, 0, len(resources)),
+	}
+	for _, res := range resources {
+		result, err := m.ApplyResource(ctx, res, opts)
+		if result != nil {
+			results.Results = append(results.Results, *result)
+		}
+		if err != nil {
+			return results, err
+		}
 	}
-	return m.ExtractConfigResult, nil
+	return results, nil
 }
 
 // Ensure MockK8sClient implements K8sClient
diff --git a/internal/maestro_client/client.go b/internal/maestro_client/client.go
index 4c665a4..faf8e90 100644
--- a/internal/maestro_client/client.go
+++ b/internal/maestro_client/client.go
@@ -11,12 +11,21 @@ import (
 	"strings"
 	"time"
 
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/manifest"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/constants"
 	apperrors "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/errors"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/utils"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/version"
 	"github.com/openshift-online/maestro/pkg/api/openapi"
 	"github.com/openshift-online/maestro/pkg/client/cloudevents/grpcsource"
+	apierrors "k8s.io/apimachinery/pkg/api/errors"
+	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/runtime/schema"
 	workv1client "open-cluster-management.io/api/client/work/clientset/versioned/typed/work/v1"
+	workv1 "open-cluster-management.io/api/work/v1"
 	"open-cluster-management.io/sdk-go/pkg/cloudevents/generic/options/cert"
 	"open-cluster-management.io/sdk-go/pkg/cloudevents/generic/options/grpc"
 )
@@ -396,3 +405,236 @@ func (c *Client) WorkClient() workv1client.WorkV1Interface {
 func (c *Client) SourceID() string {
 	return c.config.SourceID
 }
+
+// --- TransportClient implementation ---
+// The following methods implement transport_client.TransportClient,
+// enabling the maestro_client to be used as a transport backend.
+
+// ApplyResources bundles manifests into a ManifestWork and applies it to a target cluster.
+// It extracts Maestro-specific settings from opts.TransportConfig:
+//   - "targetCluster" (string, required): the consumer name for Maestro
+//   - "manifestWorkName" (string, optional): the ManifestWork name
+//   - "manifestWorkRefContent" (map[string]interface{}, optional): labels, annotations, deleteOption settings
+//   - "resourceName" (string, optional): the resource name from config (for auto-naming)
+//   - "params" (map[string]interface{}, optional): template params for rendering refContent values
+func (c *Client) ApplyResources(ctx context.Context, resources []transport_client.ResourceToApply, opts transport_client.ApplyOptions) (*transport_client.ApplyResourcesResult, error) {
+	// Extract transport config
+	tc := opts.TransportConfig
+	if tc == nil {
+		return nil, apperrors.MaestroError("TransportConfig is required for Maestro transport")
+	}
+
+	targetCluster, _ := tc["targetCluster"].(string) //nolint:errcheck // type assertion with zero-value default
+	if targetCluster == "" {
+		return nil, apperrors.MaestroError("targetCluster is required in TransportConfig")
+	}
+
+	manifestWorkName, _ := tc["manifestWorkName"].(string)                 //nolint:errcheck // optional, zero-value default
+	resourceName, _ := tc["resourceName"].(string)                         //nolint:errcheck // optional, zero-value default
+	refContent, _ := tc["manifestWorkRefContent"].(map[string]interface{}) //nolint:errcheck // optional, nil default
+	params, _ := tc["params"].(map[string]interface{})                     //nolint:errcheck // optional, nil default
+
+	// Collect manifests from resources
+	manifests := make([]*unstructured.Unstructured, 0, len(resources))
+	for _, res := range resources {
+		manifests = append(manifests, res.Manifest)
+	}
+
+	// Build ManifestWork
+	work, err := buildManifestWork(ctx, c.log, manifests, manifestWorkName, resourceName, refContent, params)
+	if err != nil {
+		return nil, fmt.Errorf("failed to build ManifestWork: %w", err)
+	}
+
+	c.log.Infof(ctx, "Applying ManifestWork via Maestro: name=%s targetCluster=%s manifestCount=%d",
+		work.Name, targetCluster, len(manifests))
+
+	// Apply ManifestWork
+	_, err = c.ApplyManifestWork(ctx, targetCluster, work)
+	if err != nil {
+		return nil, fmt.Errorf("failed to apply ManifestWork: %w", err)
+	}
+
+	// Build results
+	results := &transport_client.ApplyResourcesResult{
+		Results: make([]transport_client.ApplyResult, 0, len(resources)),
+	}
+	for _, res := range resources {
+		results.Results = append(results.Results, transport_client.ApplyResult{
+			Operation: string(manifest.OperationCreate),
+			Reason:    fmt.Sprintf("ManifestWork applied to cluster %s with %d manifests", targetCluster, len(manifests)),
+			Resource:  res.Manifest,
+		})
+	}
+
+	return results, nil
+}
+
+// GetResource retrieves a resource by GVK, namespace, and name.
+// For Maestro transport, individual resource lookup is not supported;
+// returns NotFound so that the executor treats it as a new resource.
+func (c *Client) GetResource(ctx context.Context, gvk schema.GroupVersionKind, namespace, name string) (*unstructured.Unstructured, error) {
+	return nil, apierrors.NewNotFound(
+		schema.GroupResource{Group: gvk.Group, Resource: gvk.Kind},
+		name,
+	)
+}
+
+// DiscoverResources discovers resources by GVK and discovery config.
+// For Maestro transport, discovery returns an empty list.
+func (c *Client) DiscoverResources(ctx context.Context, gvk schema.GroupVersionKind, discovery transport_client.Discovery) (*unstructured.UnstructuredList, error) {
+	return &unstructured.UnstructuredList{}, nil
+}
+
+// buildManifestWork creates a ManifestWork containing the given manifests.
+// log may be nil (e.g., when called from mock).
+func buildManifestWork(_ context.Context, log logger.Logger, manifests []*unstructured.Unstructured, workName, resourceName string, refContent map[string]interface{}, params map[string]interface{}) (*workv1.ManifestWork, error) {
+	// Determine ManifestWork name
+	if workName == "" {
+		if len(manifests) > 0 {
+			workName = fmt.Sprintf("%s-%s", resourceName, manifests[0].GetName())
+		} else {
+			workName = resourceName
+		}
+	}
+
+	// Build manifests array for ManifestWork
+	manifestEntries := make([]workv1.Manifest, len(manifests))
+	for i, m := range manifests {
+		manifestBytes, err := m.MarshalJSON()
+		if err != nil {
+			return nil, fmt.Errorf("failed to marshal manifest[%d]: %w", i, err)
+		}
+		manifestEntries[i] = workv1.Manifest{
+			RawExtension: runtime.RawExtension{Raw: manifestBytes},
+		}
+	}
+
+	work := &workv1.ManifestWork{
+		Spec: workv1.ManifestWorkSpec{
+			Workload: workv1.ManifestsTemplate{
+				Manifests: manifestEntries,
+			},
+		},
+	}
+	work.SetName(workName)
+
+	// Copy the generation annotation from the first manifest to the ManifestWork
+	if len(manifests) > 0 {
+		manifestAnnotations := manifests[0].GetAnnotations()
+		if manifestAnnotations != nil {
+			if gen, ok := manifestAnnotations[constants.AnnotationGeneration]; ok {
+				work.SetAnnotations(map[string]string{
+					constants.AnnotationGeneration: gen,
+				})
+			}
+		}
+	}
+
+	// Apply any additional settings from refContent if present
+	if refContent != nil {
+		if err := applyManifestWorkSettings(log, work, refContent, params); err != nil {
+			return nil, fmt.Errorf("failed to apply manifestWork settings: %w", err)
+		}
+	}
+
+	return work, nil
+}
+
+// applyManifestWorkSettings applies settings from the manifestWork ref file to the ManifestWork.
+// The ref file can contain metadata (labels, annotations) and spec fields.
+// Template variables in string values are rendered using the provided params.
+// log may be nil (e.g., when called from mock).
+func applyManifestWorkSettings(_ logger.Logger, work *workv1.ManifestWork, settings map[string]interface{}, params map[string]interface{}) error { + // Apply metadata if present + if metadata, ok := settings["metadata"].(map[string]interface{}); ok { + // Apply labels from metadata + if labels, ok := metadata["labels"].(map[string]interface{}); ok { + labelMap := make(map[string]string) + for k, v := range labels { + if str, ok := v.(string); ok { + rendered, err := utils.RenderTemplate(str, params) + if err != nil { + return fmt.Errorf("failed to render label value for key %s: %w", k, err) + } + labelMap[k] = rendered + } + } + work.SetLabels(labelMap) + } + + // Apply annotations from metadata + if annotations, ok := metadata["annotations"].(map[string]interface{}); ok { + annotationMap := make(map[string]string) + for k, v := range annotations { + if str, ok := v.(string); ok { + rendered, err := utils.RenderTemplate(str, params) + if err != nil { + return fmt.Errorf("failed to render annotation value for key %s: %w", k, err) + } + annotationMap[k] = rendered + } + } + work.SetAnnotations(annotationMap) + } + } + + // Also check for labels/annotations at root level (backwards compatibility) + if labels, ok := settings["labels"].(map[string]interface{}); ok { + labelMap := make(map[string]string) + for k, v := range labels { + if str, ok := v.(string); ok { + rendered, err := utils.RenderTemplate(str, params) + if err != nil { + return fmt.Errorf("failed to render label value for key %s: %w", k, err) + } + labelMap[k] = rendered + } + } + // Merge with existing labels + existing := work.GetLabels() + if existing == nil { + existing = make(map[string]string) + } + for k, v := range labelMap { + existing[k] = v + } + work.SetLabels(existing) + } + + if annotations, ok := settings["annotations"].(map[string]interface{}); ok { + annotationMap := make(map[string]string) + for k, v := range annotations { + if str, ok := v.(string); ok { + rendered, err := 
utils.RenderTemplate(str, params) + if err != nil { + return fmt.Errorf("failed to render annotation value for key %s: %w", k, err) + } + annotationMap[k] = rendered + } + } + // Merge with existing annotations + existing := work.GetAnnotations() + if existing == nil { + existing = make(map[string]string) + } + for k, v := range annotationMap { + existing[k] = v + } + work.SetAnnotations(existing) + } + + // Apply spec fields if present + if spec, ok := settings["spec"].(map[string]interface{}); ok { + // Apply deleteOption if present + if deleteOption, ok := spec["deleteOption"].(map[string]interface{}); ok { + if propagationPolicy, ok := deleteOption["propagationPolicy"].(string); ok { + work.Spec.DeleteOption = &workv1.DeleteOption{ + PropagationPolicy: workv1.DeletePropagationPolicyType(propagationPolicy), + } + } + } + } + + return nil +} diff --git a/internal/maestro_client/interface.go b/internal/maestro_client/interface.go index 4df65d6..0a94138 100644 --- a/internal/maestro_client/interface.go +++ b/internal/maestro_client/interface.go @@ -3,6 +3,7 @@ package maestro_client import ( "context" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client" workv1 "open-cluster-management.io/api/work/v1" ) @@ -30,3 +31,6 @@ type ManifestWorkClient interface { // Ensure Client implements ManifestWorkClient var _ ManifestWorkClient = (*Client)(nil) + +// Ensure Client implements TransportClient +var _ transport_client.TransportClient = (*Client)(nil) diff --git a/internal/maestro_client/mock.go b/internal/maestro_client/mock.go new file mode 100644 index 0000000..b468cec --- /dev/null +++ b/internal/maestro_client/mock.go @@ -0,0 +1,324 @@ +package maestro_client + +import ( + "context" + "fmt" + "sync" + + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/manifest" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/transport_client" + apierrors "k8s.io/apimachinery/pkg/api/errors" + 
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + "k8s.io/apimachinery/pkg/runtime/schema" + workv1 "open-cluster-management.io/api/work/v1" +) + +// MockMaestroClient provides a mock implementation of ManifestWorkClient for unit testing. +// It tracks all calls made and allows configuring responses. +type MockMaestroClient struct { + mu sync.Mutex + + // ApplyManifestWorkResult is returned from ApplyManifestWork when ApplyManifestWorkError is nil + ApplyManifestWorkResult *workv1.ManifestWork + + // ApplyManifestWorkError is returned from ApplyManifestWork when set + ApplyManifestWorkError error + + // AppliedWorks tracks all ManifestWorks passed to ApplyManifestWork + AppliedWorks []*workv1.ManifestWork + + // ApplyManifestWorkConsumers tracks the consumer names passed to ApplyManifestWork + ApplyManifestWorkConsumers []string + + // CreateManifestWorkResult is returned from CreateManifestWork when CreateManifestWorkError is nil + CreateManifestWorkResult *workv1.ManifestWork + + // CreateManifestWorkError is returned from CreateManifestWork when set + CreateManifestWorkError error + + // CreatedWorks tracks all ManifestWorks passed to CreateManifestWork + CreatedWorks []*workv1.ManifestWork + + // GetManifestWorkResult is returned from GetManifestWork when GetManifestWorkError is nil + GetManifestWorkResult *workv1.ManifestWork + + // GetManifestWorkError is returned from GetManifestWork when set + GetManifestWorkError error + + // ListManifestWorksResult is returned from ListManifestWorks when ListManifestWorksError is nil + ListManifestWorksResult *workv1.ManifestWorkList + + // ListManifestWorksError is returned from ListManifestWorks when set + ListManifestWorksError error + + // DeleteManifestWorkError is returned from DeleteManifestWork when set + DeleteManifestWorkError error + + // DeletedWorks tracks all (consumer, workName) pairs passed to DeleteManifestWork + DeletedWorks []DeletedWorkRef + + // PatchManifestWorkResult is returned from 
PatchManifestWork when PatchManifestWorkError is nil + PatchManifestWorkResult *workv1.ManifestWork + + // PatchManifestWorkError is returned from PatchManifestWork when set + PatchManifestWorkError error + + // PatchedWorks tracks all patch operations + PatchedWorks []PatchedWorkRef +} + +// DeletedWorkRef tracks a delete operation +type DeletedWorkRef struct { + ConsumerName string + WorkName string +} + +// PatchedWorkRef tracks a patch operation +type PatchedWorkRef struct { + ConsumerName string + WorkName string + PatchData []byte +} + +// NewMockMaestroClient creates a new MockMaestroClient with default settings. +// By default, ApplyManifestWork returns the input work with ResourceVersion "1". +func NewMockMaestroClient() *MockMaestroClient { + return &MockMaestroClient{ + AppliedWorks: make([]*workv1.ManifestWork, 0), + ApplyManifestWorkConsumers: make([]string, 0), + CreatedWorks: make([]*workv1.ManifestWork, 0), + DeletedWorks: make([]DeletedWorkRef, 0), + PatchedWorks: make([]PatchedWorkRef, 0), + } +} + +// Ensure MockMaestroClient implements ManifestWorkClient +var _ ManifestWorkClient = (*MockMaestroClient)(nil) + +// Ensure MockMaestroClient implements TransportClient +var _ transport_client.TransportClient = (*MockMaestroClient)(nil) + +// ApplyManifestWork creates or updates a ManifestWork (upsert operation) +func (m *MockMaestroClient) ApplyManifestWork(ctx context.Context, consumerName string, work *workv1.ManifestWork) (*workv1.ManifestWork, error) { + m.mu.Lock() + defer m.mu.Unlock() + + m.AppliedWorks = append(m.AppliedWorks, work.DeepCopy()) + m.ApplyManifestWorkConsumers = append(m.ApplyManifestWorkConsumers, consumerName) + + if m.ApplyManifestWorkError != nil { + return nil, m.ApplyManifestWorkError + } + + if m.ApplyManifestWorkResult != nil { + return m.ApplyManifestWorkResult.DeepCopy(), nil + } + + // Default: return the work with a resource version + result := work.DeepCopy() + result.ResourceVersion = "1" + return result, nil +} + 
+// CreateManifestWork creates a new ManifestWork for a target cluster (consumer) +func (m *MockMaestroClient) CreateManifestWork(ctx context.Context, consumerName string, work *workv1.ManifestWork) (*workv1.ManifestWork, error) { + m.mu.Lock() + defer m.mu.Unlock() + + m.CreatedWorks = append(m.CreatedWorks, work.DeepCopy()) + + if m.CreateManifestWorkError != nil { + return nil, m.CreateManifestWorkError + } + + if m.CreateManifestWorkResult != nil { + return m.CreateManifestWorkResult.DeepCopy(), nil + } + + // Default: return the work with a resource version + result := work.DeepCopy() + result.ResourceVersion = "1" + return result, nil +} + +// GetManifestWork retrieves a ManifestWork by name from a target cluster +func (m *MockMaestroClient) GetManifestWork(ctx context.Context, consumerName string, workName string) (*workv1.ManifestWork, error) { + m.mu.Lock() + defer m.mu.Unlock() + + if m.GetManifestWorkError != nil { + return nil, m.GetManifestWorkError + } + + if m.GetManifestWorkResult != nil { + return m.GetManifestWorkResult.DeepCopy(), nil + } + + return nil, nil +} + +// DeleteManifestWork deletes a ManifestWork from a target cluster +func (m *MockMaestroClient) DeleteManifestWork(ctx context.Context, consumerName string, workName string) error { + m.mu.Lock() + defer m.mu.Unlock() + + m.DeletedWorks = append(m.DeletedWorks, DeletedWorkRef{ + ConsumerName: consumerName, + WorkName: workName, + }) + + return m.DeleteManifestWorkError +} + +// ListManifestWorks lists all ManifestWorks for a target cluster +func (m *MockMaestroClient) ListManifestWorks(ctx context.Context, consumerName string, labelSelector string) (*workv1.ManifestWorkList, error) { + m.mu.Lock() + defer m.mu.Unlock() + + if m.ListManifestWorksError != nil { + return nil, m.ListManifestWorksError + } + + if m.ListManifestWorksResult != nil { + return m.ListManifestWorksResult.DeepCopy(), nil + } + + return &workv1.ManifestWorkList{}, nil +} + +// PatchManifestWork patches an existing 
ManifestWork using JSON merge patch +func (m *MockMaestroClient) PatchManifestWork(ctx context.Context, consumerName string, workName string, patchData []byte) (*workv1.ManifestWork, error) { + m.mu.Lock() + defer m.mu.Unlock() + + m.PatchedWorks = append(m.PatchedWorks, PatchedWorkRef{ + ConsumerName: consumerName, + WorkName: workName, + PatchData: patchData, + }) + + if m.PatchManifestWorkError != nil { + return nil, m.PatchManifestWorkError + } + + if m.PatchManifestWorkResult != nil { + return m.PatchManifestWorkResult.DeepCopy(), nil + } + + return nil, nil +} + +// Reset clears all tracked calls and resets configured responses +func (m *MockMaestroClient) Reset() { + m.mu.Lock() + defer m.mu.Unlock() + + m.ApplyManifestWorkResult = nil + m.ApplyManifestWorkError = nil + m.AppliedWorks = make([]*workv1.ManifestWork, 0) + m.ApplyManifestWorkConsumers = make([]string, 0) + m.CreateManifestWorkResult = nil + m.CreateManifestWorkError = nil + m.CreatedWorks = make([]*workv1.ManifestWork, 0) + m.GetManifestWorkResult = nil + m.GetManifestWorkError = nil + m.ListManifestWorksResult = nil + m.ListManifestWorksError = nil + m.DeleteManifestWorkError = nil + m.DeletedWorks = make([]DeletedWorkRef, 0) + m.PatchManifestWorkResult = nil + m.PatchManifestWorkError = nil + m.PatchedWorks = make([]PatchedWorkRef, 0) +} + +// GetAppliedWorks returns a copy of all applied works (thread-safe) +func (m *MockMaestroClient) GetAppliedWorks() []*workv1.ManifestWork { + m.mu.Lock() + defer m.mu.Unlock() + + result := make([]*workv1.ManifestWork, len(m.AppliedWorks)) + for i, w := range m.AppliedWorks { + result[i] = w.DeepCopy() + } + return result +} + +// GetApplyConsumers returns a copy of all consumer names used in ApplyManifestWork (thread-safe) +func (m *MockMaestroClient) GetApplyConsumers() []string { + m.mu.Lock() + defer m.mu.Unlock() + + result := make([]string, len(m.ApplyManifestWorkConsumers)) + copy(result, m.ApplyManifestWorkConsumers) + return result +} + +// --- 
TransportClient implementation --- + +// ApplyResources implements transport_client.TransportClient. +// It delegates to ApplyManifestWork internally so that existing test assertions +// on GetAppliedWorks() and GetApplyConsumers() continue to work. +func (m *MockMaestroClient) ApplyResources(ctx context.Context, resources []transport_client.ResourceToApply, opts transport_client.ApplyOptions) (*transport_client.ApplyResourcesResult, error) { + // Extract transport config + tc := opts.TransportConfig + if tc == nil { + return nil, fmt.Errorf("TransportConfig is required for Maestro mock") + } + + targetCluster, _ := tc["targetCluster"].(string) //nolint:errcheck // type assertion with zero-value default + if targetCluster == "" { + return nil, fmt.Errorf("targetCluster is required in TransportConfig") + } + + // Collect manifests + manifests := make([]*unstructured.Unstructured, 0, len(resources)) + for _, res := range resources { + manifests = append(manifests, res.Manifest) + } + + // Build ManifestWork using the shared helper (from client.go) + manifestWorkName, _ := tc["manifestWorkName"].(string) //nolint:errcheck // optional, zero-value default + resourceName, _ := tc["resourceName"].(string) //nolint:errcheck // optional, zero-value default + refContent, _ := tc["manifestWorkRefContent"].(map[string]interface{}) //nolint:errcheck // optional, nil default + params, _ := tc["params"].(map[string]interface{}) //nolint:errcheck // optional, nil default + + work, err := buildManifestWork(ctx, nil, manifests, manifestWorkName, resourceName, refContent, params) + if err != nil { + return nil, fmt.Errorf("failed to build ManifestWork: %w", err) + } + + // Delegate to ApplyManifestWork so existing test assertions work + _, err = m.ApplyManifestWork(ctx, targetCluster, work) + if err != nil { + return nil, fmt.Errorf("failed to apply ManifestWork: %w", err) + } + + // Build results + results := &transport_client.ApplyResourcesResult{ + Results: 
make([]transport_client.ApplyResult, 0, len(resources)), + } + for _, res := range resources { + results.Results = append(results.Results, transport_client.ApplyResult{ + Operation: string(manifest.OperationCreate), + Reason: fmt.Sprintf("ManifestWork applied to cluster %s with %d manifests", targetCluster, len(manifests)), + Resource: res.Manifest, + }) + } + + return results, nil +} + +// GetResource implements transport_client.TransportClient. +// Returns NotFound for consistency with the real Maestro client. +func (m *MockMaestroClient) GetResource(ctx context.Context, gvk schema.GroupVersionKind, namespace, name string) (*unstructured.Unstructured, error) { + return nil, apierrors.NewNotFound( + schema.GroupResource{Group: gvk.Group, Resource: gvk.Kind}, + name, + ) +} + +// DiscoverResources implements transport_client.TransportClient. +// Returns an empty list for Maestro transport. +func (m *MockMaestroClient) DiscoverResources(ctx context.Context, gvk schema.GroupVersionKind, discovery transport_client.Discovery) (*unstructured.UnstructuredList, error) { + return &unstructured.UnstructuredList{}, nil +} diff --git a/internal/maestro_client/operations.go b/internal/maestro_client/operations.go index 768e362..c792ec7 100644 --- a/internal/maestro_client/operations.go +++ b/internal/maestro_client/operations.go @@ -4,7 +4,7 @@ import ( "context" "encoding/json" - "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/generation" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/manifest" apperrors "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/errors" "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger" apierrors "k8s.io/apimachinery/pkg/api/errors" @@ -37,14 +37,14 @@ func (c *Client) CreateManifestWork( } // Validate that generation annotations are present (required on ManifestWork and all manifests) - if err := generation.ValidateManifestWorkGeneration(work); err != nil { + if err := 
manifest.ValidateManifestWorkGeneration(work); err != nil { return nil, apperrors.MaestroError("invalid ManifestWork: %v", err) } // Enrich context with common fields ctx = logger.WithMaestroConsumer(ctx, consumerName) ctx = logger.WithLogField(ctx, "manifestwork", work.Name) - ctx = logger.WithObservedGeneration(ctx, generation.GetGeneration(work.ObjectMeta)) + ctx = logger.WithObservedGeneration(ctx, manifest.GetGeneration(work.ObjectMeta)) c.log.WithFields(map[string]interface{}{ "manifests": len(work.Spec.Workload.Manifests), @@ -197,12 +197,12 @@ func (c *Client) ApplyManifestWork( } // Validate that generation annotations are present (required on ManifestWork and all manifests) - if err := generation.ValidateManifestWorkGeneration(manifestWork); err != nil { + if err := manifest.ValidateManifestWorkGeneration(manifestWork); err != nil { return nil, apperrors.MaestroError("invalid ManifestWork: %v", err) } // Get generation from the work (set by template) - newGeneration := generation.GetGeneration(manifestWork.ObjectMeta) + newGeneration := manifest.GetGeneration(manifestWork.ObjectMeta) // Enrich context with common fields ctx = logger.WithMaestroConsumer(ctx, consumerName) @@ -221,11 +221,11 @@ func (c *Client) ApplyManifestWork( // Get existing generation (0 if not found) var existingGeneration int64 if exists { - existingGeneration = generation.GetGeneration(existing.ObjectMeta) + existingGeneration = manifest.GetGeneration(existing.ObjectMeta) } // Compare generations to determine operation - decision := generation.CompareGenerations(newGeneration, existingGeneration, exists) + decision := manifest.CompareGenerations(newGeneration, existingGeneration, exists) c.log.WithFields(map[string]interface{}{ "operation": decision.Operation, @@ -234,11 +234,11 @@ func (c *Client) ApplyManifestWork( // Execute operation based on comparison result switch decision.Operation { - case generation.OperationCreate: + case manifest.OperationCreate: return 
c.CreateManifestWork(ctx, consumerName, manifestWork) - case generation.OperationSkip: + case manifest.OperationSkip: return existing, nil - case generation.OperationUpdate: + case manifest.OperationUpdate: // Use Patch instead of Update since Maestro gRPC doesn't support Update patchData, err := createManifestWorkPatch(manifestWork) if err != nil { diff --git a/internal/maestro_client/operations_test.go b/internal/maestro_client/operations_test.go index c8d624b..29aeef8 100644 --- a/internal/maestro_client/operations_test.go +++ b/internal/maestro_client/operations_test.go @@ -1,14 +1,14 @@ // Package maestro_client tests // -// Note: Tests for generation.ValidateGeneration, generation.ValidateGenerationFromUnstructured, -// and generation.ValidateManifestWorkGeneration are in internal/generation/generation_test.go. +// Note: Tests for manifest.ValidateGeneration, manifest.ValidateGenerationFromUnstructured, +// and manifest.ValidateManifestWorkGeneration are in internal/manifest/generation_test.go. // This file contains tests specific to maestro_client functionality.
package maestro_client import ( "testing" - "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/generation" + "github.com/openshift-hyperfleet/hyperfleet-adapter/internal/manifest" "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/constants" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" workv1 "open-cluster-management.io/api/work/v1" @@ -73,7 +73,7 @@ func TestGetGenerationFromManifestWork(t *testing.T) { if tt.work == nil { result = 0 } else { - result = generation.GetGeneration(tt.work.ObjectMeta) + result = manifest.GetGeneration(tt.work.ObjectMeta) } if result != tt.expected { t.Errorf("expected generation %d, got %d", tt.expected, result) diff --git a/internal/manifest/generation.go b/internal/manifest/generation.go new file mode 100644 index 0000000..fe667d9 --- /dev/null +++ b/internal/manifest/generation.go @@ -0,0 +1,246 @@ +// Package manifest provides generation-based resource tracking, manifest validation, +// and rendering utilities for the transport abstraction layer. 
+package manifest + +import ( + "fmt" + "sort" + "strconv" + "strings" + + "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/constants" + apperrors "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + workv1 "open-cluster-management.io/api/work/v1" +) + +// Operation represents the type of operation to perform on a resource +type Operation string + +const ( + OperationCreate Operation = "create" + OperationUpdate Operation = "update" + OperationRecreate Operation = "recreate" + OperationSkip Operation = "skip" +) + +// ApplyDecision contains the decision about what operation to perform +type ApplyDecision struct { + Operation Operation + Reason string + NewGeneration int64 + ExistingGeneration int64 +} + +// CompareGenerations compares the generation of a new resource against an existing one +func CompareGenerations(newGen, existingGen int64, exists bool) ApplyDecision { + if !exists { + return ApplyDecision{ + Operation: OperationCreate, + Reason: "resource not found", + NewGeneration: newGen, + ExistingGeneration: 0, + } + } + + if existingGen == newGen { + return ApplyDecision{ + Operation: OperationSkip, + Reason: fmt.Sprintf("generation %d unchanged", existingGen), + NewGeneration: newGen, + ExistingGeneration: existingGen, + } + } + + return ApplyDecision{ + Operation: OperationUpdate, + Reason: fmt.Sprintf("generation changed %d->%d", existingGen, newGen), + NewGeneration: newGen, + ExistingGeneration: existingGen, + } +} + +// GetGeneration extracts the generation annotation value from ObjectMeta. 
+func GetGeneration(meta metav1.ObjectMeta) int64 { + if meta.Annotations == nil { + return 0 + } + + genStr, ok := meta.Annotations[constants.AnnotationGeneration] + if !ok || genStr == "" { + return 0 + } + + gen, err := strconv.ParseInt(genStr, 10, 64) + if err != nil { + return 0 + } + + return gen +} + +// GetGenerationFromUnstructured is a convenience wrapper for getting generation from unstructured.Unstructured. +func GetGenerationFromUnstructured(obj *unstructured.Unstructured) int64 { + if obj == nil { + return 0 + } + annotations := obj.GetAnnotations() + if annotations == nil { + return 0 + } + genStr, ok := annotations[constants.AnnotationGeneration] + if !ok || genStr == "" { + return 0 + } + gen, err := strconv.ParseInt(genStr, 10, 64) + if err != nil { + return 0 + } + return gen +} + +// ValidateGeneration validates that the generation annotation exists and is valid on ObjectMeta. +func ValidateGeneration(meta metav1.ObjectMeta) error { + if meta.Annotations == nil { + return apperrors.Validation("missing %s annotation", constants.AnnotationGeneration).AsError() + } + + genStr, ok := meta.Annotations[constants.AnnotationGeneration] + if !ok { + return apperrors.Validation("missing %s annotation", constants.AnnotationGeneration).AsError() + } + + if genStr == "" { + return apperrors.Validation("%s annotation is empty", constants.AnnotationGeneration).AsError() + } + + gen, err := strconv.ParseInt(genStr, 10, 64) + if err != nil { + return apperrors.Validation("invalid %s annotation value %q: %v", constants.AnnotationGeneration, genStr, err).AsError() + } + + if gen <= 0 { + return apperrors.Validation("%s annotation must be > 0, got %d", constants.AnnotationGeneration, gen).AsError() + } + + return nil +} + +// ValidateManifestWorkGeneration validates that the generation annotation exists on both +// the ManifestWork metadata and all manifests within the workload. 
+func ValidateManifestWorkGeneration(work *workv1.ManifestWork) error { + if work == nil { + return apperrors.Validation("work cannot be nil").AsError() + } + + if err := ValidateGeneration(work.ObjectMeta); err != nil { + return apperrors.Validation("ManifestWork %q: %v", work.Name, err).AsError() + } + + for i, m := range work.Spec.Workload.Manifests { + obj := &unstructured.Unstructured{} + if err := obj.UnmarshalJSON(m.Raw); err != nil { + return apperrors.Validation("ManifestWork %q manifest[%d]: failed to unmarshal: %v", work.Name, i, err).AsError() + } + + if err := ValidateGenerationFromUnstructured(obj); err != nil { + kind := obj.GetKind() + name := obj.GetName() + return apperrors.Validation("ManifestWork %q manifest[%d] %s/%s: %v", work.Name, i, kind, name, err).AsError() + } + } + + return nil +} + +// ValidateGenerationFromUnstructured validates that the generation annotation exists and is valid on an Unstructured object. +func ValidateGenerationFromUnstructured(obj *unstructured.Unstructured) error { + if obj == nil { + return apperrors.Validation("object cannot be nil").AsError() + } + + annotations := obj.GetAnnotations() + if annotations == nil { + return apperrors.Validation("missing %s annotation", constants.AnnotationGeneration).AsError() + } + + genStr, ok := annotations[constants.AnnotationGeneration] + if !ok { + return apperrors.Validation("missing %s annotation", constants.AnnotationGeneration).AsError() + } + + if genStr == "" { + return apperrors.Validation("%s annotation is empty", constants.AnnotationGeneration).AsError() + } + + gen, err := strconv.ParseInt(genStr, 10, 64) + if err != nil { + return apperrors.Validation("invalid %s annotation value %q: %v", constants.AnnotationGeneration, genStr, err).AsError() + } + + if gen <= 0 { + return apperrors.Validation("%s annotation must be > 0, got %d", constants.AnnotationGeneration, gen).AsError() + } + + return nil +} + +// GetLatestGenerationFromList returns the resource with the 
highest generation annotation from a list. +func GetLatestGenerationFromList(list *unstructured.UnstructuredList) *unstructured.Unstructured { + if list == nil || len(list.Items) == 0 { + return nil + } + + items := make([]unstructured.Unstructured, len(list.Items)) + copy(items, list.Items) + + sort.Slice(items, func(i, j int) bool { + genI := GetGenerationFromUnstructured(&items[i]) + genJ := GetGenerationFromUnstructured(&items[j]) + if genI != genJ { + return genI > genJ + } + return items[i].GetName() < items[j].GetName() + }) + + return &items[0] +} + +// DiscoveryConfig is the default implementation of the Discovery interface. +type DiscoveryConfig struct { + Namespace string + ByName string + LabelSelector string +} + +// GetNamespace implements Discovery.GetNamespace +func (d *DiscoveryConfig) GetNamespace() string { return d.Namespace } + +// GetName implements Discovery.GetName +func (d *DiscoveryConfig) GetName() string { return d.ByName } + +// GetLabelSelector implements Discovery.GetLabelSelector +func (d *DiscoveryConfig) GetLabelSelector() string { return d.LabelSelector } + +// IsSingleResource implements Discovery.IsSingleResource +func (d *DiscoveryConfig) IsSingleResource() bool { return d.ByName != "" } + +// BuildLabelSelector converts a map of labels to a selector string. 
+func BuildLabelSelector(labels map[string]string) string { + if len(labels) == 0 { + return "" + } + + keys := make([]string, 0, len(labels)) + for k := range labels { + keys = append(keys, k) + } + sort.Strings(keys) + + pairs := make([]string, 0, len(labels)) + for _, k := range keys { + pairs = append(pairs, k+"="+labels[k]) + } + return strings.Join(pairs, ",") +} diff --git a/internal/manifest/generation_test.go b/internal/manifest/generation_test.go new file mode 100644 index 0000000..5147430 --- /dev/null +++ b/internal/manifest/generation_test.go @@ -0,0 +1,718 @@ +package manifest + +import ( + "testing" + + "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/constants" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + "k8s.io/apimachinery/pkg/runtime" + workv1 "open-cluster-management.io/api/work/v1" +) + +func TestCompareGenerations(t *testing.T) { + tests := []struct { + name string + newGen int64 + existingGen int64 + exists bool + expectedOperation Operation + expectedReason string + }{ + { + name: "resource does not exist - create", + newGen: 5, + existingGen: 0, + exists: false, + expectedOperation: OperationCreate, + expectedReason: "resource not found", + }, + { + name: "generations match - skip", + newGen: 5, + existingGen: 5, + exists: true, + expectedOperation: OperationSkip, + expectedReason: "generation 5 unchanged", + }, + { + name: "newer generation - update", + newGen: 6, + existingGen: 5, + exists: true, + expectedOperation: OperationUpdate, + expectedReason: "generation changed 5->6", + }, + { + name: "older generation (rollback) - update", + newGen: 4, + existingGen: 5, + exists: true, + expectedOperation: OperationUpdate, + expectedReason: "generation changed 5->4", + }, + { + name: "large generation difference - update", + newGen: 100, + existingGen: 1, + exists: true, + expectedOperation: OperationUpdate, + expectedReason: "generation changed 1->100", + }, + } + + for _, tt 
:= range tests { + t.Run(tt.name, func(t *testing.T) { + result := CompareGenerations(tt.newGen, tt.existingGen, tt.exists) + + if result.Operation != tt.expectedOperation { + t.Errorf("Operation = %v, want %v", result.Operation, tt.expectedOperation) + } + + if result.Reason != tt.expectedReason { + t.Errorf("Reason = %v, want %v", result.Reason, tt.expectedReason) + } + + if result.NewGeneration != tt.newGen { + t.Errorf("NewGeneration = %v, want %v", result.NewGeneration, tt.newGen) + } + + if tt.exists && result.ExistingGeneration != tt.existingGen { + t.Errorf("ExistingGeneration = %v, want %v", result.ExistingGeneration, tt.existingGen) + } + }) + } +} + +func TestGetGeneration(t *testing.T) { + tests := []struct { + name string + meta metav1.ObjectMeta + expected int64 + }{ + { + name: "with valid generation annotation", + meta: metav1.ObjectMeta{ + Annotations: map[string]string{ + constants.AnnotationGeneration: "42", + }, + }, + expected: 42, + }, + { + name: "with no annotations", + meta: metav1.ObjectMeta{}, + expected: 0, + }, + { + name: "with empty generation annotation", + meta: metav1.ObjectMeta{ + Annotations: map[string]string{ + constants.AnnotationGeneration: "", + }, + }, + expected: 0, + }, + { + name: "with invalid generation annotation", + meta: metav1.ObjectMeta{ + Annotations: map[string]string{ + constants.AnnotationGeneration: "not-a-number", + }, + }, + expected: 0, + }, + { + name: "with other annotations only (no generation)", + meta: metav1.ObjectMeta{ + Annotations: map[string]string{ + "other": "value", + }, + }, + expected: 0, + }, + { + name: "with generation and other annotations", + meta: metav1.ObjectMeta{ + Annotations: map[string]string{ + "other": "value", + "another/annotation": "foo", + constants.AnnotationGeneration: "5", + }, + }, + expected: 5, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := GetGeneration(tt.meta) + if result != tt.expected { + t.Errorf("GetGeneration() = %d, 
want %d", result, tt.expected) + } + }) + } +} + +func TestGetGenerationFromUnstructured(t *testing.T) { + tests := []struct { + name string + obj *unstructured.Unstructured + expected int64 + }{ + { + name: "with valid generation", + obj: &unstructured.Unstructured{ + Object: map[string]interface{}{ + "metadata": map[string]interface{}{ + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "100", + }, + }, + }, + }, + expected: 100, + }, + { + name: "nil object", + obj: nil, + expected: 0, + }, + { + name: "no annotations", + obj: &unstructured.Unstructured{ + Object: map[string]interface{}{ + "metadata": map[string]interface{}{}, + }, + }, + expected: 0, + }, + { + name: "with generation and other annotations", + obj: &unstructured.Unstructured{ + Object: map[string]interface{}{ + "metadata": map[string]interface{}{ + "annotations": map[string]interface{}{ + "other": "value", + constants.AnnotationGeneration: "42", + }, + }, + }, + }, + expected: 42, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := GetGenerationFromUnstructured(tt.obj) + if result != tt.expected { + t.Errorf("GetGenerationFromUnstructured() = %d, want %d", result, tt.expected) + } + }) + } +} + +func TestValidateGeneration(t *testing.T) { + tests := []struct { + name string + meta metav1.ObjectMeta + expectError bool + }{ + { + name: "valid generation annotation", + meta: metav1.ObjectMeta{ + Annotations: map[string]string{ + constants.AnnotationGeneration: "42", + }, + }, + expectError: false, + }, + { + name: "generation 0 is invalid (must be > 0)", + meta: metav1.ObjectMeta{ + Annotations: map[string]string{ + constants.AnnotationGeneration: "0", + }, + }, + expectError: true, + }, + { + name: "large generation is valid", + meta: metav1.ObjectMeta{ + Annotations: map[string]string{ + constants.AnnotationGeneration: "9999999999", + }, + }, + expectError: false, + }, + { + name: "valid generation with other annotations", + meta: 
metav1.ObjectMeta{ + Annotations: map[string]string{ + "other": "value", + constants.AnnotationGeneration: "10", + }, + }, + expectError: false, + }, + { + name: "missing annotations", + meta: metav1.ObjectMeta{}, + expectError: true, + }, + { + name: "missing generation annotation", + meta: metav1.ObjectMeta{ + Annotations: map[string]string{ + "other": "annotation", + }, + }, + expectError: true, + }, + { + name: "empty generation annotation", + meta: metav1.ObjectMeta{ + Annotations: map[string]string{ + constants.AnnotationGeneration: "", + }, + }, + expectError: true, + }, + { + name: "invalid generation value", + meta: metav1.ObjectMeta{ + Annotations: map[string]string{ + constants.AnnotationGeneration: "not-a-number", + }, + }, + expectError: true, + }, + { + name: "negative generation", + meta: metav1.ObjectMeta{ + Annotations: map[string]string{ + constants.AnnotationGeneration: "-5", + }, + }, + expectError: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + err := ValidateGeneration(tt.meta) + + if tt.expectError { + if err == nil { + t.Error("expected error, got nil") + } + return + } + + if err != nil { + t.Errorf("unexpected error: %v", err) + } + }) + } +} + +func TestValidateGenerationFromUnstructured(t *testing.T) { + tests := []struct { + name string + obj *unstructured.Unstructured + expectError bool + }{ + { + name: "valid generation annotation", + obj: &unstructured.Unstructured{ + Object: map[string]interface{}{ + "apiVersion": "v1", + "kind": "Namespace", + "metadata": map[string]interface{}{ + "name": "test", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "5", + }, + }, + }, + }, + expectError: false, + }, + { + name: "nil object", + obj: nil, + expectError: true, + }, + { + name: "missing annotations", + obj: &unstructured.Unstructured{ + Object: map[string]interface{}{ + "apiVersion": "v1", + "kind": "Namespace", + "metadata": map[string]interface{}{ + "name": "test", + }, + 
}, + }, + expectError: true, + }, + { + name: "invalid generation value", + obj: &unstructured.Unstructured{ + Object: map[string]interface{}{ + "apiVersion": "v1", + "kind": "Namespace", + "metadata": map[string]interface{}{ + "name": "test", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "invalid", + }, + }, + }, + }, + expectError: true, + }, + { + name: "negative generation", + obj: &unstructured.Unstructured{ + Object: map[string]interface{}{ + "apiVersion": "v1", + "kind": "Namespace", + "metadata": map[string]interface{}{ + "name": "test", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "-10", + }, + }, + }, + }, + expectError: true, + }, + { + name: "generation 0 is invalid (must be > 0)", + obj: &unstructured.Unstructured{ + Object: map[string]interface{}{ + "apiVersion": "v1", + "kind": "Namespace", + "metadata": map[string]interface{}{ + "name": "test", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "0", + }, + }, + }, + }, + expectError: true, + }, + { + name: "valid generation with other annotations", + obj: &unstructured.Unstructured{ + Object: map[string]interface{}{ + "apiVersion": "v1", + "kind": "Namespace", + "metadata": map[string]interface{}{ + "name": "test", + "annotations": map[string]interface{}{ + "other": "value", + constants.AnnotationGeneration: "15", + }, + }, + }, + }, + expectError: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + err := ValidateGenerationFromUnstructured(tt.obj) + + if tt.expectError { + if err == nil { + t.Error("expected error, got nil") + } + return + } + + if err != nil { + t.Errorf("unexpected error: %v", err) + } + }) + } +} + +func TestValidateManifestWorkGeneration(t *testing.T) { + createManifest := func(kind, name, generation string) workv1.Manifest { + obj := &unstructured.Unstructured{ + Object: map[string]interface{}{ + "apiVersion": "v1", + "kind": kind, + "metadata": 
map[string]interface{}{ + "name": name, + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: generation, + }, + }, + }, + } + raw, _ := obj.MarshalJSON() + return workv1.Manifest{RawExtension: runtime.RawExtension{Raw: raw}} + } + + createManifestNoGeneration := func(kind, name string) workv1.Manifest { + obj := &unstructured.Unstructured{ + Object: map[string]interface{}{ + "apiVersion": "v1", + "kind": kind, + "metadata": map[string]interface{}{ + "name": name, + }, + }, + } + raw, _ := obj.MarshalJSON() + return workv1.Manifest{RawExtension: runtime.RawExtension{Raw: raw}} + } + + tests := []struct { + name string + work *workv1.ManifestWork + expectError bool + }{ + { + name: "valid ManifestWork with generation on all", + work: &workv1.ManifestWork{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-work", + Annotations: map[string]string{ + constants.AnnotationGeneration: "5", + }, + }, + Spec: workv1.ManifestWorkSpec{ + Workload: workv1.ManifestsTemplate{ + Manifests: []workv1.Manifest{ + createManifest("Namespace", "test-ns", "5"), + createManifest("ConfigMap", "test-cm", "5"), + }, + }, + }, + }, + expectError: false, + }, + { + name: "nil work", + work: nil, + expectError: true, + }, + { + name: "ManifestWork without generation annotation", + work: &workv1.ManifestWork{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-work", + }, + Spec: workv1.ManifestWorkSpec{ + Workload: workv1.ManifestsTemplate{ + Manifests: []workv1.Manifest{ + createManifest("Namespace", "test-ns", "5"), + }, + }, + }, + }, + expectError: true, + }, + { + name: "manifest without generation annotation fails", + work: &workv1.ManifestWork{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-work", + Annotations: map[string]string{ + constants.AnnotationGeneration: "5", + }, + }, + Spec: workv1.ManifestWorkSpec{ + Workload: workv1.ManifestsTemplate{ + Manifests: []workv1.Manifest{ + createManifest("Namespace", "test-ns", "5"), + createManifestNoGeneration("ConfigMap", 
"test-cm"), + }, + }, + }, + }, + expectError: true, + }, + { + name: "empty manifests is valid", + work: &workv1.ManifestWork{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-work", + Annotations: map[string]string{ + constants.AnnotationGeneration: "5", + }, + }, + Spec: workv1.ManifestWorkSpec{ + Workload: workv1.ManifestsTemplate{ + Manifests: []workv1.Manifest{}, + }, + }, + }, + expectError: false, + }, + { + name: "different generation values is valid", + work: &workv1.ManifestWork{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-work", + Annotations: map[string]string{ + constants.AnnotationGeneration: "5", + }, + }, + Spec: workv1.ManifestWorkSpec{ + Workload: workv1.ManifestsTemplate{ + Manifests: []workv1.Manifest{ + createManifest("Namespace", "test-ns", "3"), + createManifest("ConfigMap", "test-cm", "7"), + }, + }, + }, + }, + expectError: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + err := ValidateManifestWorkGeneration(tt.work) + + if tt.expectError { + if err == nil { + t.Error("expected error, got nil") + } + return + } + + if err != nil { + t.Errorf("unexpected error: %v", err) + } + }) + } +} + +func TestGetLatestGenerationFromList(t *testing.T) { + tests := []struct { + name string + list *unstructured.UnstructuredList + expectedName string + expectNil bool + }{ + { + name: "nil list returns nil", + list: nil, + expectNil: true, + }, + { + name: "empty list returns nil", + list: &unstructured.UnstructuredList{ + Items: []unstructured.Unstructured{}, + }, + expectNil: true, + }, + { + name: "returns resource with highest generation", + list: &unstructured.UnstructuredList{ + Items: []unstructured.Unstructured{ + { + Object: map[string]interface{}{ + "metadata": map[string]interface{}{ + "name": "resource1", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "10", + }, + }, + }, + }, + { + Object: map[string]interface{}{ + "metadata": map[string]interface{}{ + "name": "resource2", + 
"annotations": map[string]interface{}{ + constants.AnnotationGeneration: "42", + }, + }, + }, + }, + { + Object: map[string]interface{}{ + "metadata": map[string]interface{}{ + "name": "resource3", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "5", + }, + }, + }, + }, + }, + }, + expectedName: "resource2", + }, + { + name: "sorts by name when generations are equal", + list: &unstructured.UnstructuredList{ + Items: []unstructured.Unstructured{ + { + Object: map[string]interface{}{ + "metadata": map[string]interface{}{ + "name": "resource-c", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "10", + }, + }, + }, + }, + { + Object: map[string]interface{}{ + "metadata": map[string]interface{}{ + "name": "resource-a", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "10", + }, + }, + }, + }, + { + Object: map[string]interface{}{ + "metadata": map[string]interface{}{ + "name": "resource-b", + "annotations": map[string]interface{}{ + constants.AnnotationGeneration: "10", + }, + }, + }, + }, + }, + }, + expectedName: "resource-a", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := GetLatestGenerationFromList(tt.list) + + if tt.expectNil { + if result != nil { + t.Errorf("GetLatestGenerationFromList() = %v, want nil", result) + } + return + } + + if result == nil { + t.Errorf("GetLatestGenerationFromList() = nil, want non-nil") + return + } + + if result.GetName() != tt.expectedName { + t.Errorf("GetLatestGenerationFromList() name = %s, want %s", result.GetName(), tt.expectedName) + } + }) + } +} diff --git a/internal/manifest/manifest.go b/internal/manifest/manifest.go new file mode 100644 index 0000000..a587bf5 --- /dev/null +++ b/internal/manifest/manifest.go @@ -0,0 +1,27 @@ +package manifest + +import ( + "fmt" + + "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/constants" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" +) + +// 
ValidateManifest validates a Kubernetes manifest has all required fields and annotations +func ValidateManifest(obj *unstructured.Unstructured) error { + if obj.GetAPIVersion() == "" { + return fmt.Errorf("manifest missing apiVersion") + } + if obj.GetKind() == "" { + return fmt.Errorf("manifest missing kind") + } + if obj.GetName() == "" { + return fmt.Errorf("manifest missing metadata.name") + } + + if GetGenerationFromUnstructured(obj) == 0 { + return fmt.Errorf("manifest missing required annotation %q", constants.AnnotationGeneration) + } + + return nil +} diff --git a/internal/manifest/render.go b/internal/manifest/render.go new file mode 100644 index 0000000..22300ef --- /dev/null +++ b/internal/manifest/render.go @@ -0,0 +1,85 @@ +package manifest + +import ( + "fmt" +) + +// RenderManifestData recursively renders all template strings in a manifest data map. +// Keys and string values are rendered using the provided render function. +func RenderManifestData(data map[string]interface{}, renderFn func(string, map[string]interface{}) (string, error), params map[string]interface{}) (map[string]interface{}, error) { + result := make(map[string]interface{}) + + for k, v := range data { + renderedKey, err := renderFn(k, params) + if err != nil { + return nil, fmt.Errorf("failed to render key '%s': %w", k, err) + } + + renderedValue, err := renderManifestValue(v, renderFn, params) + if err != nil { + return nil, fmt.Errorf("failed to render value for key '%s': %w", k, err) + } + + result[renderedKey] = renderedValue + } + + return result, nil +} + +// renderManifestValue renders a value recursively +func renderManifestValue(v interface{}, renderFn func(string, map[string]interface{}) (string, error), params map[string]interface{}) (interface{}, error) { + switch val := v.(type) { + case string: + return renderFn(val, params) + case map[string]interface{}: + return RenderManifestData(val, renderFn, params) + case map[interface{}]interface{}: + converted := 
ConvertToStringKeyMap(val) + return RenderManifestData(converted, renderFn, params) + case []interface{}: + result := make([]interface{}, len(val)) + for i, item := range val { + rendered, err := renderManifestValue(item, renderFn, params) + if err != nil { + return nil, err + } + result[i] = rendered + } + return result, nil + default: + return v, nil + } +} + +// ConvertToStringKeyMap converts map[interface{}]interface{} to map[string]interface{} +func ConvertToStringKeyMap(m map[interface{}]interface{}) map[string]interface{} { + result := make(map[string]interface{}) + for k, v := range m { + strKey := fmt.Sprintf("%v", k) + switch val := v.(type) { + case map[interface{}]interface{}: + result[strKey] = ConvertToStringKeyMap(val) + case []interface{}: + result[strKey] = convertSlice(val) + default: + result[strKey] = v + } + } + return result +} + +// convertSlice converts slice elements recursively +func convertSlice(s []interface{}) []interface{} { + result := make([]interface{}, len(s)) + for i, v := range s { + switch val := v.(type) { + case map[interface{}]interface{}: + result[i] = ConvertToStringKeyMap(val) + case []interface{}: + result[i] = convertSlice(val) + default: + result[i] = v + } + } + return result +} diff --git a/internal/transport_client/interface.go b/internal/transport_client/interface.go new file mode 100644 index 0000000..def7b29 --- /dev/null +++ b/internal/transport_client/interface.go @@ -0,0 +1,44 @@ +// Package transport_client defines the transport abstraction layer for resource operations. +// +// TransportClient decouples the executor from specific resource-application backends +// (e.g., Kubernetes direct, Maestro/OCM). Both k8s_client.Client and maestro_client.Client +// implement this interface, allowing the executor to operate uniformly. 
+package transport_client + +import ( + "context" + + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + "k8s.io/apimachinery/pkg/runtime/schema" +) + +// TransportClient defines the interface for resource transport operations. +// Implementations handle the details of delivering resources to the target +// infrastructure (e.g., direct K8s API, Maestro ManifestWork). +type TransportClient interface { + // ApplyResources applies a set of resources according to the given options. + // It handles generation comparison, create/update/recreate logic, and returns + // the result for each resource. + ApplyResources(ctx context.Context, resources []ResourceToApply, opts ApplyOptions) (*ApplyResourcesResult, error) + + // GetResource retrieves a single resource by GVK, namespace, and name. + GetResource(ctx context.Context, gvk schema.GroupVersionKind, namespace, name string) (*unstructured.Unstructured, error) + + // DiscoverResources discovers resources based on the Discovery configuration. + DiscoverResources(ctx context.Context, gvk schema.GroupVersionKind, discovery Discovery) (*unstructured.UnstructuredList, error) +} + +// Discovery defines the interface for resource discovery configuration. +type Discovery interface { + // GetNamespace returns the namespace to search in. + GetNamespace() string + + // GetName returns the resource name for single-resource discovery. + GetName() string + + // GetLabelSelector returns the label selector string. + GetLabelSelector() string + + // IsSingleResource returns true if discovering by name. + IsSingleResource() bool +} diff --git a/internal/transport_client/types.go b/internal/transport_client/types.go new file mode 100644 index 0000000..24e8ef8 --- /dev/null +++ b/internal/transport_client/types.go @@ -0,0 +1,48 @@ +package transport_client + +import ( + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" +) + +// ApplyOptions configures the behavior of ApplyResources. 
+type ApplyOptions struct { + // RecreateOnChange indicates whether to delete and recreate resources + // when generation changes, instead of updating in place. + RecreateOnChange bool + + // TransportConfig carries transport-specific settings (e.g., Maestro targetCluster, + // manifestWork name, refContent) from the executor to the transport client. + // The executor populates this without knowing about transport internals. + TransportConfig map[string]interface{} +} + +// ResourceToApply represents a single resource to be applied. +type ResourceToApply struct { + // Manifest is the desired state of the resource. + Manifest *unstructured.Unstructured + + // ExistingResource is the current state of the resource (if it exists). + // nil means the resource does not exist yet. + ExistingResource *unstructured.Unstructured +} + +// ApplyResult represents the result of applying a single resource. +type ApplyResult struct { + // Operation is the operation that was performed (create, update, recreate, skip). + Operation string + + // Reason explains why this operation was chosen. + Reason string + + // Resource is the resulting resource after the operation. + Resource *unstructured.Unstructured + + // Error is the error if the operation failed. + Error error +} + +// ApplyResourcesResult represents the result of applying multiple resources. +type ApplyResourcesResult struct { + // Results contains the result for each resource, in order. + Results []ApplyResult +} diff --git a/pkg/utils/convert.go b/pkg/utils/convert.go new file mode 100644 index 0000000..0998cdb --- /dev/null +++ b/pkg/utils/convert.go @@ -0,0 +1,191 @@ +package utils + +import ( + "fmt" + "math" + "strconv" + "strings" +) + +// ConvertToType converts a value to the specified type. 
+// Supported types: string, int, int64, float, float64, bool +func ConvertToType(value interface{}, targetType string) (interface{}, error) { + switch targetType { + case "string": + return ConvertToString(value) + case "int", "int64": + return ConvertToInt64(value) + case "float", "float64": + return ConvertToFloat64(value) + case "bool": + return ConvertToBool(value) + default: + return nil, fmt.Errorf("unsupported type: %s (supported: string, int, int64, float, float64, bool)", targetType) + } +} + +// ConvertToString converts a value to string +// +//nolint:unparam // error kept for API consistency +func ConvertToString(value interface{}) (string, error) { + switch v := value.(type) { + case string: + return v, nil + case int, int8, int16, int32, int64: + return fmt.Sprintf("%d", v), nil + case uint, uint8, uint16, uint32, uint64: + return fmt.Sprintf("%d", v), nil + case float32: + return strconv.FormatFloat(float64(v), 'f', -1, 32), nil + case float64: + return strconv.FormatFloat(v, 'f', -1, 64), nil + case bool: + return strconv.FormatBool(v), nil + default: + return fmt.Sprintf("%v", v), nil + } +} + +// ConvertToInt64 converts a value to int64 +func ConvertToInt64(value interface{}) (int64, error) { + switch v := value.(type) { + case int: + return int64(v), nil + case uint64: + if v > math.MaxInt64 { + return 0, fmt.Errorf("uint64 value %d overflows int64", v) + } + return int64(v), nil + case int8: + return int64(v), nil + case int16: + return int64(v), nil + case int32: + return int64(v), nil + case int64: + return v, nil + case uint: + if v > uint(math.MaxInt64) { + return 0, fmt.Errorf("uint value %d overflows int64", v) + } + return int64(v), nil + case uint8: + return int64(v), nil + case uint16: + return int64(v), nil + case uint32: + return int64(v), nil + case float32: + return int64(v), nil + case float64: + return int64(v), nil + case string: + if i, err := strconv.ParseInt(v, 10, 64); err == nil { + return i, nil + } + if f, err := 
strconv.ParseFloat(v, 64); err == nil { + return int64(f), nil + } + return 0, fmt.Errorf("cannot convert string '%s' to int", v) + case bool: + if v { + return 1, nil + } + return 0, nil + default: + return 0, fmt.Errorf("cannot convert %T to int", value) + } +} + +// ConvertToFloat64 converts a value to float64 +func ConvertToFloat64(value interface{}) (float64, error) { + switch v := value.(type) { + case float32: + return float64(v), nil + case float64: + return v, nil + case int: + return float64(v), nil + case int8: + return float64(v), nil + case int16: + return float64(v), nil + case int32: + return float64(v), nil + case int64: + return float64(v), nil + case uint: + return float64(v), nil + case uint8: + return float64(v), nil + case uint16: + return float64(v), nil + case uint32: + return float64(v), nil + case uint64: + return float64(v), nil + case string: + f, err := strconv.ParseFloat(v, 64) + if err != nil { + return 0, fmt.Errorf("cannot convert string '%s' to float: %w", v, err) + } + return f, nil + case bool: + if v { + return 1.0, nil + } + return 0.0, nil + default: + return 0, fmt.Errorf("cannot convert %T to float", value) + } +} + +// ConvertToBool converts a value to bool +func ConvertToBool(value interface{}) (bool, error) { + switch v := value.(type) { + case bool: + return v, nil + case string: + if v == "" { + return false, nil + } + b, err := strconv.ParseBool(v) + if err != nil { + lower := strings.ToLower(v) + switch lower { + case "yes", "y", "on", "1": + return true, nil + case "no", "n", "off", "0": + return false, nil + } + return false, fmt.Errorf("cannot convert string '%s' to bool", v) + } + return b, nil + case int: + return v != 0, nil + case int8: + return v != 0, nil + case int16: + return v != 0, nil + case int32: + return v != 0, nil + case int64: + return v != 0, nil + case uint: + return v != 0, nil + case uint8: + return v != 0, nil + case uint16: + return v != 0, nil + case uint32: + return v != 0, nil + case 
uint64: + return v != 0, nil + case float32: + return v != 0, nil + case float64: + return v != 0, nil + default: + return false, fmt.Errorf("cannot convert %T to bool", value) + } +} diff --git a/pkg/utils/env.go b/pkg/utils/env.go new file mode 100644 index 0000000..a866160 --- /dev/null +++ b/pkg/utils/env.go @@ -0,0 +1,15 @@ +package utils + +import ( + "fmt" + "os" +) + +// GetEnvOrError returns the value of an environment variable or an error if not set. +func GetEnvOrError(envVar string) (string, error) { + value, exists := os.LookupEnv(envVar) + if !exists { + return "", fmt.Errorf("environment variable %s not set", envVar) + } + return value, nil +} diff --git a/pkg/utils/map.go b/pkg/utils/map.go new file mode 100644 index 0000000..2bc7f13 --- /dev/null +++ b/pkg/utils/map.go @@ -0,0 +1,99 @@ +package utils + +import ( + "context" + "fmt" + "strings" + + "github.com/mitchellh/copystructure" + "github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/logger" +) + +// ConvertToStringKeyMap converts map[interface{}]interface{} to map[string]interface{} +func ConvertToStringKeyMap(m map[interface{}]interface{}) map[string]interface{} { + result := make(map[string]interface{}) + for k, v := range m { + strKey := fmt.Sprintf("%v", k) + switch val := v.(type) { + case map[interface{}]interface{}: + result[strKey] = ConvertToStringKeyMap(val) + case []interface{}: + result[strKey] = convertSlice(val) + default: + result[strKey] = v + } + } + return result +} + +func convertSlice(s []interface{}) []interface{} { + result := make([]interface{}, len(s)) + for i, v := range s { + switch val := v.(type) { + case map[interface{}]interface{}: + result[i] = ConvertToStringKeyMap(val) + case []interface{}: + result[i] = convertSlice(val) + default: + result[i] = v + } + } + return result +} + +// DeepCopyMap creates a deep copy of a map using github.com/mitchellh/copystructure. 
+func DeepCopyMap(ctx context.Context, m map[string]interface{}, log logger.Logger) map[string]interface{} { + if m == nil { + return nil + } + return DeepCopyMapWithFallback(ctx, m, log) +} + +// DeepCopyMapWithFallback creates a deep copy with shallow copy fallback on error. +func DeepCopyMapWithFallback(ctx context.Context, m map[string]interface{}, log logger.Logger) map[string]interface{} { + if m == nil { + return nil + } + + copied, err := copystructure.Copy(m) + if err != nil { + log.Warnf(ctx, "Failed to deep copy map: %v. Falling back to shallow copy.", err) + result := make(map[string]interface{}) + for k, v := range m { + result[k] = v + } + return result + } + + result, ok := copied.(map[string]interface{}) + if !ok { + result := make(map[string]interface{}) + for k, v := range m { + result[k] = v + } + return result + } + + return result +} + +// GetNestedValue retrieves a nested value from a map using dot-separated path. +func GetNestedValue(m map[string]interface{}, path string) (interface{}, bool) { + parts := strings.Split(path, ".") + var current interface{} = m + + for _, part := range parts { + switch v := current.(type) { + case map[string]interface{}: + val, ok := v[part] + if !ok { + return nil, false + } + current = val + default: + return nil, false + } + } + + return current, true +} diff --git a/pkg/utils/path.go b/pkg/utils/path.go new file mode 100644 index 0000000..6283660 --- /dev/null +++ b/pkg/utils/path.go @@ -0,0 +1,34 @@ +package utils + +import ( + "fmt" + "path/filepath" + "strings" +) + +// ResolveSecurePath resolves a path against the base directory. 
+// - Absolute paths are returned as-is +// - Relative paths are resolved against the base directory and validated to not escape it +func ResolveSecurePath(baseDir, refPath string) (string, error) { + if filepath.IsAbs(refPath) { + return filepath.Clean(refPath), nil + } + + fullPath := filepath.Join(baseDir, refPath) + fullPath = filepath.Clean(fullPath) + + absBase, err := filepath.Abs(baseDir) + if err != nil { + return "", fmt.Errorf("failed to resolve base directory: %w", err) + } + absPath, err := filepath.Abs(fullPath) + if err != nil { + return "", fmt.Errorf("failed to resolve path: %w", err) + } + + if absPath != absBase && !strings.HasPrefix(absPath, absBase+string(filepath.Separator)) { + return "", fmt.Errorf("path %q escapes base directory %q", refPath, baseDir) + } + + return fullPath, nil +} diff --git a/pkg/utils/reflect.go b/pkg/utils/reflect.go new file mode 100644 index 0000000..af765f5 --- /dev/null +++ b/pkg/utils/reflect.go @@ -0,0 +1,12 @@ +package utils + +import "reflect" + +// IsSliceOrArray returns true if the value is a slice or array. +func IsSliceOrArray(v interface{}) bool { + if v == nil { + return false + } + kind := reflect.TypeOf(v).Kind() + return kind == reflect.Slice || kind == reflect.Array +} diff --git a/pkg/utils/template.go b/pkg/utils/template.go new file mode 100644 index 0000000..7d405d0 --- /dev/null +++ b/pkg/utils/template.go @@ -0,0 +1,110 @@ +package utils + +import ( + "bytes" + "fmt" + "strconv" + "strings" + "text/template" + "time" + + "golang.org/x/text/cases" + "golang.org/x/text/language" +) + +// TemplateFuncs provides common functions for Go templates. 
+var TemplateFuncs = template.FuncMap{ + "now": time.Now, + "date": func(layout string, t time.Time) string { + return t.Format(layout) + }, + "dateFormat": func(layout string, t time.Time) string { + return t.Format(layout) + }, + "lower": strings.ToLower, + "upper": strings.ToUpper, + "title": func(s string) string { + return cases.Title(language.English).String(s) + }, + "trim": strings.TrimSpace, + "replace": strings.ReplaceAll, + "contains": strings.Contains, + "hasPrefix": strings.HasPrefix, + "hasSuffix": strings.HasSuffix, + "default": func(defaultVal, val interface{}) interface{} { + if val == nil || val == "" { + return defaultVal + } + return val + }, + "quote": func(s string) string { + return fmt.Sprintf("%q", s) + }, + "int": func(v interface{}) int { + switch val := v.(type) { + case int: + return val + case int64: + return int(val) + case float64: + return int(val) + case string: + i, _ := strconv.Atoi(val) //nolint:errcheck // returns 0 on error, which is acceptable + return i + default: + return 0 + } + }, + "int64": func(v interface{}) int64 { + switch val := v.(type) { + case int: + return int64(val) + case int64: + return val + case float64: + return int64(val) + case string: + i, _ := strconv.ParseInt(val, 10, 64) //nolint:errcheck // returns 0 on error, which is acceptable + return i + default: + return 0 + } + }, + "float64": func(v interface{}) float64 { + switch val := v.(type) { + case int: + return float64(val) + case int64: + return float64(val) + case float64: + return val + case string: + f, _ := strconv.ParseFloat(val, 64) //nolint:errcheck // returns 0 on error, which is acceptable + return f + default: + return 0 + } + }, + "string": func(v interface{}) string { + return fmt.Sprintf("%v", v) + }, +} + +// RenderTemplate renders a Go template string with the given data. 
+func RenderTemplate(templateStr string, data map[string]interface{}) (string, error) {
+	if !strings.Contains(templateStr, "{{") {
+		return templateStr, nil
+	}
+
+	tmpl, err := template.New("template").Funcs(TemplateFuncs).Option("missingkey=error").Parse(templateStr)
+	if err != nil {
+		return "", fmt.Errorf("failed to parse template: %w", err)
+	}
+
+	var buf bytes.Buffer
+	if err := tmpl.Execute(&buf, data); err != nil {
+		return "", fmt.Errorf("failed to execute template: %w", err)
+	}
+
+	return buf.String(), nil
+}
diff --git a/test/integration/config-loader/loader_template_test.go b/test/integration/config-loader/loader_template_test.go
index a37c635..1162583 100644
--- a/test/integration/config-loader/loader_template_test.go
+++ b/test/integration/config-loader/loader_template_test.go
@@ -59,7 +59,7 @@ func TestLoadSplitConfig(t *testing.T) {
 	assert.Equal(t, "Config", config.Kind)
 
 	// Metadata comes from task config
-	assert.Equal(t, "example-adapter", config.Metadata.Name)
+	assert.Equal(t, "test-adapter", config.Metadata.Name)
 
 	// Adapter info comes from adapter config
 	assert.Equal(t, "0.1.0", config.Spec.Adapter.Version)
diff --git a/test/integration/executor/executor_integration_test.go b/test/integration/executor/executor_integration_test.go
index eb7b5c6..f0900ed 100644
--- a/test/integration/executor/executor_integration_test.go
+++ b/test/integration/executor/executor_integration_test.go
@@ -175,7 +175,7 @@ func TestExecutor_FullFlow_Success(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(k8sEnv.Log).
-		WithK8sClient(k8sEnv.Client).
+		WithTransportClient(k8sEnv.Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -309,7 +309,7 @@ func TestExecutor_PreconditionNotMet(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(k8sEnv.Log).
-		WithK8sClient(k8sEnv.Client).
+		WithTransportClient(k8sEnv.Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -414,7 +414,7 @@ func TestExecutor_PreconditionAPIFailure(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(k8sEnv.Log).
-		WithK8sClient(k8sEnv.Client).
+		WithTransportClient(k8sEnv.Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -532,7 +532,7 @@ func TestExecutor_CELExpressionEvaluation(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(getK8sEnvForTest(t).Log).
-		WithK8sClient(getK8sEnvForTest(t).Client).
+		WithTransportClient(getK8sEnvForTest(t).Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -578,7 +578,7 @@ func TestExecutor_MultipleMessages(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(getK8sEnvForTest(t).Log).
-		WithK8sClient(getK8sEnvForTest(t).Client).
+		WithTransportClient(getK8sEnvForTest(t).Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -631,7 +631,7 @@ func TestExecutor_Handler_Integration(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(getK8sEnvForTest(t).Log).
-		WithK8sClient(getK8sEnvForTest(t).Client).
+		WithTransportClient(getK8sEnvForTest(t).Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -681,7 +681,7 @@ func TestExecutor_Handler_PreconditionNotMet_ReturnsNil(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(getK8sEnvForTest(t).Log).
-		WithK8sClient(getK8sEnvForTest(t).Client).
+		WithTransportClient(getK8sEnvForTest(t).Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -714,7 +714,7 @@ func TestExecutor_ContextCancellation(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(getK8sEnvForTest(t).Log).
-		WithK8sClient(getK8sEnvForTest(t).Client).
+		WithTransportClient(getK8sEnvForTest(t).Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -750,7 +750,7 @@ func TestExecutor_MissingRequiredParam(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(getK8sEnvForTest(t).Log).
-		WithK8sClient(getK8sEnvForTest(t).Client).
+		WithTransportClient(getK8sEnvForTest(t).Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -816,7 +816,7 @@ func TestExecutor_InvalidEventJSON(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(getK8sEnvForTest(t).Log).
-		WithK8sClient(getK8sEnvForTest(t).Client).
+		WithTransportClient(getK8sEnvForTest(t).Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -868,7 +868,7 @@ func TestExecutor_MissingEventFields(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(getK8sEnvForTest(t).Log).
-		WithK8sClient(getK8sEnvForTest(t).Client).
+		WithTransportClient(getK8sEnvForTest(t).Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -1016,7 +1016,7 @@ func TestExecutor_LogAction(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(log).
-		WithK8sClient(getK8sEnvForTest(t).Client).
+		WithTransportClient(getK8sEnvForTest(t).Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -1085,7 +1085,7 @@ func TestExecutor_PostActionAPIFailure(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(getK8sEnvForTest(t).Log).
-		WithK8sClient(getK8sEnvForTest(t).Client).
+		WithTransportClient(getK8sEnvForTest(t).Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -1270,7 +1270,7 @@ func TestExecutor_ExecutionError_CELAccess(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(getK8sEnvForTest(t).Log).
-		WithK8sClient(getK8sEnvForTest(t).Client).
+		WithTransportClient(getK8sEnvForTest(t).Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
@@ -1414,7 +1414,7 @@ func TestExecutor_PayloadBuildFailure(t *testing.T) {
 		WithConfig(config).
 		WithAPIClient(apiClient).
 		WithLogger(log).
-		WithK8sClient(getK8sEnvForTest(t).Client).
+		WithTransportClient(getK8sEnvForTest(t).Client).
 		Build()
 	if err != nil {
 		t.Fatalf("Failed to create executor: %v", err)
diff --git a/test/integration/executor/executor_k8s_integration_test.go b/test/integration/executor/executor_k8s_integration_test.go
index 0bc52f8..a43fc8f 100644
--- a/test/integration/executor/executor_k8s_integration_test.go
+++ b/test/integration/executor/executor_k8s_integration_test.go
@@ -14,8 +14,8 @@ import (
 	"github.com/cloudevents/sdk-go/v2/event"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/config_loader"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/executor"
-	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/generation"
 	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/hyperfleet_api"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/manifest"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -341,7 +341,7 @@ func TestExecutor_K8s_CreateResources(t *testing.T) {
 	exec, err := executor.NewBuilder().
 		WithConfig(config).
 		WithAPIClient(apiClient).
-		WithK8sClient(k8sEnv.Client).
+		WithTransportClient(k8sEnv.Client).
 		WithLogger(k8sEnv.Log).
 		Build()
 	require.NoError(t, err)
@@ -370,7 +370,7 @@ func TestExecutor_K8s_CreateResources(t *testing.T) {
 
 	cmResult := result.ResourceResults[0]
 	assert.Equal(t, "clusterConfigMap", cmResult.Name)
 	assert.Equal(t, executor.StatusSuccess, cmResult.Status, "ConfigMap creation should succeed")
-	assert.Equal(t, generation.OperationCreate, cmResult.Operation, "Should be create operation")
+	assert.Equal(t, manifest.OperationCreate, cmResult.Operation, "Should be create operation")
 	assert.Equal(t, "ConfigMap", cmResult.Kind)
 	t.Logf("ConfigMap created: %s/%s (operation: %s)", cmResult.Namespace, cmResult.ResourceName, cmResult.Operation)
@@ -378,7 +378,7 @@ func TestExecutor_K8s_CreateResources(t *testing.T) {
 
 	secretResult := result.ResourceResults[1]
 	assert.Equal(t, "clusterSecret", secretResult.Name)
 	assert.Equal(t, executor.StatusSuccess, secretResult.Status, "Secret creation should succeed")
-	assert.Equal(t, generation.OperationCreate, secretResult.Operation)
+	assert.Equal(t, manifest.OperationCreate, secretResult.Operation)
 	assert.Equal(t, "Secret", secretResult.Kind)
 	t.Logf("Secret created: %s/%s (operation: %s)", secretResult.Namespace, secretResult.ResourceName, secretResult.Operation)
@@ -489,7 +489,7 @@ func TestExecutor_K8s_UpdateExistingResource(t *testing.T) {
 	exec, err := executor.NewBuilder().
 		WithConfig(config).
 		WithAPIClient(apiClient).
-		WithK8sClient(k8sEnv.Client).
+		WithTransportClient(k8sEnv.Client).
 		WithLogger(k8sEnv.Log).
 		Build()
 	require.NoError(t, err)
@@ -503,7 +503,7 @@ func TestExecutor_K8s_UpdateExistingResource(t *testing.T) {
 	// Verify it was an update operation
 	require.Len(t, result.ResourceResults, 1)
 	cmResult := result.ResourceResults[0]
-	assert.Equal(t, generation.OperationUpdate, cmResult.Operation, "Should be update operation")
+	assert.Equal(t, manifest.OperationUpdate, cmResult.Operation, "Should be update operation")
 	t.Logf("Resource operation: %s", cmResult.Operation)
 
 	// Verify ConfigMap was updated with new data
@@ -596,7 +596,7 @@ func TestExecutor_K8s_DiscoveryByLabels(t *testing.T) {
 	exec, err := executor.NewBuilder().
 		WithConfig(config).
 		WithAPIClient(apiClient).
-		WithK8sClient(k8sEnv.Client).
+		WithTransportClient(k8sEnv.Client).
 		WithLogger(k8sEnv.Log).
 		Build()
 	require.NoError(t, err)
@@ -607,14 +607,14 @@ func TestExecutor_K8s_DiscoveryByLabels(t *testing.T) {
 	evt := createK8sTestEvent(clusterId)
 	result1 := exec.Execute(ctx, evt)
 	require.Equal(t, executor.StatusSuccess, result1.Status)
-	assert.Equal(t, generation.OperationCreate, result1.ResourceResults[0].Operation)
+	assert.Equal(t, manifest.OperationCreate, result1.ResourceResults[0].Operation)
 	t.Logf("First execution: %s", result1.ResourceResults[0].Operation)
 
 	// Second execution - should find by labels and update
 	evt2 := createK8sTestEvent(clusterId)
 	result2 := exec.Execute(ctx, evt2)
 	require.Equal(t, executor.StatusSuccess, result2.Status)
-	assert.Equal(t, generation.OperationUpdate, result2.ResourceResults[0].Operation)
+	assert.Equal(t, manifest.OperationUpdate, result2.ResourceResults[0].Operation)
 	t.Logf("Second execution: %s (discovered by labels)", result2.ResourceResults[0].Operation)
 }
@@ -667,7 +667,7 @@ func TestExecutor_K8s_RecreateOnChange(t *testing.T) {
 	exec, err := executor.NewBuilder().
 		WithConfig(config).
 		WithAPIClient(apiClient).
-		WithK8sClient(k8sEnv.Client).
+		WithTransportClient(k8sEnv.Client).
 		WithLogger(k8sEnv.Log).
 		Build()
 	require.NoError(t, err)
@@ -678,7 +678,7 @@ func TestExecutor_K8s_RecreateOnChange(t *testing.T) {
 	evt := createK8sTestEvent(clusterId)
 	result1 := exec.Execute(ctx, evt)
 	require.Equal(t, executor.StatusSuccess, result1.Status)
-	assert.Equal(t, generation.OperationCreate, result1.ResourceResults[0].Operation)
+	assert.Equal(t, manifest.OperationCreate, result1.ResourceResults[0].Operation)
 
 	// Get the original UID
 	cmGVK := schema.GroupVersionKind{Group: "", Version: "v1", Kind: "ConfigMap"}
@@ -692,7 +692,7 @@ func TestExecutor_K8s_RecreateOnChange(t *testing.T) {
 	evt2 := createK8sTestEvent(clusterId)
 	result2 := exec.Execute(ctx, evt2)
 	require.Equal(t, executor.StatusSuccess, result2.Status)
-	assert.Equal(t, generation.OperationRecreate, result2.ResourceResults[0].Operation)
+	assert.Equal(t, manifest.OperationRecreate, result2.ResourceResults[0].Operation)
 	t.Logf("Second execution: %s", result2.ResourceResults[0].Operation)
 
 	// Verify it's a new resource (different UID)
@@ -725,7 +725,7 @@ func TestExecutor_K8s_MultipleResourceTypes(t *testing.T) {
 	exec, err := executor.NewBuilder().
 		WithConfig(config).
 		WithAPIClient(apiClient).
-		WithK8sClient(k8sEnv.Client).
+		WithTransportClient(k8sEnv.Client).
 		WithLogger(k8sEnv.Log).
 		Build()
 	require.NoError(t, err)
@@ -741,7 +741,7 @@ func TestExecutor_K8s_MultipleResourceTypes(t *testing.T) {
 
 	// Verify both resources created
 	for _, rr := range result.ResourceResults {
 		assert.Equal(t, executor.StatusSuccess, rr.Status, "Resource %s should succeed", rr.Name)
-		assert.Equal(t, generation.OperationCreate, rr.Operation)
+		assert.Equal(t, manifest.OperationCreate, rr.Operation)
 		t.Logf("Created %s: %s/%s", rr.Kind, rr.Namespace, rr.ResourceName)
 	}
@@ -773,7 +773,7 @@ func TestExecutor_K8s_ResourceCreationFailure(t *testing.T) {
 	exec, err := executor.NewBuilder().
 		WithConfig(config).
 		WithAPIClient(apiClient).
-		WithK8sClient(k8sEnv.Client).
+		WithTransportClient(k8sEnv.Client).
 		WithLogger(k8sEnv.Log).
 		Build()
 	require.NoError(t, err)
@@ -928,7 +928,7 @@ func TestExecutor_K8s_MultipleMatchingResources(t *testing.T) {
 	exec, err := executor.NewBuilder().
 		WithConfig(config).
 		WithAPIClient(apiClient).
-		WithK8sClient(k8sEnv.Client).
+		WithTransportClient(k8sEnv.Client).
 		WithLogger(k8sEnv.Log).
 		Build()
 	require.NoError(t, err)
@@ -942,7 +942,7 @@ func TestExecutor_K8s_MultipleMatchingResources(t *testing.T) {
 
 	// Should create a new resource (no discovery configured)
 	rr := result.ResourceResults[0]
-	assert.Equal(t, generation.OperationCreate, rr.Operation,
+	assert.Equal(t, manifest.OperationCreate, rr.Operation,
 		"Should create new resource (no discovery configured)")
 	t.Logf("Operation: %s on resource: %s/%s", rr.Operation, rr.Namespace, rr.ResourceName)
@@ -1000,7 +1000,7 @@ func TestExecutor_K8s_PostActionsAfterPreconditionNotMet(t *testing.T) {
 	exec, err := executor.NewBuilder().
 		WithConfig(config).
 		WithAPIClient(apiClient).
-		WithK8sClient(k8sEnv.Client).
+		WithTransportClient(k8sEnv.Client).
 		WithLogger(k8sEnv.Log).
 		Build()
 	require.NoError(t, err)
diff --git a/test/integration/executor/executor_maestro_integration_test.go b/test/integration/executor/executor_maestro_integration_test.go
new file mode 100644
index 0000000..63ff4f0
--- /dev/null
+++ b/test/integration/executor/executor_maestro_integration_test.go
@@ -0,0 +1,601 @@
+package executor_integration_test
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"net/http"
+	"net/http/httptest"
+	"sync"
+	"testing"
+	"time"
+
+	"github.com/cloudevents/sdk-go/v2/event"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/config_loader"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/executor"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/hyperfleet_api"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/maestro_client"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/internal/manifest"
+	"github.com/openshift-hyperfleet/hyperfleet-adapter/pkg/constants"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+// =============================================================================
+// Maestro Integration Test Helpers
+// =============================================================================
+
+// maestroTestAPIServer creates a mock HyperFleet API server for Maestro integration tests
+type maestroTestAPIServer struct {
+	server          *httptest.Server
+	mu              sync.Mutex
+	requests        []maestroTestRequest
+	clusterResponse map[string]interface{}
+	statusResponses []map[string]interface{}
+}
+
+type maestroTestRequest struct {
+	Method string
+	Path   string
+	Body   string
+}
+
+func newMaestroTestAPIServer(t *testing.T) *maestroTestAPIServer {
+	mock := &maestroTestAPIServer{
+		requests: make([]maestroTestRequest, 0),
+		clusterResponse: map[string]interface{}{
+			"metadata": map[string]interface{}{
+				"name": "test-cluster",
+			},
+			"spec": map[string]interface{}{
+				"region":     "us-west-2",
+				"provider":   "gcp",
+				"node_count": 5,
+			},
+			"status": map[string]interface{}{
+				"conditions": []map[string]interface{}{
+					{
+						"type":   "Ready",
+						"status": "True",
+					},
+				},
+			},
+		},
+		statusResponses: make([]map[string]interface{}, 0),
+	}
+
+	mock.server = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		mock.mu.Lock()
+		defer mock.mu.Unlock()
+
+		var bodyStr string
+		if r.Body != nil {
+			buf := make([]byte, 1024*1024)
+			n, _ := r.Body.Read(buf)
+			bodyStr = string(buf[:n])
+		}
+
+		mock.requests = append(mock.requests, maestroTestRequest{
+			Method: r.Method,
+			Path:   r.URL.Path,
+			Body:   bodyStr,
+		})
+
+		t.Logf("Mock API: %s %s", r.Method, r.URL.Path)
+
+		switch {
+		case r.Method == http.MethodPost && r.URL.Path != "":
+			var statusBody map[string]interface{}
+			if err := json.Unmarshal([]byte(bodyStr), &statusBody); err == nil {
+				mock.statusResponses = append(mock.statusResponses, statusBody)
+			}
+			w.WriteHeader(http.StatusOK)
+			_ = json.NewEncoder(w).Encode(map[string]string{"status": "accepted"})
+			return
+		case r.Method == http.MethodGet:
+			w.WriteHeader(http.StatusOK)
+			_ = json.NewEncoder(w).Encode(mock.clusterResponse)
+			return
+		}
+
+		w.WriteHeader(http.StatusNotFound)
+		_ = json.NewEncoder(w).Encode(map[string]string{"error": "not found"})
+	}))
+
+	return mock
+}
+
+func (m *maestroTestAPIServer) Close() {
+	m.server.Close()
+}
+
+func (m *maestroTestAPIServer) URL() string {
+	return m.server.URL
+}
+
+func (m *maestroTestAPIServer) GetStatusResponses() []map[string]interface{} {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+	return append([]map[string]interface{}{}, m.statusResponses...)
+}
+
+// createMaestroTestEvent creates a CloudEvent for Maestro integration testing
+func createMaestroTestEvent(clusterId string) *event.Event {
+	evt := event.New()
+	// Use the clusterId as the CloudEvent ID so that "event.id" parameter extraction works
+	evt.SetID(clusterId)
+	evt.SetType("com.redhat.hyperfleet.cluster.provision")
+	evt.SetSource("maestro-integration-test")
+	evt.SetTime(time.Now())
+
+	eventData := map[string]interface{}{
+		"id":            clusterId,
+		"resource_type": "cluster",
+		"generation":    "gen-001",
+		"href":          "/api/v1/clusters/" + clusterId,
+	}
+	eventDataBytes, _ := json.Marshal(eventData)
+	_ = evt.SetData(event.ApplicationJSON, eventDataBytes)
+
+	return &evt
+}
+
+// createMaestroTestConfig creates a unified Config with Maestro resources
+func createMaestroTestConfig(apiBaseURL, targetCluster string) *config_loader.Config {
+	return &config_loader.Config{
+		APIVersion: config_loader.APIVersionV1Alpha1,
+		Kind:       config_loader.ExpectedKindConfig,
+		Metadata: config_loader.Metadata{
+			Name: "maestro-test-adapter",
+		},
+		Spec: config_loader.ConfigSpec{
+			Adapter: config_loader.AdapterInfo{
+				Version: "1.0.0",
+			},
+			Clients: config_loader.ClientsConfig{
+				HyperfleetAPI: config_loader.HyperfleetAPIConfig{
+					Timeout:       10 * time.Second,
+					RetryAttempts: 1,
+					RetryBackoff:  hyperfleet_api.BackoffConstant,
+				},
+			},
+			Params: []config_loader.Parameter{
+				{
+					Name:     "hyperfleetApiBaseUrl",
+					Source:   "env.HYPERFLEET_API_BASE_URL",
+					Required: true,
+				},
+				{
+					Name:     "hyperfleetApiVersion",
+					Source:   "env.HYPERFLEET_API_VERSION",
+					Default:  "v1",
+					Required: false,
+				},
+				{
+					Name:     "clusterId",
+					Source:   "event.id",
+					Required: true,
+				},
+				{
+					Name:    "targetCluster",
+					Default: targetCluster,
+				},
+			},
+			Preconditions: []config_loader.Precondition{
+				{
+					ActionBase: config_loader.ActionBase{
+						Name: "clusterStatus",
+						APICall: &config_loader.APICall{
+							Method:  "GET",
+							URL:     "{{ .hyperfleetApiBaseUrl }}/api/{{ .hyperfleetApiVersion }}/clusters/{{ .clusterId }}",
+							Timeout: "5s",
+						},
+					},
+					Capture: []config_loader.CaptureField{
+						{Name: "clusterName", FieldExpressionDef: config_loader.FieldExpressionDef{Field: "metadata.name"}},
+						{Name: "region", FieldExpressionDef: config_loader.FieldExpressionDef{Field: "spec.region"}},
+						{Name: "cloudProvider", FieldExpressionDef: config_loader.FieldExpressionDef{Field: "spec.provider"}},
+					},
+					Conditions: []config_loader.Condition{
+						{Field: "metadata.name", Operator: "exists"},
+					},
+				},
+			},
+			// Maestro Resources with multiple manifests
+			Resources: []config_loader.Resource{
+				{
+					Name: "clusterResources",
+					Transport: &config_loader.TransportConfig{
+						Client: config_loader.TransportClientMaestro,
+						Maestro: &config_loader.MaestroTransportConfig{
+							TargetCluster: "{{ .targetCluster }}",
+							ManifestWork: &config_loader.ManifestWorkConfig{
+								Name: "cluster-{{ .clusterId }}",
+							},
+						},
+					},
+					Manifests: []config_loader.NamedManifest{
+						{
+							Name: "namespace",
+							Manifest: map[string]interface{}{
+								"apiVersion": "v1",
+								"kind":       "Namespace",
+								"metadata": map[string]interface{}{
+									"name": "cluster-{{ .clusterId }}",
+									"labels": map[string]interface{}{
+										"hyperfleet.io/cluster-id": "{{ .clusterId }}",
+										"hyperfleet.io/managed-by": "{{ .metadata.name }}",
+									},
+									"annotations": map[string]interface{}{
+										constants.AnnotationGeneration: "1",
+									},
+								},
+							},
+						},
+						{
+							Name: "configMap",
+							Manifest: map[string]interface{}{
+								"apiVersion": "v1",
+								"kind":       "ConfigMap",
+								"metadata": map[string]interface{}{
+									"name":      "cluster-config",
+									"namespace": "cluster-{{ .clusterId }}",
+									"labels": map[string]interface{}{
+										"hyperfleet.io/cluster-id": "{{ .clusterId }}",
+									},
+									"annotations": map[string]interface{}{
+										constants.AnnotationGeneration: "1",
+									},
+								},
+								"data": map[string]interface{}{
+									"cluster-id":   "{{ .clusterId }}",
+									"cluster-name": "{{ .clusterName }}",
+									"region":       "{{ .region }}",
+									"provider":     "{{ .cloudProvider }}",
+								},
+							},
+						},
+					},
+					Discovery: &config_loader.DiscoveryConfig{
+						ByName: "cluster-{{ .clusterId }}",
+					},
+				},
+			},
+		},
+	}
+}
+
+// =============================================================================
+// Maestro Integration Tests (using MockMaestroClient)
+// =============================================================================
+
+// TestExecutor_Maestro_CreateMultipleManifests tests creating a ManifestWork with multiple manifests
+func TestExecutor_Maestro_CreateMultipleManifests(t *testing.T) {
+	// Setup mock API server
+	mockAPI := newMaestroTestAPIServer(t)
+	defer mockAPI.Close()
+
+	// Setup mock Maestro client
+	mockMaestro := maestro_client.NewMockMaestroClient()
+
+	// Set environment variables
+	t.Setenv("HYPERFLEET_API_BASE_URL", mockAPI.URL())
+	t.Setenv("HYPERFLEET_API_VERSION", "v1")
+
+	// Create config with Maestro resources
+	targetCluster := "test-target-cluster"
+	config := createMaestroTestConfig(mockAPI.URL(), targetCluster)
+
+	apiClient, err := hyperfleet_api.NewClient(testLog(),
+		hyperfleet_api.WithTimeout(10*time.Second),
+		hyperfleet_api.WithRetryAttempts(1),
+	)
+	require.NoError(t, err)
+
+	// Create executor with mock Maestro client as transport
+	exec, err := executor.NewBuilder().
+		WithConfig(config).
+		WithAPIClient(apiClient).
+		WithTransportClient(mockMaestro).
+		WithLogger(testLog()).
+		Build()
+	require.NoError(t, err)
+
+	// Create test event
+	clusterId := fmt.Sprintf("maestro-cluster-%d", time.Now().UnixNano())
+	evt := createMaestroTestEvent(clusterId)
+
+	// Execute
+	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
+	defer cancel()
+
+	result := exec.Execute(ctx, evt)
+
+	// Verify execution succeeded
+	require.Equal(t, executor.StatusSuccess, result.Status, "Expected success status, got %s: errors=%v", result.Status, result.Errors)
+
+	// Verify resource results
+	require.Len(t, result.ResourceResults, 1, "Expected 1 resource result for clusterResources")
+
+	rr := result.ResourceResults[0]
+	assert.Equal(t, "clusterResources", rr.Name)
+	assert.Equal(t, executor.StatusSuccess, rr.Status)
+	assert.Equal(t, manifest.OperationCreate, rr.Operation)
+	t.Logf("Resource %s: operation=%s", rr.Name, rr.Operation)
+
+	// Verify ManifestWork was created with multiple manifests
+	appliedWorks := mockMaestro.GetAppliedWorks()
+	require.Len(t, appliedWorks, 1, "Expected 1 ManifestWork to be applied")
+
+	work := appliedWorks[0]
+	assert.Equal(t, fmt.Sprintf("cluster-%s", clusterId), work.GetName())
+	assert.Len(t, work.Spec.Workload.Manifests, 2, "ManifestWork should contain 2 manifests")
+
+	// Verify correct target cluster was used
+	consumers := mockMaestro.GetApplyConsumers()
+	require.Len(t, consumers, 1)
+	assert.Equal(t, targetCluster, consumers[0])
+
+	t.Logf("ManifestWork applied: name=%s targetCluster=%s manifestCount=%d",
+		work.GetName(), consumers[0], len(work.Spec.Workload.Manifests))
+}
+
+// TestExecutor_Maestro_TemplateRendering tests that template variables are rendered in all manifests
+func TestExecutor_Maestro_TemplateRendering(t *testing.T) {
+	mockAPI := newMaestroTestAPIServer(t)
+	defer mockAPI.Close()
+
+	mockMaestro := maestro_client.NewMockMaestroClient()
+
+	t.Setenv("HYPERFLEET_API_BASE_URL", mockAPI.URL())
+	t.Setenv("HYPERFLEET_API_VERSION", "v1")
+
+	config := createMaestroTestConfig(mockAPI.URL(), "template-test-cluster")
+
+	apiClient, err := hyperfleet_api.NewClient(testLog())
+	require.NoError(t, err)
+
+	exec, err := executor.NewBuilder().
+		WithConfig(config).
+		WithAPIClient(apiClient).
+		WithTransportClient(mockMaestro).
+		WithLogger(testLog()).
+		Build()
+	require.NoError(t, err)
+
+	clusterId := fmt.Sprintf("template-cluster-%d", time.Now().UnixNano())
+	evt := createMaestroTestEvent(clusterId)
+
+	result := exec.Execute(context.Background(), evt)
+	require.Equal(t, executor.StatusSuccess, result.Status)
+
+	// Verify templates were rendered in manifests
+	appliedWorks := mockMaestro.GetAppliedWorks()
+	require.Len(t, appliedWorks, 1)
+
+	work := appliedWorks[0]
+
+	// Check the ManifestWork name was rendered
+	expectedName := fmt.Sprintf("cluster-%s", clusterId)
+	assert.Equal(t, expectedName, work.GetName(), "ManifestWork name should be rendered from template")
+
+	// Verify manifests contain rendered values (by checking raw manifest data)
+	require.Len(t, work.Spec.Workload.Manifests, 2)
+
+	// Parse first manifest (Namespace)
+	var nsManifest map[string]interface{}
+	err = json.Unmarshal(work.Spec.Workload.Manifests[0].Raw, &nsManifest)
+	require.NoError(t, err)
+
+	nsMetadata := nsManifest["metadata"].(map[string]interface{})
+	assert.Equal(t, fmt.Sprintf("cluster-%s", clusterId), nsMetadata["name"],
+		"Namespace name should be rendered")
+
+	nsLabels := nsMetadata["labels"].(map[string]interface{})
+	assert.Equal(t, clusterId, nsLabels["hyperfleet.io/cluster-id"],
+		"Namespace label should contain rendered cluster ID")
+
+	// Parse second manifest (ConfigMap)
+	var cmManifest map[string]interface{}
+	err = json.Unmarshal(work.Spec.Workload.Manifests[1].Raw, &cmManifest)
+	require.NoError(t, err)
+
+	cmMetadata := cmManifest["metadata"].(map[string]interface{})
+	assert.Equal(t, fmt.Sprintf("cluster-%s", clusterId), cmMetadata["namespace"],
+		"ConfigMap namespace should be rendered")
+
+	cmData := cmManifest["data"].(map[string]interface{})
+	assert.Equal(t, clusterId, cmData["cluster-id"],
+		"ConfigMap data should contain rendered cluster ID")
+	assert.Equal(t, "test-cluster", cmData["cluster-name"],
+		"ConfigMap data should contain captured cluster name from precondition")
+	assert.Equal(t, "us-west-2", cmData["region"],
+		"ConfigMap data should contain captured region from precondition")
+
+	t.Logf("Template rendering verified for ManifestWork: %s", work.GetName())
+}
+
+// TestExecutor_Maestro_ManifestWorkNaming tests ManifestWork naming behavior
+func TestExecutor_Maestro_ManifestWorkNaming(t *testing.T) {
+	t.Run("uses configured manifestWork name with templates", func(t *testing.T) {
+		mockAPI := newMaestroTestAPIServer(t)
+		defer mockAPI.Close()
+
+		mockMaestro := maestro_client.NewMockMaestroClient()
+
+		t.Setenv("HYPERFLEET_API_BASE_URL", mockAPI.URL())
+		t.Setenv("HYPERFLEET_API_VERSION", "v1")
+
+		config := createMaestroTestConfig(mockAPI.URL(), "naming-test-cluster")
+
+		apiClient, err := hyperfleet_api.NewClient(testLog())
+		require.NoError(t, err)
+
+		exec, err := executor.NewBuilder().
+			WithConfig(config).
+			WithAPIClient(apiClient).
+			WithTransportClient(mockMaestro).
+			WithLogger(testLog()).
+			Build()
+		require.NoError(t, err)
+
+		clusterId := "naming-test-123"
+		evt := createMaestroTestEvent(clusterId)
+
+		result := exec.Execute(context.Background(), evt)
+		require.Equal(t, executor.StatusSuccess, result.Status)
+
+		appliedWorks := mockMaestro.GetAppliedWorks()
+		require.Len(t, appliedWorks, 1)
+
+		// Config specifies: manifestWork.name = "cluster-{{ .clusterId }}"
+		assert.Equal(t, "cluster-naming-test-123", appliedWorks[0].GetName())
+	})
+
+	t.Run("generates name when manifestWork name not configured", func(t *testing.T) {
+		mockAPI := newMaestroTestAPIServer(t)
+		defer mockAPI.Close()
+
+		mockMaestro := maestro_client.NewMockMaestroClient()
+
+		t.Setenv("HYPERFLEET_API_BASE_URL", mockAPI.URL())
+
+		// Config without manifestWork.name configured
+		config := createMaestroTestConfig(mockAPI.URL(), "auto-name-cluster")
+		// Remove the manifestWork name
+		config.Spec.Resources[0].Transport.Maestro.ManifestWork = nil
+
+		apiClient, err := hyperfleet_api.NewClient(testLog())
+		require.NoError(t, err)
+
+		exec, err := executor.NewBuilder().
+			WithConfig(config).
+			WithAPIClient(apiClient).
+			WithTransportClient(mockMaestro).
+			WithLogger(testLog()).
+			Build()
+		require.NoError(t, err)
+
+		clusterId := "auto-name-456"
+		evt := createMaestroTestEvent(clusterId)
+
+		result := exec.Execute(context.Background(), evt)
+		require.Equal(t, executor.StatusSuccess, result.Status)
+
+		appliedWorks := mockMaestro.GetAppliedWorks()
+		require.Len(t, appliedWorks, 1)
+
+		// Name should be generated: {resourceName}-{firstManifestName}
+		expectedName := fmt.Sprintf("clusterResources-cluster-%s", clusterId)
+		assert.Equal(t, expectedName, appliedWorks[0].GetName(),
+			"ManifestWork name should be auto-generated from resource name and first manifest name")
+	})
+}
+
+// TestExecutor_Maestro_ErrorHandling tests error handling when Maestro client fails
+func TestExecutor_Maestro_ErrorHandling(t *testing.T) {
+	mockAPI := newMaestroTestAPIServer(t)
+	defer mockAPI.Close()
+
+	mockMaestro := maestro_client.NewMockMaestroClient()
+	// Configure mock to return an error
+	mockMaestro.ApplyManifestWorkError = fmt.Errorf("maestro server unavailable: connection refused")
+
+	t.Setenv("HYPERFLEET_API_BASE_URL", mockAPI.URL())
+	t.Setenv("HYPERFLEET_API_VERSION", "v1")
+
+	config := createMaestroTestConfig(mockAPI.URL(), "error-test-cluster")
+
+	apiClient, err := hyperfleet_api.NewClient(testLog())
+	require.NoError(t, err)
+
+	exec, err := executor.NewBuilder().
+		WithConfig(config).
+		WithAPIClient(apiClient).
+		WithTransportClient(mockMaestro).
+		WithLogger(testLog()).
+		Build()
+	require.NoError(t, err)
+
+	evt := createMaestroTestEvent("error-test-cluster")
+
+	result := exec.Execute(context.Background(), evt)
+
+	// Execution should fail
+	assert.Equal(t, executor.StatusFailed, result.Status)
+
+	// Resource result should show failure
+	require.Len(t, result.ResourceResults, 1)
+	rr := result.ResourceResults[0]
+	assert.Equal(t, executor.StatusFailed, rr.Status)
+	assert.NotNil(t, rr.Error)
+	assert.Contains(t, rr.Error.Error(), "connection refused")
+
+	// Error should be present in result
+	require.NotEmpty(t, result.Errors)
+	resourceError, ok := result.Errors[executor.PhaseResources]
+	require.True(t, ok, "Should have error in resources phase")
+	assert.Contains(t, resourceError.Error(), "failed to apply ManifestWork")
+
+	t.Logf("Error handling verified: %v", rr.Error)
+}
+
+// TestExecutor_Maestro_ManifestsStoredInContext verifies manifests are stored in execution context
+func TestExecutor_Maestro_ManifestsStoredInContext(t *testing.T) {
+	mockAPI := newMaestroTestAPIServer(t)
+	defer mockAPI.Close()
+
+	mockMaestro := maestro_client.NewMockMaestroClient()
+
+	t.Setenv("HYPERFLEET_API_BASE_URL", mockAPI.URL())
+	t.Setenv("HYPERFLEET_API_VERSION", "v1")
+
+	config := createMaestroTestConfig(mockAPI.URL(), "context-test-cluster")
+
+	apiClient, err := hyperfleet_api.NewClient(testLog())
+	require.NoError(t, err)
+
+	exec, err := executor.NewBuilder().
+		WithConfig(config).
+		WithAPIClient(apiClient).
+		WithTransportClient(mockMaestro).
+		WithLogger(testLog()).
+		Build()
+	require.NoError(t, err)
+
+	clusterId := "context-test-789"
+	evt := createMaestroTestEvent(clusterId)
+
+	result := exec.Execute(context.Background(), evt)
+	require.Equal(t, executor.StatusSuccess, result.Status)
+
+	// Verify execution context contains stored manifests
+	require.NotNil(t, result.ExecutionContext)
+	resources := result.ExecutionContext.Resources
+
+	// Manifests should be stored by compound name: {resourceName}.{manifestName}
+	assert.NotNil(t, resources["clusterResources.namespace"],
+		"Namespace manifest should be stored as clusterResources.namespace")
+	assert.NotNil(t, resources["clusterResources.configMap"],
+		"ConfigMap manifest should be stored as clusterResources.configMap")
+
+	// First manifest should also be stored under resource name
+	assert.NotNil(t, resources["clusterResources"],
+		"First manifest should also be stored under resource name")
+
+	// Verify stored manifest content
+	nsManifest := resources["clusterResources.namespace"]
+	assert.Equal(t, "Namespace", nsManifest.GetKind())
+	assert.Equal(t, fmt.Sprintf("cluster-%s", clusterId), nsManifest.GetName())
+
+	cmManifest := resources["clusterResources.configMap"]
+	assert.Equal(t, "ConfigMap", cmManifest.GetKind())
+	assert.Equal(t, "cluster-config", cmManifest.GetName())
+
+	// Log the keys
+	var keys []string
+	for k := range resources {
+		keys = append(keys, k)
+	}
+	t.Logf("Manifests stored in context: %v", keys)
+}