
Commit 5fc149d

Convert .md file to .adoc file under the Learn content type
1 parent: 9379f5d

16 files changed, +486 -401 lines
File renamed without changes.
Lines changed: 17 additions & 15 deletions
@@ -3,12 +3,11 @@ menu: learn
 title: About Validated Patterns
 weight: 10
 ---
-
-# About Validated Patterns
+:toc:
 
 Validated Patterns and upstream Community Patterns are a natural progression from reference architectures with additional value. Here is a brief video to explain what patterns are all about:
 
-[![patterns-intro-video](https://img.youtube.com/vi/lI8TurakeG4/0.jpg)](https://www.youtube.com/watch?v=lI8TurakeG4)
+image::https://img.youtube.com/vi/lI8TurakeG4/0.jpg[patterns-intro-video,link=https://www.youtube.com/watch?v=lI8TurakeG4]
 
 This effort is focused on customer solutions that involve multiple Red Hat
 products. The patterns include one or more applications that are based on successfully deployed customer examples. Example application code is provided as a demonstration, along with the various open source projects and Red Hat products required for the deployment to work. Users can then modify the pattern for their own specific application.
@@ -17,40 +16,43 @@ How do we select and produce a pattern? We look for novel customer use cases, ob
 
 The automation also enables the solution to be added to Continuous Integration (CI), with triggers for new product versions (including betas), so that we can proactively find and fix breakage and avoid bit-rot.
 
-## Who should use these patterns?
+[id="who-should-use-these-patterns"]
+== Who should use these patterns?
 
-It is recommended that architects or advanced developers with knowledge of Kubernetes and Red Hat OpenShift Container Platform use these patterns. There are advanced [Cloud Native](https://www.cncf.io/projects/) concepts and projects deployed as part of the pattern framework. These include, but are not limited to, OpenShift GitOps ([ArgoCD](https://argoproj.github.io/argo-cd/)), Advanced Cluster Management ([Open Cluster Management](https://open-cluster-management.io/)), and OpenShift Pipelines ([Tekton](https://tekton.dev/)).
+It is recommended that architects or advanced developers with knowledge of Kubernetes and Red Hat OpenShift Container Platform use these patterns. There are advanced https://www.cncf.io/projects/[Cloud Native] concepts and projects deployed as part of the pattern framework. These include, but are not limited to, OpenShift GitOps (https://argoproj.github.io/argo-cd/[ArgoCD]), Advanced Cluster Management (https://open-cluster-management.io/[Open Cluster Management]), and OpenShift Pipelines (https://tekton.dev/[Tekton]).
 
-## General Structure
+[id="general-structure"]
+== General Structure
 
-All patterns assume an OpenShift cluster is available to deploy the application(s) that are part of the pattern. If you do not have an OpenShift cluster, you can use [cloud.redhat.com](https://console.redhat.com/openshift).
+All patterns assume an OpenShift cluster is available to deploy the application(s) that are part of the pattern. If you do not have an OpenShift cluster, you can use https://console.redhat.com/openshift[cloud.redhat.com].
 
 The documentation will use the `oc` command syntax but `kubectl` can be used interchangeably. For each deployment it is assumed that the user is logged into a cluster using the `oc login` command or by exporting the `KUBECONFIG` path.
 
 The diagram below outlines the general deployment flow of a datacenter application.
 
 But first the user must create a fork of the pattern repository. This allows changes to be made to operational elements (configurations, etc.) and to application code, which can then be pushed to the forked repository for DevOps continuous integration (CI). Clone the directory to your laptop/desktop. Future changes can be pushed to your fork.
 
-![GitOps for Datacenter](/images/gitops-datacenter.png)
-
-1. Make a copy of the values file. There may be one or more values files, for example `values-global.yaml` and/or `values-datacenter.yaml`. While most of these values allow you to specify subscriptions, operators, applications and other application specifics, there are also *secrets* which may include encrypted keys or user IDs and passwords. It is important that you make a copy and **do not push your personal values file to a repository accessible to others!**
+image::/images/gitops-datacenter.png[GitOps for Datacenter]
 
-2. Deploy the application as specified by the pattern. This may include a Helm command (`helm install`) or a make command (`make deploy`).
+. Make a copy of the values file. There may be one or more values files, for example `values-global.yaml` and/or `values-datacenter.yaml`. While most of these values allow you to specify subscriptions, operators, applications and other application specifics, there are also _secrets_ which may include encrypted keys or user IDs and passwords. It is important that you make a copy and *do not push your personal values file to a repository accessible to others!*
+. Deploy the application as specified by the pattern. This may include a Helm command (`helm install`) or a make command (`make deploy`).
 
 When the workload is deployed the pattern first deploys OpenShift GitOps. OpenShift GitOps will then take over and make sure that all applications and components of the pattern are deployed. This includes required operators and application code.
 
 Most patterns will have an Advanced Cluster Management operator deployed so that multi-cluster deployments can be managed.
 
-## Edge Patterns
+[id="edge-patterns"]
+== Edge Patterns
 
 Some patterns include both a data center and one or more edge clusters. The diagram below outlines the general deployment flow of applications on an edge cluster. The edge OpenShift cluster is often deployed on a smaller cluster than the datacenter. Sometimes this might be a three-node cluster that allows workloads to be deployed on the master nodes. The edge cluster might be a single node cluster (SNO). It might be deployed on bare metal, on local virtual machines, or in a public/private cloud. Provision the cluster (see above).
 
-![GitOps for Edge](/images/gitops-edge.png)
+image::/images/gitops-edge.png[GitOps for Edge]
 
-3. Import/join the cluster to the hub/data center. Instructions for importing the cluster can be found [here]. You're done.
+. Import/join the cluster to the hub/data center. Instructions for importing the cluster can be found [here]. You're done.
 
 When the cluster is imported, ACM on the datacenter will deploy an ACM agent and agent-addon pod into the edge cluster. Once installed and running, ACM will then deploy OpenShift GitOps onto the cluster. Then OpenShift GitOps will deploy whatever applications are required for that cluster based on a label.
 
-## OpenShift GitOps (a.k.a. ArgoCD)
+[id="openshift-gitops-argocd"]
+== OpenShift GitOps (a.k.a. ArgoCD)
 
 When OpenShift GitOps is deployed and running in a cluster (datacenter or edge) you can launch its console by choosing ArgoCD in the upper left part of the OpenShift Console (TO-DO whenry to add an image and clearer instructions here)
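For orientation, the datacenter and edge flow described in the page above can be sketched as a handful of commands. This is an illustrative sketch only: the fork URL, API endpoint, user, cluster name, and `clusterGroup` label value are placeholders, and the exact make targets and label keys vary from pattern to pattern.

```bash
# Illustrative sketch of the flow described above; all names are placeholders.

# 1. Fork the pattern repository on GitHub, then clone your fork.
git clone git@github.com:YOUR-ORG/industrial-edge.git   # hypothetical fork
cd industrial-edge

# 2. Copy and personalize the values file(s); never push personalized
#    values (they may contain secrets) to a repository others can read.
cp values-global.yaml values-global.yaml.orig            # keep an untouched copy
"${EDITOR:-vi}" values-global.yaml

# 3. Log in to the hub (datacenter) cluster, or point KUBECONFIG at it.
oc login https://api.hub.example.com:6443 -u kubeadmin   # placeholder API URL/user
# export KUBECONFIG=/path/to/hub-kubeconfig

# 4. Deploy the pattern; depending on the pattern this is a make target
#    or a helm install, as noted above.
make deploy
# helm install my-pattern . -f values-global.yaml        # alternative, pattern-dependent

# 5. (Edge patterns) After importing the edge cluster into ACM on the hub,
#    a label on the ManagedCluster selects what GitOps deploys there;
#    the label key/value here is hypothetical.
oc get managedclusters
oc label managedcluster my-edge-cluster clusterGroup=factory --overwrite
```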

content/learn/community.adoc

Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
+---
+menu:
+  learn:
+    parent: Workflow
+title: Community Patterns
+weight: 42
+aliases: /requirements/community/
+---
+
+:toc:
+
+= Community Pattern Requirements
+
+[id="tldr"]
+== tl;dr
+
+* *What are they:* Best practice implementations conforming to the Validated Patterns implementation practices
+* *Purpose:* Codify best practices and promote collaboration between different groups inside, and external to, Red Hat
+* *Creator:* Customers, Partners, GSIs, Services/Consultants, SAs, and other Red Hat teams
+
+[id="requirements"]
+== Requirements
+
+General requirements for all Community and Validated Patterns
+
+[id="base"]
+=== Base
+
+. Patterns *MUST* include a top-level README highlighting the business problem and how the pattern solves it
+. Patterns *MUST* include an architecture drawing. The specific tool/format is flexible as long as the meaning is clear.
+. Patterns *MUST* undergo an informal architecture review by a community leader to ensure that the solution has the right products, and they are generally being used as intended.
++
+For example: not using a database as a message bus.
+As community leaders, we may subject contributions from within Red Hat to a higher level of scrutiny.
+While we strive to be inclusive, the community will have quality standards, and using the framework does not automatically imply a solution is suitable for the community to endorse/publish.
+
+. Patterns *MUST* undergo an informal technical review by a community leader to ensure that they conform to the link:/requirements/implementation/[technical requirements] and meet basic reuse standards
+. Patterns *MUST* document their support policy
++
+It is anticipated that most community patterns will be supported by the community on a best-effort basis, but this should be stated explicitly.
+The Validated Patterns team commits to maintaining the framework but will also accept help.
+
+. Patterns SHOULD include a recorded demo highlighting the business problem and how the pattern solves it

content/learn/community.md

Lines changed: 0 additions & 38 deletions
This file was deleted.
Lines changed: 27 additions & 19 deletions
@@ -5,55 +5,63 @@ weight: 90
 aliases: /faq/
 ---
 
-# FAQ
+:toc:
 
-## What is a Hybrid Cloud Pattern?
+= FAQ
+
+[id="what-is-a-hybrid-cloud-pattern"]
+== What is a Hybrid Cloud Pattern?
 
 Hybrid Cloud Patterns are collections of applications (in the ArgoCD sense) that demonstrate aspects of hub/edge computing that seem interesting and useful. Hybrid Cloud Patterns will generally have a hub or centralized component, and an edge component. These will interact in different ways.
 
 Many things have changed in the IT landscape in the last few years - containers and Kubernetes have taken the industry by storm, but they introduce many technologies and concepts. It is not always clear how these technologies and concepts play together - and Hybrid Cloud Patterns is our effort to show these technologies working together on non-trivial applications in ways that make sense for real customers and partners to use.
 
-The first Hybrid Cloud Pattern is based on [MANUela](https://github.com/sa-mw-dach/manuela), an application developed by Red Hat field associates. This application highlights some interesting aspects of the industrial edge in a cloud-native world - the hub component features pipelines to build the application, a "twin" for testing purposes, a central data lake, and an S3 component to gather data from the edge installations (which are factories in this case). The edge component has machine sensors, which are responsible only for gathering data from instrumented line devices and sharing it via MQTT messaging. The edge also features Seldon, an AI/ML framework for making predictions, a custom Node.js application to show data in real time, and messaging components supporting both MQTT and Kafka protocols. The local applications use MQTT to retrieve data for display, and the Kafka components move the data to the central hub for storage and analysis.
+The first Hybrid Cloud Pattern is based on https://github.com/sa-mw-dach/manuela[MANUela], an application developed by Red Hat field associates. This application highlights some interesting aspects of the industrial edge in a cloud-native world - the hub component features pipelines to build the application, a "twin" for testing purposes, a central data lake, and an S3 component to gather data from the edge installations (which are factories in this case). The edge component has machine sensors, which are responsible only for gathering data from instrumented line devices and sharing it via MQTT messaging. The edge also features Seldon, an AI/ML framework for making predictions, a custom Node.js application to show data in real time, and messaging components supporting both MQTT and Kafka protocols. The local applications use MQTT to retrieve data for display, and the Kafka components move the data to the central hub for storage and analysis.
 
 We are actively developing new Hybrid Cloud Patterns. Watch this space for updates!
 
-## How are they different from XYZ?
+[id="how-are-they-different-from-xyz"]
+== How are they different from XYZ?
 
 Many technology demos can be very minimal - such demos have an important place in the ecosystem to demonstrate the intent of an individual technology. Hybrid Cloud Patterns are meant to demonstrate groups of technologies working together in a cloud native way. And yet, we hope to make these patterns general enough to allow for swapping application components out -- for example, if you want to swap out ActiveMQ for RabbitMQ to support MQTT - or use a different messaging technology altogether, that should be possible. The other components will require reconfiguration.
 
-## What technologies are used?
+[id="what-technologies-are-used"]
+== What technologies are used?
 
 Key technologies in the stack for Industrial Edge include:
 
-- Red Hat OpenShift Container Platform
-- Red Hat Advanced Cluster Management
-- Red Hat OpenShift GitOps (based on ArgoCD)
-- Red Hat OpenShift Pipelines (based on Tekton)
-- Red Hat Integration - AMQ Broker (ActiveMQ Artemis MQTT)
-- Red Hat Integration - AMQ Streams (Kafka)
-- Red Hat Integration - Camel K
-- Seldon Operator
+* Red Hat OpenShift Container Platform
+* Red Hat Advanced Cluster Management
+* Red Hat OpenShift GitOps (based on ArgoCD)
+* Red Hat OpenShift Pipelines (based on Tekton)
+* Red Hat Integration - AMQ Broker (ActiveMQ Artemis MQTT)
+* Red Hat Integration - AMQ Streams (Kafka)
+* Red Hat Integration - Camel K
+* Seldon Operator
 
 In the future, we expect to further use Red Hat OpenShift, and expand the integrations with other elements of the ecosystem. How can the concept of GitOps integrate with a fleet of devices that are not running Kubernetes? What about integrations with baremetal or VM servers? Sounds like a job for Ansible! We expect to tackle some of these problems in future patterns.
 
-## How are they structured?
+[id="how-are-they-structured"]
+== How are they structured?
 
-Hybrid Cloud Patterns come in parts - we have a [common](https://github.com/hybrid-cloud-patterns/common) repository with logic that will apply to multiple patterns. Layered on top of that is our first pattern - [industrial edge](https://github.com/hybrid-cloud-patterns/industrial-edge). This layout allows individual applications within a pattern to be swapped out by customizing the values files in the root of the repository to point to different branches, forks, or even different repositories entirely. (At present, the repositories all have to be on github.com and accessible with the same token.)
+Hybrid Cloud Patterns come in parts - we have a https://github.com/hybrid-cloud-patterns/common[common] repository with logic that will apply to multiple patterns. Layered on top of that is our first pattern - https://github.com/hybrid-cloud-patterns/industrial-edge[industrial edge]. This layout allows individual applications within a pattern to be swapped out by customizing the values files in the root of the repository to point to different branches, forks, or even different repositories entirely. (At present, the repositories all have to be on github.com and accessible with the same token.)
 
 The common repository is primarily concerned with how to deploy the GitOps operator, and to create the namespaces that will be necessary to manage the pattern applications.
 
 The pattern repository has the application-specific layout, and determines which components are installed in which places - hub or edge. The pattern repository also defines the hub and edge locations. Both the hub and edge are expected to have multiple components each - the hub will have pipelines and the CI/CD framework, as well as any centralization components or data analysis components. Edge components are designed to be smaller as we do not need to deploy Pipelines or the test and staging areas to the Edge.
 
 Each application is described as a series of resources that are rendered into GitOps (ArgoCD) via Helm and Kustomize. The values for these charts are set by values files that need to be "personalized" (with your local cluster values) as the first step of installation. Subsequent pushes to the gitops repository will be reflected in the clusters running the applications.
 
-## Who is behind this?
+[id="who-is-behind-this"]
+== Who is behind this?
 
 Today, a team of Red Hat engineers including Andrew Beekhof (@beekhof), Lester Claudio (@claudiol), Martin Jackson (@mhjacks), William Henry (@ipbabble), Michele Baldessari (@mbaldessari), Jonny Rickard (@day0hero) and others.
 
 Excited or intrigued by what you see here? We'd love to hear your thoughts and ideas! Try the patterns contained here and see below for links to our repositories and issue trackers.
 
-## How can I get involved?
+[id="how-can-i-get-involved"]
+== How can I get involved?
 
-Try out what we've done and submit issues to our [issue trackers](https://github.com/hybrid-cloud-patterns/industrial-edge/issues).
+Try out what we've done and submit issues to our https://github.com/validatedpatterns/industrial-edge/issues[issue trackers].
 
-We will review pull requests to our [pattern](https://github.com/hybrid-cloud-patterns/common) [repositories](https://validatedpatterns.io/industrial-edge).
+We will review pull requests to our https://github.com/validatedpatterns/common[pattern] https://github.com/validatedpatterns/industrial-edge[repositories].
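The structure described in the FAQ above - values files at the root of the pattern repository selecting the repository, branch, and chart used for each application - can be sketched roughly as follows. The YAML keys shown are hypothetical, not a documented schema; each pattern's Helm charts define the actual values layout.

```bash
# Hypothetical illustration only: these YAML keys are not a documented schema;
# they only sketch the kind of per-application overrides the FAQ describes.
cat > values-global.yaml <<'EOF'
global:
  git:
    provider: github.com            # repositories currently must share one token
    account: my-github-user         # owner of your fork
applications:
  messaging:
    repoURL: https://github.com/my-github-user/industrial-edge.git
    targetRevision: my-feature-branch
    path: charts/messaging          # e.g. a RabbitMQ-based chart instead of ActiveMQ
EOF

# GitOps watches the repository, so pushing the personalized values updates the clusters.
git add values-global.yaml
git commit -m "Personalize values for my clusters"
git push origin my-feature-branch
```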

0 commit comments
