### Configuration

The EESSI-specific settings can be found in `group_vars/all.yml`, and in `templates` we added our own templates
for the Squid configurations of the Stratum 1 and local proxy servers.
For all playbooks you will also need an appropriate Ansible `hosts` file;
see the supplied `hosts.example` for the structure and host groups that you need for these playbooks.
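As an illustration, a minimal sketch of what such a hosts file could look like; the `cvmfsclients` group name is taken from the client example later in this README, while the other group names and all hostnames are placeholders:
```ini
[cvmfsstratum0servers]
stratum0.example.org

[cvmfslocalproxies]
proxy.example.org

[cvmfsclients]
client01.example.org
client02.example.org
```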
## Running the playbooks

In general, all the playbooks can be run like this:
```
ansible-playbook -i hosts -b <name of playbook>.yml
```
where `-i` allows you to specify the path to your hosts file, and `-b` means "become", i.e. run with `sudo`.
If this requires a password, include `-K`, which will ask for the `sudo` password when running the playbook:
```
ansible-playbook -i hosts -b -K <name of playbook>.yml
```
Before you run any of the commands below, make sure that you have updated the file `group_vars/all.yml`
and included the new/extra URLs of any server you want to change or add (e.g. your new Stratum 1).
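As a hypothetical sketch of such a change (the variable name below is a placeholder; check `group_vars/all.yml` itself for the actual names and structure):
```yaml
# Hypothetical variable; use the actual name from group_vars/all.yml.
cvmfs_stratum1_urls:
  - "http://existing-stratum1.example.org/cvmfs"
  - "http://your-new-stratum1.example.org/cvmfs"   # the server you are adding
```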
### Stratum 0

First install the Stratum 0 server:
```
ansible-playbook -i hosts -b -K stratum0.yml
```
Then install the files for the configuration repository:
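A sketch, assuming a playbook name that follows the conventions of this repository (hypothetical file name):
```
ansible-playbook -i hosts -b -K stratum0-deploy-cvmfs-config.yml
```

### Stratum 1

Next, set up one or more Stratum 1 servers, which hold replicas of the Stratum 0 repositories; again, the playbook name below assumes the naming scheme used above (hypothetical file name):
```
ansible-playbook -i hosts -b -K stratum1.yml
```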
This will automatically make replicas of all the repositories defined in `group_vars/all.yml`.
### Local proxies

The local proxies also need a Squid configuration file; the default can be found in
`templates/localproxy_squid.conf.j2`.
You have to define the list of IP addresses/ranges (in CIDR notation) that are allowed to use the proxy
via the variable `cvmfs_localproxy_allowed_clients`.
You can put this, for instance, in your hosts file; see `hosts.example` for more details.
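For instance, in your hosts file (the variable name comes from these playbooks, but the group name and CIDR ranges below are placeholders):
```ini
[cvmfslocalproxies:vars]
cvmfs_localproxy_allowed_clients=["10.0.0.0/16", "192.168.1.0/24"]
```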
If you want to customize the Squid configuration further, you can also supply your own file
and point to it using `cvmfs_squid_conf_src` (see the Stratum 1 section).

Do keep in mind that you should never accept proxy requests from everywhere to everywhere!
Besides having a Squid configuration with the right ACLs, it is recommended to also have a firewall that limits access to your proxy.
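As an example of such a firewall rule, a minimal sketch using `firewalld`, assuming Squid listens on its default port 3128 and your clients live in 10.0.0.0/16 (both placeholders):
```
# Permit only the client range to reach the proxy port, then reload the rules.
sudo firewall-cmd --permanent --zone=public \
  --add-rich-rule='rule family="ipv4" source address="10.0.0.0/16" port port="3128" protocol="tcp" accept'
sudo firewall-cmd --reload
```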
Deploy your proxies using:
```
ansible-playbook -i hosts -b -K localproxy.yml
```
### Clients

Make sure that your hosts file contains the list of hosts where the CVMFS client should be installed.
Furthermore, you can add a vars section for the clients that contains the list of (local) proxy servers
that your clients should use:
```ini
[cvmfsclients:vars]
cvmfs_http_proxies=["your-local.proxy:3128"]
```
If you just want to roll out one client without a proxy, you can leave this out.
Finally, run the playbook:
```
ansible-playbook -i hosts -b -K client.yml
```
## Verification and usage

### Client

Once the client has been installed, you should be able to access all repositories under `/cvmfs`.
They might not show up in that directory until you have actually accessed them, so you may first have to run `ls`, e.g.:
```
ls /cvmfs/cvmfs-config.eessi-hpc.org
```
On the client machines you can use the `cvmfs_config` tool for different operations. For instance, you can verify the file system by running:
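For example, the standard `probe` subcommand checks that each configured repository can be mounted and accessed:
```
sudo cvmfs_config probe
```

To check that your proxy actually caches requests, you can, for instance, fetch a file through it twice and inspect the `X-Cache` header that Squid adds to its responses; the proxy URL, port, and Stratum 1 hostname below are placeholders:
```
curl --proxy http://url-to-your-proxy:3128 --head http://your-stratum1/cvmfs/pilot.eessi-hpc.org/.cvmfspublished
```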
The second time you run it, you should get a cache hit:
```
X-Cache: HIT from url-to-your-proxy
```
### Using the CVMFS infrastructure

When the infrastructure seems to work, you can try publishing some new files.
This can be done by starting a transaction on the Stratum 0, adding some files, and publishing the transaction:
```
sudo cvmfs_server transaction pilot.eessi-hpc.org
mkdir /cvmfs/pilot.eessi-hpc.org/testdir
touch /cvmfs/pilot.eessi-hpc.org/testdir/testfile
sudo cvmfs_server publish pilot.eessi-hpc.org
```
It might take a few minutes, but then the new file should show up at the clients.
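For instance, on one of the clients (using the path from the example above):
```
ls /cvmfs/pilot.eessi-hpc.org/testdir
```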
This roles directory contains submodules that point to the repositories of Ansible roles on which the EESSI playbooks depend.
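Note that submodules are not fetched by a plain `git clone`; the standard Git commands for getting them (not specific to this repository) are:
```
git clone --recursive <repository-url>
# or, in an existing clone:
git submodule update --init --recursive
```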
## cvmfs

The Galaxy Ansible role for installing/configuring CVMFS, which can be found at:
https://github.com/galaxyproject/ansible-cvmfs

We renamed the directory to "cvmfs", so we can use "cvmfs" as the name of this role in our Ansible playbooks.
## geerlingguy.repo-epel

Ansible role for adding the EPEL repository to RHEL/CentOS systems. The source can be found at: