This repository presents the reference implementation of a product data server within the product data network buildingenvelopedata.org. Before deploying this repository, the sibling project machine can be used to set up the machine.
The API specification of product data servers is available in the repository api. There is also a visualization of the API of a product data server.
This repository is deployed as the product data server of the TestLab Solar Facades of Fraunhofer ISE.
If you have a question for which you don't find the answer in this repository, please raise a new issue and add the tag question! All ways to contribute are presented in CONTRIBUTING.md. The basis for our collaboration is described in our Code of Conduct.
- Open your favorite shell, for example, good old Bourne Again SHell, aka `bash`, the somewhat newer Z shell, aka `zsh`, or shiny new `fish`.
- Install Git by running `sudo apt install git-all` on Debian-based distributions like Ubuntu, or `sudo dnf install git` on Fedora and closely-related RPM-Package-Manager-based distributions like CentOS. For further information see Installing Git.
- Clone the source code by running `git clone git@github.com:building-envelope-data/database.git` and navigate into the new directory `database` by running `cd ./database`.
- Prepare your environment by running `cp ./.env.development.sample ./.env && chmod 600 ./.env` and by adding the line `127.0.0.1 local.solarbuildingenvelopes.com` to your `/etc/hosts` file. The value of `GNUPG_SECRET_SIGNING_KEY_FINGERPRINT` will be set later.
- Prepare your remote controls GNU Make and Docker Compose by running `ln --symbolic ./docker.mk ./Makefile` and `ln --symbolic ./docker-compose.development.yaml ./docker-compose.yaml`.
- Install Docker Desktop and GNU Make.
- List all GNU Make targets by running `make help`.
- Generate a GnuPG key with the passphrase `${GNUPG_SECRET_SIGNING_KEY_PASSPHRASE}` set in the `./.env` file by running `make bootstrap COMMAND='./gpg.mk key PERSON="${name}" COMMENT="${comment}" EMAIL="${email}"'` with your information filled in, for example, `make bootstrap COMMAND='./gpg.mk key PERSON="Anna Smith" COMMENT="first" EMAIL="anna.smith@fraunhofer.de"'`. Then copy the key's fingerprint, which is output by the command, and set it as the value of the `GNUPG_SECRET_SIGNING_KEY_FINGERPRINT` variable in the `./.env` file.
- Create the PostgreSQL database and schema by running `./database.mk create migrate`.
- Build and start all services and follow their logs by running `make build up logs`.
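The `/etc/hosts` step above can be scripted idempotently so that re-running the setup does not add duplicate entries. This is a minimal sketch; it defaults to a scratch file `./hosts.demo` so you can try it safely, and you would point `HOSTS_FILE` at `/etc/hosts` (with root privileges) for real use.

```shell
# Append the development host entry only if it is not already present.
# HOSTS_FILE defaults to a scratch file for safe experimentation; set
# HOSTS_FILE=/etc/hosts (as root) to modify the real hosts file.
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"
touch "$HOSTS_FILE"
ENTRY="127.0.0.1 local.solarbuildingenvelopes.com"
grep --quiet --fixed-strings "$ENTRY" "$HOSTS_FILE" \
  || printf '%s\n' "$ENTRY" >> "$HOSTS_FILE"
grep --count --fixed-strings "$ENTRY" "$HOSTS_FILE" # prints 1
```

Because the append is guarded by the `grep` check, running the snippet repeatedly leaves exactly one entry in the file.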
- In your web browser, navigate to the
  - web frontend at `https://local.solarbuildingenvelopes.com:5051`,
  - GraphQL API at `https://local.solarbuildingenvelopes.com:5051/graphql/`,
  - REST API at `https://local.solarbuildingenvelopes.com:5051/openapi/docs/`,
  - dummy email server at `https://local.solarbuildingenvelopes.com:5051/email/` (to view, for example, the confirmation email sent during registration), and
  - OpenId Connect configuration at `https://local.solarbuildingenvelopes.com:5051/.well-known/openid-configuration`.

  Note that the port is `5051` by default. If you set the variable `HTTPS_PORT` within the `./.env` file to some other value, you need to use that value instead within the URLs.
In another shell:

- Drop into `bash` with the working directory `/app`, which is mounted to the host's `./backend` directory, inside a fresh Docker container based on `./backend/Dockerfile` by running `make shell SERVICE=backend`. If necessary, the Docker image is (re)built automatically, which takes a while the first time. Note that the Docker image and containers try to use the same user and group IDs as the ones on the host machine. This has the upside that files created within containers in mounted directories are owned by the host user. It has the downside that the Docker image may fail to build because the IDs may already be taken by other users and groups in the base image. This happens for example if you are `root` on the host machine with the user and group IDs 0. If there is an ID collision, then you can either change the user and group ID on the host machine (for example by logging in as another user) or you can replace all occurrences of `shell id --group` and `shell id --user` in `Makefile` by fixed non-colliding IDs like 1000. If you know a better way, please let us know on GitHub.
- List all backend GNU Make targets by running `make help`.
- For example, update packages and tools by running `make update`.
- Drop out of the container by running `exit` or pressing `Ctrl-D`.
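The user and group ID collision described above can be checked up front. This sketch just prints the IDs the Docker image would reuse; an ID of 0 is the typical collision case.

```shell
# Print the host's user and group IDs, which the Docker image tries to
# reuse; an ID of 0 (root) is likely to collide with users and groups
# that already exist in the base image.
USER_ID=$(id --user)
GROUP_ID=$(id --group)
echo "${USER_ID}:${GROUP_ID}"
if [ "$USER_ID" -eq 0 ]; then
  echo "Warning: running as root; consider building as a regular user." >&2
fi
```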
After changing the domain model in `./backend/src/data`, you need to migrate the database by dropping into `make shell SERVICE=backend`, adding a migration with `make migration PERSON=${MIGRATION_NAME}`, verifying and if necessary adapting the new migration C# code and SQL scripts, exiting the container with `exit`, and applying the new migration to the PostgreSQL database with `./database.mk migrate`. See Migrations Overview and the following pages for details.
On the very first usage:

- Install Visual Studio Code and open it. Navigate to the Extensions pane (`Ctrl+Shift+X`). Add the extension Remote Development.
- Navigate to the Remote Explorer pane. Hover over the running `database-backend-*` container (if it is not running, then run `make up` in a shell inside the project directory) and click on the "Attach in Current Window" icon. In the Explorer pane, open the directory `/app`, which is mounted to the host's `./backend` directory. Navigate to the Extensions pane. Add the extensions C# Dev Kit, IntelliCode for C# Dev Kit, GraphQL: Language Feature Support, and GitLens — Git supercharged.
- Navigate to the Remote Explorer pane. Hover over the running `database-frontend-*` container and click on the "Attach in New Window" icon. In the Explorer pane, open the directory `/app`, which is mounted to the host's `./frontend` directory. Navigate to the Extensions pane. Add the extensions GraphQL: Language Feature Support and GitLens — Git supercharged.
Note that the Docker containers are configured in `./docker-compose.development.yaml` in such a way that Visual Studio Code extensions installed within containers are retained in Docker volumes and thus remain installed across `make down` and `make up` cycles.
On subsequent usages: Open Visual Studio Code, navigate to the "Remote Explorer" pane, and attach to the container(s) you want to work in.
The following Visual Studio Code docs may be of interest for productivity and debugging.
To debug the ASP.NET Core web application, attach Visual Studio Code to the `database-backend-*` container, press `Ctrl+Shift+P`, select "Debug: Attach to a .NET 5+ or .NET Core process", and choose the process `/app/src/bin/Debug/net10.0/Database run` titled `Database`, or alternatively navigate to the "Run and Debug" pane (`Ctrl+Shift+D`), select the launch profile ".NET Core Attach", press the "Start Debugging" icon (`F5`), and select the same process as above. Then, for example, open some source files to set breakpoints, navigate through the website `https://local.solarbuildingenvelopes.com:5051`, which will stop at breakpoints, and inspect the information provided by the debugger at the breakpoints. For details on debugging C# in Visual Studio Code, see Debugging.
Note that the debugger detaches after the polling file watcher restarts the process, which happens for example after editing a source file, because `dotnet watch` is configured in `./docker-compose.development.yaml` with `DOTNET_USE_POLLING_FILE_WATCHER` set to `true`. As of this writing, there is an open feature request to reattach the debugger automatically. There also are multiple extensions like .NET Watch Attach and .NET Stalker Debugger that attempt to solve that. Those extensions don't work in our case though, as they try to restart `dotnet watch` themselves, instead of waiting for the polling file watcher of `dotnet watch` to restart `/app/src/bin/Debug/net10.0/Database run` and attaching to that process.
After migrating the PostgreSQL database, changing the database schema manually, or upgrading Npgsql, the service `backend` may throw exceptions regarding the object-relational mapping (Npgsql or EF Core). In that case it may be necessary to restart the service `backend`, for example by running `make down up`, and it may even be necessary to recreate the database from scratch by running `make down remove-data-volume up`. Note that the latter will remove all data from PostgreSQL, recreate the database and its schema, and seed it freshly.
When your hard disk starts to grow full, it may be the case that Docker does not clean up anonymous volumes properly. You can do so manually by running `docker system prune`, potentially with the arguments `--volumes` and/or `--all`. Note that this may result in loss of data. It may also be the case that the log files grew huge. You can delete them by running `rm ./backend/src/logs/*`.
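If you want to keep recent logs while reclaiming space, `find` can delete only old files instead of wiping everything. This is a sketch; `LOG_DIR` defaults to this repository's backend log directory, and the seven-day threshold is an arbitrary choice.

```shell
# Delete log files older than seven days instead of removing all logs;
# LOG_DIR defaults to the backend log directory of this repository.
LOG_DIR="${LOG_DIR:-./backend/src/logs}"
if [ -d "$LOG_DIR" ]; then
  find "$LOG_DIR" -type f -mtime +7 -delete
fi
```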
When the frontend Docker image does not build in production because of an unused import in an automatically generated file, for example one in the directory `./frontend/__generated__`, then temporarily ignore TypeScript build errors by adding the following lines to `./frontend/next.config.ts`, for example with `vi` or `nano` in a shell on the deployment machine:

```typescript
typescript: {
  ignoreBuildErrors: true,
},
```

The same can happen in development when running `make build` (or `yarn run build`) in the shell entered by `make shell SERVICE=frontend`. In that case, remove the offending import manually in the file and try again, for example using `tail` like so: `tail -n +5 ./__generated__/queries/... > x.tmp && mv x.tmp ...`. Do not disable build errors in development, because when you do so, build errors in non-generated files may leak into the code base.
For information on using Docker in production, see Configure and troubleshoot the Docker daemon and the pages following it.
- Use the sibling project machine and its instructions for the first stage of the set-up.
- Enter a shell on the production machine using `ssh`.
- Change into the directory `/app` by running `cd /app`.
- Clone the repository twice by running `for environment in staging production ; do git clone git@github.com:building-envelope-data/database.git ./${environment} ; done`.
- For each of the two environments staging and production, referred to by `${environment}` below:
  - Set the variable `environment` by running `environment=staging` or `environment=production`.
  - Change into the clone `${environment}` by running `cd /app/${environment}`.
  - Open `https://www.buildingenvelopedata.org` in your favorite web browser, log into your account, navigate to the institution operating this database (which you should be a representative of), and add an OpenId Connect Application with
    - client ID and display name of your choice;
    - consent type: explicit;
    - endpoints: authorization, pushed authorization, introspection, end session, revocation, token;
    - grant types: authorization code and refresh token;
    - response types: code;
    - scopes: profile, read:api, write:api, api:database:manage;
    - requirements: proof key for code exchange and pushed authorization requests;
    - post logout redirect URI: `https://${HOST}/connect/callback/logout/metabase`;
    - redirect URI: `https://${HOST}/connect/callback/login/metabase`;

    where `${HOST}` is the domain name with sub-domain of the deployment, for example, `staging.solarbuildingenvelopes.com` or `www.solarbuildingenvelopes.com` for the product-data database of the TestLab Solar Facades.
    Alternatively, after logging in, open `https://www.buildingenvelopedata.org/graphql/` and run the following mutation with your institution ID and host filled in:

    ```graphql
    mutation {
      createOpenIdConnectApplication(
        input: {
          institutionId: "00000000-0000-0000-0000-000000000000"
          clientId: "my-client"
          consentType: EXPLICIT
          displayName: "My Client"
          endpoints: [AUTHORIZATION, PUSHED_AUTHORIZATION, INTROSPECTION, END_SESSION, REVOCATION, TOKEN]
          grantTypes: [AUTHORIZATION_CODE, REFRESH_TOKEN]
          postLogoutRedirectUri: "https://${HOST}/connect/callback/logout/metabase"
          redirectUri: "https://${HOST}/connect/callback/login/metabase"
          responseTypes: [CODE]
          scopes: [PROFILE, READ_API]
        }
      ) {
        clientSecret
        errors {
          code
          message
          path
        }
      }
    }
    ```
  - Prepare the environment by running `cp ./.env.${environment}.sample ./.env && chmod 600 ./.env` and by adjusting variable values to your needs, in particular by setting passwords to newly generated ones, where random passwords may be generated by running `openssl rand -base64 32`. The value of `GNUPG_SECRET_SIGNING_KEY_FINGERPRINT` will be set later. Here is some information on what the variables mean:
    - `NAME` is the Docker project name, in particular it is the prefix of the Docker container names listed by `docker ps --all`;
    - `HOST` is the domain name with sub-domain of the deployment, in particular it is used to make resource locators absolute;
    - `HTTP_PORT` is the HTTP port to which the reverse proxy NGINX forwards HTTPS requests (see `PRODUCTION_HTTP_PORT` and `STAGING_HTTP_PORT` in the `.env` file of your clone of machine);
    - `METABASE_HOST` is the domain name with sub-domain of the metabase, in particular to use it as OpenId Connect provider and to ask it for information about logged-in users needed for authorization;
    - `DATABASE_ID` is the UUID that was assigned to this product-data database upon registering it at the metabase;
    - `OPERATOR_ID` is the UUID of the institution that operates this product-data database;
    - `VERIFICATION_CODE` is the verification code that was generated for this product-data database upon registering it at the metabase;
    - `OPEN_ID_CONNECT_CLIENT_ID` and `OPEN_ID_CONNECT_CLIENT_SECRET` are the OpenId Connect client identifier and secret of this product-data database as a client of the metabase acting as identity provider (the client secret is given when registering an OpenId Connect client at the metabase);
    - `GNUPG_SECRET_SIGNING_KEY_FINGERPRINT` and `GNUPG_SECRET_SIGNING_KEY_PASSPHRASE` are the fingerprint and passphrase of the GnuPG secret key for signing response and data approvals.
  - Prepare your remote controls GNU Make and Docker Compose by running `ln --symbolic ./docker.mk ./Makefile` and `ln --symbolic ./docker-compose.production.yaml ./docker-compose.yaml`.
  - Generate a GnuPG key with the passphrase `${GNUPG_SECRET_SIGNING_KEY_PASSPHRASE}` set in the `./.env` file by running `make bootstrap COMMAND='./gpg.mk key PERSON="${name}" COMMENT="${comment}" EMAIL="${email}"'` with your information filled in, for example, `make bootstrap COMMAND='./gpg.mk key PERSON="Anna Smith" COMMENT="first" EMAIL="anna.smith@fraunhofer.de"'`. Then copy the key's fingerprint, which is output by the command, and set it as the value of the `GNUPG_SECRET_SIGNING_KEY_FINGERPRINT` variable in the `./.env` file.
  - Create the PostgreSQL database by running `./database.mk create`.
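The random password generation mentioned in the environment preparation step can be sketched as follows; `SECRET` is just a placeholder variable name for whichever password variable you are filling in.

```shell
# Generate a random secret like those needed in ./.env: 32 bytes of
# entropy, base64-encoded, which always yields 44 characters.
SECRET=$(openssl rand -base64 32)
echo "${#SECRET}" # prints 44
```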
- Draft a new release with a new version according to Semantic Versioning by running the GitHub action Draft a new release, which creates a new branch named `release/v*.*.*`, creates a corresponding pull request, updates the Changelog, and bumps the version in `package.json`, where `*.*.*` is the version. Note that this is not the same as "Draft a new release" on Releases.
- Fetch the release branch by running `git fetch` and check it out by running `git checkout release/v*.*.*`, where `*.*.*` is the version.
- Apply pending migrations with `./database.mk migrate`.
- Make sure that all tests succeed and try out any new features manually.
- Publish the new release by merging the release branch into `main`, whereby a new pull request from `main` into `develop` is created that you need to merge to finish off.
- Enter a shell on the production machine using `ssh`.
- Navigate into `/app/production` by running `cd /app/production`.
- Back up the production database by running `./database.mk backup DIR=/app/production/backup`.
- Change to the staging environment by running `cd /app/staging`.
- Restore the staging database from the production backup by running `./database.mk restore DIR=/app/production/backup`.
- Adapt the environment file `./.env` if necessary by comparing it with the `./.env.staging.sample` file of the release to be deployed.
- Deploy the new release in the staging environment by running `./deploy.mk TARGET=${TAG} deploy`, where `${TAG}` is the release tag to be deployed, for example, `v1.0.0`.
- If it fails after the database backup was made, roll back to the previous state by running `./deploy.mk rollback`, figure out what went wrong, apply the necessary fixes to the code base, create a new release, and try to deploy that release instead.
- If it succeeds, deploy the new reverse proxy that handles sub-domains by running `cd /app/machine && make deploy` and test whether everything works as expected; if that is the case, continue. Note that in the staging environment sent emails can be viewed in the web browser under `https://staging.solarbuildingenvelopes.com/email/` and emails to addresses in the variable `RELAY_ALLOWED_EMAILS` in the `.env` file are delivered to the respective inboxes (the variable's value is a comma-separated list of email addresses).
- Change to the production environment by running `cd /app/production`.
- Adapt the environment file `./.env` if necessary by comparing it with the `./.env.production.sample` file of the release to be deployed.
- Deploy the new release in the production environment by running `./deploy.mk TARGET=${TAG} deploy`, where `${TAG}` is the release tag to be deployed, for example, `v1.0.0`.
- If it fails after the database backup was made, roll back to the previous state by running `./deploy.mk rollback`, figure out what went wrong, apply the necessary fixes to the code base, create a new release, and try to deploy that release instead.
The file `docker.mk` contains GNU Make targets to manage Docker containers like `up` and `down`, to follow Docker container logs with `logs`, to drop into shells inside running Docker containers like `shell SERVICE=backend` for the backend service and `shell SERVICE=frontend` for the frontend service, and to list information about Docker like `list` and `list-services`.
The Makefile `./deploy.mk` contains GNU Make targets to deploy a new release or roll it back as mentioned above. These targets depend on several smaller targets like `begin-maintenance` and `end-maintenance` to begin or end displaying maintenance information to end users that try to interact with the website, `backup` to back up all data before deploying a new version, `migrate` to migrate the database, and `run-tests` to run tests.
If for some reason the website displays the maintenance page without maintenance happening at the moment, then drop into a shell on the production machine, check all logs for information on what happened, fix issues if necessary, and end maintenance. It could for example happen that a cron job set up by machine begins maintenance, fails to do its actual job, and does not end maintenance afterwards. Whether failing to do its job is a problem for the inner workings of the website needs to be decided by some developer. If, for example, backing up the database fails because the machine is out of memory at the time of the backup, the website itself should still work.
If the database container restarts indefinitely and its logs say

```
PANIC: could not locate a valid checkpoint record
```

for example preceded by `LOG: invalid resource manager ID in primary checkpoint record` or `LOG: invalid primary checkpoint record`, then the database is corrupt. For example, the write-ahead log (WAL) may be corrupt because the database was not shut down cleanly. One solution is to restore the database from a backup by running

```shell
./database.mk restore DIR=/app/data/backups/20XX-XX-XX_XX_XX_XX/
```

where the Xs need to be replaced by proper values. Another solution is to reset the transaction log by entering the database container with

```shell
docker compose run database bash
```

and dry-running

```shell
gosu postgres pg_resetwal --dry-run /var/lib/postgresql/data
```

and, depending on the output, also running

```shell
gosu postgres pg_resetwal /var/lib/postgresql/data
```

Note that both solutions may cause data to be lost.
If one field in the SQL database needs to be updated and there is no GraphQL mutation available, then you may update it in PostgreSQL directly as illustrated in the following example. Test it in the staging environment under `/app/staging` before doing it in production under `/app/production`.

- Drop into a shell on the server as user `cloud` by running `ssh -CvX -A cloud@IpAdressOfCloudServer`.
- Navigate to the production environment by running `cd /app/production`.
- Make a database backup by running `DATE=$(date +"%Y-%m-%d_%H_%M_%S")` and `./database.mk backup DIR=/app/data/backups/${DATE}`.
- Navigate to the staging environment by running `cd /app/staging`.
- Load the backup into the staging database by running `./database.mk restore DIR=/app/data/backups/${DATE}`.
- Drop into `psql` by running `./database.mk psql`.
- List all tables in the schema `database` by running `\dt database.*`.
- List all optical data records by running `select * from database.optical_data;` and remember, for example, one identifier of a record that you want to update.
- Update a single field by running `update database.optical_data set "Description" = '...' where "Id" = 'f07499ab-f119-471f-8aad-d3c016676bce';`.
- Delete a faulty record by running `delete from database.optical_data where "Id" = 'f07499ab-f119-471f-8aad-d3c016676bce';`.
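Manual changes like the ones above are safer inside a transaction, so the effect can be inspected and undone before anything is persisted. A sketch of that pattern in `psql`, using the example identifier from above:

```sql
-- Preview a manual change inside a transaction; nothing is persisted
-- until COMMIT, and ROLLBACK undoes it entirely.
begin;
update database.optical_data
  set "Description" = '...'
  where "Id" = 'f07499ab-f119-471f-8aad-d3c016676bce';
-- Inspect the result before deciding:
select "Id", "Description"
  from database.optical_data
  where "Id" = 'f07499ab-f119-471f-8aad-d3c016676bce';
rollback; -- or: commit;
```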