...
...
...
...
This page describes the rules that all containers for telemedicine solutions deployed on, or making use of, the infrastructure must comply with.
Note
If the application violates any of the demands below, the application may be rejected.
Table of Contents
1. Docker Image
...
Design and Development
Design and development must follow design guides and best practices
See Best Practices and Design Guidelines
Persistence through eHealth Infrastructure Services
Telemedicine solutions do not get their own persistence; persistence must be handled through the eHealth Infrastructure services (including the key-value store).
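Illustrative only: the sketch below keeps application state behind a key-value abstraction instead of on the container filesystem, so the container stays stateless. The `KeyValueStore` interface and the in-memory stub are hypothetical; a real implementation would call the eHealth key-value service.

```python
from abc import ABC, abstractmethod
from typing import Optional

class KeyValueStore(ABC):
    """Hypothetical abstraction over the infrastructure key-value service."""
    @abstractmethod
    def get(self, key: str) -> Optional[str]: ...
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

class InMemoryStore(KeyValueStore):
    """Test stub; the production client would talk to the eHealth service."""
    def __init__(self) -> None:
        self._data: dict = {}
    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)
    def put(self, key: str, value: str) -> None:
        self._data[key] = value

def save_draft(store: KeyValueStore, patient_id: str, draft: str) -> None:
    # State goes to the shared store, never to the local filesystem, so any
    # instance (including a freshly restarted one) can pick it up.
    store.put("draft/" + patient_id, draft)
```

Usage would look like `save_draft(store, "1234", "...")` followed by `store.get("draft/1234")` from any other instance.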
Deployment and application definition
Applications must be based on predefined Docker Images
Telemedicine solutions must be built using the predefined eHealth docker base images (Docker Base Images)
The applications in the container must run as non-root (Docker Base Images Security)
The docker image that is pushed to the central docker image repository must be signed with an approved private key. (Image Signing)
...
Applications must be based on predefined Helm charts
The application must be deployed using one of the official eHealth helm charts available here: https://registry.admin.ehealth.sundhed.dk/harbor/projects/5/helm-charts. See Helm Charts
The application must be deployed using the most recent version of the helm chart.
Application Requirements
The application containers must be stateless. (see also 2.3. Handle being scaled)
At restart the application filesystem is reset to the one from the docker image.
Persistence is handled by the infrastructure applications.
Server restart and reset must be acceptable; the service must not have any state that cannot be recreated.
A service may keep an in-memory cache, as long as all instances of the same service can also handle requests for objects in the cache, and the service has a sane strategy to refresh/clean up the cache.
A service cannot have any state that cannot be recreated if the service is moved to another worker node or if a request is handled by a different instance.
A service cannot hold sessions locally.
A service cannot run its own database.
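A cache that satisfies these constraints must be safe to lose at any time and must expire stale entries. A minimal TTL-cache sketch (class and method names are illustrative, not part of the infrastructure):

```python
import time

class TTLCache:
    """In-memory cache safe for stateless services: every entry can be
    recreated from the backing source, and entries expire after ttl seconds."""
    def __init__(self, ttl_seconds: float) -> None:
        self.ttl = ttl_seconds
        self._entries: dict = {}

    def get_or_load(self, key, loader):
        """Return the cached value, or recreate it via loader() if it is
        missing or expired. Because every instance can load from the source,
        any replica can serve requests for objects another replica cached."""
        now = time.monotonic()
        hit = self._entries.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]
        value = loader()  # recreate: the cache is never the only copy
        self._entries[key] = (now, value)
        return value

    def clear(self) -> None:
        """Clean-up strategy: a restart or explicit clear loses nothing durable."""
        self._entries.clear()
```

A service might use it as `cache.get_or_load("patient/42", lambda: fetch_from_backend("42"))`, where the loader hits the authoritative infrastructure service.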
The application containers must accept restart and reset at any time.
The container may at any time receive a SIGTERM signal to the main process (PID 1).
At restart the application filesystem is reset to the one from the docker image.
If the application fails to exit gracefully before a timeout of 10 seconds, the application will be killed by the platform.
Handle being scaled
The application containers should be able to scale to multiple instances.
It is recommended that all services run in at least two instances in different data centres to have high availability.
If a service is busy it may be scaled to more than two instances to handle the extra requests.
For this reason, a service cannot have any state that cannot be recreated if the service is moved to another worker node or if a request is handled by a different instance.
A service cannot hold sessions locally.
A service cannot run its own database.
A service may keep an in-memory cache, as long as all instances of the same service can also handle requests for objects in the cache, and the service has a sane strategy to refresh/clean up the cache.
...
Health Checks
Each application container must expose one or two endpoints for readiness and liveness status, such as a /healthz endpoint that responds with status code 200 when the application is ready to receive requests.
See also this article: Kubernetes liveness and readiness probes difference
Liveness Probe
The liveness probe checks the container's health. Suppose a Pod is running the application inside a container but, due to a memory leak, high CPU usage, an application deadlock or similar, the application stops responding to requests and is stuck in an error state. If the liveness probe fails (x times), the eHealth Infrastructure restarts the container.
Readiness Probe
Such as a /healthz endpoint. The endpoint must respond with status code 200 when the application is ready to receive requests.
In some cases, we would like the application to be alive but not serve traffic unless some conditions are met, e.g. populating a dataset or waiting for some other service to be alive. In such cases, we use a readiness probe. Only if the condition inside the readiness probe passes can the application serve traffic.
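A minimal sketch of the two probe endpoints using only the Python standard library. The paths `/healthz` and `/ready` and the port are examples; use whatever the chosen Helm chart's probe configuration expects.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

# Set once startup work (dataset population, upstream checks) is done.
ready = threading.Event()

class ProbeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # Liveness: the process is up and able to answer.
            self._respond(200)
        elif self.path == "/ready":
            # Readiness: 200 only once the service may receive traffic.
            self._respond(200 if ready.is_set() else 503)
        else:
            self._respond(404)

    def _respond(self, code: int) -> None:
        self.send_response(code)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep probe noise out of the application log

def serve(port: int = 8080) -> HTTPServer:
    """Start the probe server on a background thread and return it."""
    server = HTTPServer(("127.0.0.1", port), ProbeHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In a real service the probe endpoints would typically live in the same HTTP server as the application itself rather than a separate one.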
Deployment
Every release must follow a strict release plan where all eHealth Environments (see Environments) are visited in the shown order:
external test → pre-production → production
Rollback must always be possible. A new release must never introduce demands that conflict with the previous version.
...
Monitoring and Operations
Logging requirements
Follow the specification for the application log (Logging model)
Errors and essential incidents must be found in the application log
Request Headers
Headers used for authentication and authorization must be set
B3 header propagation
Tracing headers must be propagated as described in https://istio.io/docs/tasks/telemetry/distributed-tracing/.
This can be handled by libraries like https://github.com/jaegertracing/jaeger-client-java
Applications accessing the infrastructure are encouraged to expose their application identity by using the HTTP User-Agent header, e.g. User-Agent: HAPI-FHIR/4.1.0 (FHIR Client; FHIR 3.0.2/DSTU3; apache) or User-Agent: CGI-CC360-COPD/1.0.3. This information can then, in the future, be used to provide proper redirects.
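If no tracing library is used, the B3 headers can be copied from the incoming request to every outgoing call by hand. A sketch, with the header set taken from the Istio distributed-tracing documentation:

```python
# Headers Istio expects services to forward so trace spans can be joined.
B3_HEADERS = (
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
    "x-ot-span-context",
)

def propagated_headers(incoming: dict) -> dict:
    """Extract the tracing headers from an incoming request (case-insensitive)
    so they can be attached to outgoing requests."""
    lowered = {k.lower(): v for k, v in incoming.items()}
    return {h: lowered[h] for h in B3_HEADERS if h in lowered}
```

An outgoing call would then pass these along, e.g. `requests.get(url, headers=propagated_headers(request.headers))` in a typical web framework.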
...
Documentation
Documentation of the component's purpose, service requirements and resource usage
9. Rollout plan
Every release must follow a strict release plan where all four eHealth Environments (see Environments) are visited in the shown order:
Internal test → external test → pre-production → production
New releases must be able to coexist with the previous version.
Rollback must always be possible. A new release must never introduce demands that conflict with the previous version.
The system runs 24/7, meaning that service windows with downtime are not an option.