Dropwizard with Fabric8 and Kubernetes
Here are some of my notes from the development of a REST service prototype with Dropwizard, Fabric8 and Kubernetes.
Examples
You can check out the following example projects:
Setup
Minikube
I use minikube for my development. On the first start of minikube, the machine parameters (e.g. CPUs, memory, disk size) must be passed on the command line. Any future call of minikube will reuse the machine definition from the first call, no matter what parameters you pass later. You find the machine created on the first call at $HOME/.minikube/machines.
Docker
Your application will be packaged as a Docker image. In order for minikube to find an image you built locally on your machine, you need to use the Docker daemon of your minikube installation. So first start minikube and then use minikube docker-env to get access to minikube's Docker daemon:
```shell
eval $(minikube docker-env)
```
Development
Fabric8 YAML files
All fabric8 YAML files go into src/main/fabric8 (if you have a Maven project). These files are resource fragments: partial resource definitions which the fabric8-maven-plugin merges and enriches into complete Kubernetes manifests.
External Resources
From time to time you want to access resources which are not deployed on your Kubernetes nodes, e.g. a database during development or an LDAP server. To get this in a dynamic fashion you can use a Service definition with the type ExternalName. This registers a DNS alias in the cluster which your pods can resolve like any other service.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: testdatabase
spec:
  type: ExternalName
  externalName: testdb.domain
```
You can check the service registration with
```shell
kubectl get service testdatabase -o yaml
```
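Inside the cluster the ExternalName service behaves like an ordinary host name, so application code can simply use the service name. A minimal sketch of building a JDBC URL against it (port 5432 and the database name test are assumptions for illustration):

```java
public class ExternalDb {

    // Compose a JDBC URL from a Kubernetes service name; inside the cluster
    // "testdatabase" resolves via DNS to the external host testdb.domain.
    static String jdbcUrl(String serviceName, int port, String database) {
        return "jdbc:postgresql://" + serviceName + ":" + port + "/" + database;
    }

    public static void main(String[] args) {
        System.out.println(jdbcUrl("testdatabase", 5432, "test"));
        // → jdbc:postgresql://testdatabase:5432/test
    }
}
```

Swapping the external host later only requires changing the Service definition, not the application configuration.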
Fabric8 Merged Files
The merged kubernetes files which the fabric8-maven-plugin generates can be found at target/fabric8/kubernetes.
Services
Service Names
If you are using Maven then the service name will be composed from the artifact id.
```xml
<groupId>greenfield</groupId>
<artifactId>customer</artifactId>
```
will result in the service name customer. This can be changed by using a resource fragment like this:
```yaml
metadata:
  name: ${project.groupId}-${project.artifactId}
  namespace: default
```
Here the group id is prepended to the artifact id, resulting in greenfield-customer.
Service Discovery
Service Discovery by DNS
Every service is available by its DNS name, so you can use that to reach it.
Service Discovery by Environment Variables
In addition, environment variables are created for every service, like:
```shell
GREENFIELD_CUSTOMER_SERVICE_HOST=10.0.0.2
GREENFIELD_CUSTOMER_SERVICE_PORT=8181
GREENFIELD_CUSTOMER_SERVICE_PORT_DROPWIZARD=8181
```
These can be used to configure things like REST clients:
```java
String baseUri = "http://" + System.getenv("GREENFIELD_CUSTOMER_SERVICE_HOST")
        + ":" + System.getenv("GREENFIELD_CUSTOMER_SERVICE_PORT");
List<Customer> services = client
        .target(baseUri)
        .path("customer")
        .request("application/vnd.sgbs.customer.v1+json")
        .get(new GenericType<List<Customer>>() {});
```
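A defensive variant: prefer the injected environment variables, but fall back to the DNS service name when they are absent (e.g. when running outside the cluster during local tests). The helper below is hypothetical, not part of any library:

```java
public class ServiceLookup {

    // Resolve a service base URI: use the Kubernetes-injected env vars
    // PREFIX_SERVICE_HOST / PREFIX_SERVICE_PORT when present, otherwise
    // fall back to the DNS service name and a default port.
    static String baseUri(String envPrefix, String dnsName, int defaultPort) {
        String host = System.getenv(envPrefix + "_SERVICE_HOST");
        String port = System.getenv(envPrefix + "_SERVICE_PORT");
        if (host == null || host.isEmpty()) host = dnsName;
        if (port == null || port.isEmpty()) port = Integer.toString(defaultPort);
        return "http://" + host + ":" + port;
    }

    public static void main(String[] args) {
        // Outside the cluster, no env vars exist, so this yields the DNS form.
        System.out.println(baseUri("GREENFIELD_CUSTOMER", "greenfield-customer", 8181));
    }
}
```

One caveat of the env-var mechanism: the variables only exist for services created before the pod started, whereas DNS resolution always reflects the current state.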
Service Namespaces
You can configure a service to be in a specific namespace. The namespace determines the subdomain under which the service is reachable via DNS.
```yaml
metadata:
  name: greenfield-service
  namespace: default
```
This service is reachable by the domain name greenfield-service.default.svc.cluster.local or simply by greenfield-service (depending on where you make the request from).
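The fully qualified name above is composed from the service name, the namespace, and the cluster domain. A small sketch (the cluster domain cluster.local is the common default and an assumption here):

```java
public class ServiceDns {

    // Compose the cluster-internal FQDN of a service:
    // <service>.<namespace>.svc.<cluster-domain>
    static String fqdn(String service, String namespace) {
        return service + "." + namespace + ".svc.cluster.local";
    }

    public static void main(String[] args) {
        System.out.println(fqdn("greenfield-service", "default"));
        // → greenfield-service.default.svc.cluster.local
    }
}
```

From a pod in the same namespace the short name greenfield-service suffices, because the DNS search path covers the namespace suffix.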
Database
Secrets
You could put the secrets for accessing the database into the Dropwizard configuration. But Kubernetes offers you a way to remove those credentials from the REST service's configuration: Kubernetes Secrets.
By using Kubernetes Secrets you can mount the credentials as a volume in your filesystem or have the values as environment variables.
Storing the credentials in Kubernetes can be done manually via the kubectl command or as part of the build and deployment process.
Deploy Secrets with Fabric8
By using a resource fragment you can let the fabric8-maven-plugin install the credentials in Kubernetes:
```yaml
kind: Secret
apiVersion: v1
metadata:
  name: ${project.groupId}-${project.artifactId}-db-secret
data:
  username: bXktYXBw
  password: Mzk1MjgkdmRnN0piCg==
```
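The data values in a Secret are base64-encoded, not encrypted. The username above decodes to my-app; note that the password value ends in Cg==, an encoded trailing newline, which echo without -n produces. The values can be generated like this:

```shell
# Base64-encode a secret value; -n prevents an encoded trailing newline
echo -n "my-app" | base64
# → bXktYXBw
```

If the application later rejects the password, a stray encoded newline is a classic culprit.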
In the deployment resource fragment you can now specify how to provide the credentials to your application. I chose environment variables:
```yaml
spec:
  template:
    spec:
      containers:
        - env:
            - name: GREENFIELD_DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: ${project.groupId}-${project.artifactId}-db-secret
                  key: username
            - name: GREENFIELD_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: ${project.groupId}-${project.artifactId}-db-secret
                  key: password
```
In the application I use the env vars GREENFIELD_DB_USERNAME and GREENFIELD_DB_PASSWORD to access the credential values.
```java
DBIFactory factory = new DBIFactory();
DataSourceFactory dataSourceFactory = configuration.getDataSourceFactory();
dataSourceFactory.setUser(System.getenv("GREENFIELD_DB_USERNAME"));
dataSourceFactory.setPassword(System.getenv("GREENFIELD_DB_PASSWORD"));
DBI jdbi = factory.build(env, dataSourceFactory, "postgresql");
```
Deployment
During the development of the prototype a simple deployment from the local codebase was enough for me:
```shell
mvn fabric8:run
```
This compiles, packages and deploys the application in the minikube instance.
Dropwizard CLI Arguments
Dropwizard normally gets started with some arguments on the command line. Fabric8 doesn't do this out of the box; you need to tell it explicitly. I used a resource fragment for this:
```yaml
metadata:
  annotations:
    configmap.fabric8.io/update-on-change: ${project.artifactId}
spec:
  template:
    spec:
      volumes:
        - name: config
          configMap:
            name: ${project.artifactId}
      containers:
        - command: [ "/deployments/run-java.sh" ]
          args: [ "server", "/etc/greenfield-customer/config.json" ]
          ports:
            - containerPort: 8181
              name: dropwizard
              protocol: TCP
          volumeMounts:
            - name: config
              mountPath: /etc/greenfield-customer
```
This does multiple things:
- sets the command to be executed on container startup
- sets the arguments to be passed to the command (for Dropwizard: server followed by the config file)
- defines which ports should be opened on the container (the HTTP port of the Dropwizard application)
- defines where to find the configuration and mounts a ConfigMap as the configuration
The definition of the deployment syntax can be found in the Kubernetes API reference for Deployment.
Dropwizard Configuration
The Dropwizard configuration is normally in a YAML file. Though lately I have found out that you can also use JSON syntax. This comes in quite handy, because it makes it easy to place the Dropwizard configuration, in JSON format, inside a fabric8 YAML file.
```yaml
metadata:
  name: ${project.groupId}-${project.artifactId}-rs
data:
  config.json: |
    {
      "server": {
        "applicationConnectors": [{ "type": "http", "port": 8181 }],
        "adminConnectors": [{ "type": "http", "port": 8081 }],
        "requestLog": {
          "appenders": [
            {
              "type": "file",
              "currentLogFilename": "/var/log/application/access.log",
              "archivedLogFilenamePattern": "/var/log/application/access.log.%d.gz",
              "archivedFileCount": 10
            },
            { "type": "console", "target": "stdout" }
          ]
        }
      },
      "logging": {
        "level": "INFO",
        "appenders": [
          { "type": "console", "threshold": "INFO", "target": "stdout" },
          {
            "type": "file",
            "currentLogFilename": "/var/log/application/log",
            "archivedLogFilenamePattern": "/var/log/application/log.%d.gz",
            "archivedFileCount": 10,
            "timeZone": "UTC"
          }
        ]
      },
      "database": {
        "driverClass": "org.postgresql.Driver",
        "url": "jdbc:postgresql://greenfield-service-db:5432/greenfield",
        "properties": { "charSet": "UTF-8" },
        "maxWaitForConnection": "1s",
        "validationQuery": "/* MyService Health Check */ SELECT 1",
        "initialSize": 2,
        "minSize": 2,
        "maxSize": 8,
        "checkConnectionWhileIdle": false,
        "evictionInterval": "10s",
        "minIdleTime": "1 minute"
      }
    }
```
This registers the Dropwizard configuration as a Kubernetes ConfigMap.
To access this configuration as a file, the ConfigMap must be mounted as a volume into the filesystem of the Docker image, see deployment.yml.
Administration
Log into Docker instance
```shell
kubectl exec -ti $(kubectl get pods | cut -d " " -f 1 | grep maven.artifact.id) bash
```
Display Pod Log
```shell
kubectl logs -f $(kubectl get pods | cut -d " " -f 1 | grep maven.artifact.id)
```
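The cut/grep pipeline in both commands simply extracts the first column (the pod name) from the kubectl get pods output and filters it by the artifact id. A sketch against simulated output (the pod name is made up):

```shell
# Simulated `kubectl get pods` output: take the first space-separated
# column and keep only lines matching the application's name
printf 'NAME READY STATUS\ncustomer-1234-abcd 1/1 Running\n' \
  | cut -d " " -f 1 | grep customer
# → customer-1234-abcd
```

Replace maven.artifact.id in the commands above with your actual artifact id.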
Application Logging
By default, the stack of Fluentd, Elasticsearch and Kibana is used for logging. In the fabric8 runtime this can be installed via the logging template, which installs the necessary pods.
Everything your application outputs to STDOUT and STDERR will automatically be logged. So the easiest way to get the application logs into the logging stack is to output the log data to STDOUT and STDERR. The same goes for the access log.
You can check which indices Elasticsearch has created (the IP and node port here are those of the Elasticsearch service in my minikube instance):

```shell
curl -v http://192.168.99.100:32369/_aliases?pretty
```