Getting Started with OSGi Remote Services – enRoute Maven Archetype Edition

At EclipseCon Europe 2016 I held a tutorial together with Peter Kirschner named Building Nano Services with OSGi Declarative Services. The final exercise should have been a demonstration of OSGi Remote Services. It did not really happen because of a lack of time and networking issues. The next year at EclipseCon Europe 2017 we joined forces again and gave a talk named Microservices with OSGi. In that talk we focused on OSGi Remote Services, but we again failed with the demo at the end because of networking issues. At EclipseCon Europe 2018 I gave a talk on how to use different OSGi specifications for connecting services remotely, titled How to connect your OSGi application. Of course I mentioned OSGi Remote Services there, and of course the demonstration failed again because of networking issues.

Over the last years I have published several blog posts and given several talks related to OSGi, and the topic of OSGi Remote Services was raised often, but never really covered in detail. Scott Lewis, the project lead of the Eclipse Communication Framework, was really helpful whenever I encountered issues with Remote Services. I promised to write a blog post about that topic as a favour for all the support. With this blog post I finally want to keep my promise. That said, let’s start with OSGi Remote Services.

Motivation

First I want to explain the motivation for having a closer look at OSGi Remote Services. Looking at general software architecture discussions of the past years, service-oriented architectures and microservices are a huge topic. By definition, the idea of a microservices architecture is to have

  • a suite of small services
  • each running in its own process
  • communicating with a lightweight mechanism, e.g. HTTP
  • independently deployable
  • easy to replace

While new frameworks and tools came up over the years, the OSGi specifications have covered these topics for a long time. Via the service registry and the service dynamics you can build very small modules. Those modules can then be integrated into small runtimes and deployed in different environments (apart from the required JVM and, if needed, a database). The services in those small independent deployments can then be accessed in different ways, like using the HTTP Whiteboard or the JAX-RS Whiteboard. This satisfies the aspect of communication between services via lightweight mechanisms. For inhomogeneous environments the usage of those specifications is a good match. But it means that you need to implement the access layer on the provider side (e.g. the JAX-RS wrapper to access the service via REST) and the service access on the consumer side by using a corresponding framework to execute the REST calls.

Ideally the developer of the service as well as the developer of the service consumer should not need to think about the infrastructure of the whole application. Well, it is always good if everybody in a project knows about everything, but the idea is not to make your code dependent on the infrastructure. And this is where OSGi Remote Services come in. You develop the service and the service consumer as if they were executed in the same runtime. In the deployment, the lightweight communication is added to support service communication over a network.

And as initially mentioned, I want to look at how to finally get rid of the networking issues I faced in my past presentations.

Introduction

To understand this blog post you should be familiar with OSGi services and ideally with OSGi Declarative Services. If you are not familiar with OSGi DS, you can get an introduction by reading my blog post Getting Started with OSGi Declarative Services.

In short, the OSGi Service Layer specifies a Service Producer that publishes a service, and a Service Consumer that listens and retrieves a service. This is shown in the following picture:

With OSGi Remote Services this picture is basically the same. The difference is that the services are registered and consumed across network boundaries. For OSGi Remote Services the above picture could be extended to look like the following:

Glossary

To better understand the above picture and the rest of this blog post, here is a short glossary of the terms used:

  • Remote Service (Distributed Service)
    Basic specification to describe how OSGi services can be exported and imported to be available across network boundaries.
  • Distribution Provider
    Exports services by creating endpoints on the producer side, imports services by creating proxies to access endpoints on the consumer side, manages policies around the topology and discovers remote services.
  • Endpoint
    Communication access mechanism to a remote service that requires some protocol for communications.
  • Topology
    Mapping between services and endpoints as well as their communication characteristics.
  • Remote Service Admin (RSA)
    Provides the mechanisms to import and export services through a set of configuration types. It is a passive Distribution Provider, not taking any action to export or import itself.
  • Topology Manager
    Provides the policy for importing and exporting services via RSA and implements a Topology.
  • Discovery
    Discover / announce Endpoint Descriptions via some discovery protocol.
  • Endpoint Description
    A properties based description of an Endpoint that can be exchanged between different frameworks to create connections to each other’s services.

To get a slightly better understanding, the following picture shows some more details inside the Remote Service Implementation block.

Note:
Actually this picture is still a simplified version, as internally there are Endpoint Event Listeners and Remote Service Admin Listeners that are needed to trigger all the necessary actions. But to get an idea of how things play together, this picture should be sufficient.

Now let’s explain the picture in more detail:

Service Provider Runtime

  • A service is marked to be exported. This is done via service properties.
  • The Distribution Provider creates an endpoint for the exported service:
    • The Topology Manager gets informed about the exported service.
    • If the export configuration matches the Topology it instructs the Remote Service Admin to create an Endpoint.
    • The Remote Service Admin creates the Endpoint.
  • The Discovery gets informed via Endpoint Event Listener and announces the Endpoint to other systems via Endpoint Description.

Service Consumer Runtime

  • The Discovery discovers an Endpoint via Endpoint Description that was announced in the network.
  • The Distribution Provider creates a proxy for the service.
    • The Topology Manager learns from the Discovery about the newly discovered service (via Endpoint Event Listener), which then instructs the Remote Service Admin to import the service.
    • The Remote Service Admin then creates a local service proxy that is registered as service in the local OSGi runtime. This proxy is mapped to the remote service (or an alternative like a webservice).
  • The service proxy is used for wiring.

To simplify the picture again, the important takeaways are the Distribution Provider and the Discovery. The Distribution Provider is responsible for exporting and importing the service, the Discovery is responsible for announcing and discovering the service. The other terms are needed for a deeper understanding, but for a high level understanding of OSGi Remote Services, these two are sufficient.

Tutorial

Now it is time to get our hands dirty and play with OSGi Remote Services. This tutorial has several steps:

  1. Project Setup
  2. Service Implementation (API & Impl)
  3. Service Provider Runtime
  4. Service Consumer Implementation
  5. Service Consumer Runtime

There are different ways and tools available for OSGi development. In this tutorial I will use the OSGi enRoute Maven Archetypes. I have also published this tutorial with other toolings, in case you don’t want to use enRoute.

ECF – Remote Service Runtime

While the implementation and export of an OSGi service as a Remote Service is trivial at first glance, the definition of the runtime can become quite complicated. Especially collecting the necessary bundles is not that easy without some guidance.

As a reference, with Equinox as underlying OSGi framework the following bundles need to be part of the runtime as a basis:

  • Equinox OSGi
    • org.eclipse.osgi
    • org.eclipse.osgi.services
    • org.eclipse.equinox.common
    • org.eclipse.equinox.event
    • org.eclipse.osgi.util
    • org.apache.felix.scr
  • Equinox Console
    • org.apache.felix.gogo.command
    • org.apache.felix.gogo.runtime
    • org.apache.felix.gogo.shell
    • org.eclipse.equinox.console
  • ECF and dependencies
    • org.eclipse.core.jobs
    • org.eclipse.ecf
    • org.eclipse.ecf.discovery
    • org.eclipse.ecf.identity
    • org.eclipse.ecf.osgi.services.distribution
    • org.eclipse.ecf.osgi.services.remoteserviceadmin
    • org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy
    • org.eclipse.ecf.remoteservice
    • org.eclipse.ecf.remoteservice.asyncproxy
    • org.eclipse.ecf.sharedobject
    • org.eclipse.equinox.concurrent
    • org.eclipse.osgi.services.remoteserviceadmin

With the above basic runtime configuration the Remote Services will not yet work. There are still two things missing, the Discovery and the Distribution Provider. ECF provides different implementations for both. Which implementations to use needs to be defined by the project. In this tutorial we will use Zeroconf/JmDNS for the Discovery and the Generic Distribution Provider:

  • ECF Discovery – Zeroconf
    • org.eclipse.ecf.provider.jmdns
  • ECF Distribution Provider – Generic
    • org.eclipse.ecf.provider
    • org.eclipse.ecf.provider.remoteservice

Note:
You can find the list of the different implementations, together with the documentation about the bundles, configuration types and intents, in the ECF Wiki.

Project Setup

By using Maven and the OSGi enRoute archetypes you create plain Maven Java projects. This way you can use any IDE, even if you are not comfortable with Eclipse and Bndtools. The first step is to create the projects from the command line.

Workspace

Switch to a folder in which you want to create the projects. Create a minimal enRoute OSGi workspace by using the project-bare archetype:

mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=project-bare \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = enroute
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier
  • After accepting the inserted values with ‘y’ a subfolder named by the artifactId enroute is created that contains a basic minimal pom.xml file.

Service Interface

Change into the newly created folder enroute and create the Service API project by using the api archetype:

mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=api \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = api
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier.api
  • After accepting the inserted values with ‘y’ a subfolder named api is created that contains the api project structure and the api project is added as module to the parent pom.xml file.

Service Implementation

Create the Service Implementation project by using the ds-component archetype:

mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=ds-component \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = inverter
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier.inverter
  • After accepting the inserted values with ‘y’ a subfolder named inverter is created that contains the service implementation project structure and the inverter project is added as module to the parent pom.xml file.

Service Provider Runtime

With the OSGi enRoute Archetypes we create a composite application to put the modules together. This is done via the application archetype. Execute the following command in the enroute folder:

mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=application \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = inverter-app
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier
    • impl-artifactId = inverter
    • impl-groupId = org.fipro.modifier
    • impl-version = 1.0-SNAPSHOT
    • target-java-version = 11
  • First you need to decline the proposed properties configuration, as by default target-java-version = 8 will be used. After setting the correct values and accepting them with ‘y’ a subfolder named inverter-app is created that contains .bndrun files and preparations for configuring the application.

Service Consumer

To be able to test the Remote Service, we directly create the Service Consumer project by again using the ds-component archetype:

mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=ds-component \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = client
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier.client
  • After accepting the inserted values with ‘y’ a subfolder named client is created that contains the service consumer project structure and the client project is added as module to the parent pom.xml file.

Service Consumer Runtime

The consumer will be a command line application. Therefore create an application project with the application archetype similar to creating the service application:

mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=application \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = client-app
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier
    • impl-artifactId = client
    • impl-groupId = org.fipro.modifier
    • impl-version = 1.0-SNAPSHOT
    • target-java-version = 11
  • First you need to decline the proposed properties configuration, as by default target-java-version = 8 will be used. After setting the correct values and accepting them with ‘y’ a subfolder named client-app is created that contains .bndrun files and preparations for configuring the application.

Import, modify and implement

Now the projects can be imported into the IDE of your choice. As the projects are plain Maven-based Java projects, you can use any IDE. But still my choice is Eclipse with Bndtools.

  • Import the created projects via
    File – Import… – Maven – Existing Maven Projects
  • Click Next
  • Select the created enroute directory
  • Click Finish

Unfortunately the archetypes are some years old and have not been updated since then. Using the enRoute OSGi Maven Archetypes you get project skeletons that are configured for Java 8, Bndtools 4.1.0 and OSGi R7. For this tutorial it is sufficient to use OSGi R7, but let’s update to Java 11 and the current Bndtools 6.2.0.

Note:
On Windows there is a formatting issue when using the archetypes: for every additional module you create, an empty line with some spaces is added between the content lines. If you followed the tutorial and created 5 modules, you will see 5 empty lines between every two content lines. To clean this up and make the enroute/pom.xml file readable again, you can do a search and replace via regular expression in an editor of your choice. Use the following regex and replace every match with nothing:

^(?:[\t ]*(?:\r?\n|\r))+
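If you prefer to script this cleanup instead of using an editor dialog, the same regular expression can be applied programmatically. A small sketch in plain Java (the class name PomCleanup and the sample content are made up for illustration; file reading/writing is left out):

```java
import java.util.regex.Pattern;

public class PomCleanup {

    public static void main(String[] args) {
        // The same regex as above; (?m) enables MULTILINE so ^ matches at every line start
        Pattern blankLines = Pattern.compile("(?m)^(?:[\\t ]*(?:\\r?\\n|\\r))+");

        // Sample content with the empty lines (some containing spaces) the archetypes produce
        String content = "<modules>\n   \n\n  <module>api</module>\n \n  <module>inverter</module>\n</modules>\n";
        String cleaned = blankLines.matcher(content).replaceAll("");

        // All empty lines between the content lines are removed
        System.out.println(cleaned);
    }
}
```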

The following screenshot shows the settings in the Find/Replace dialog that can be used to cleanup:

  • Open the file enroute/pom.xml
  • Locate the properties section
    • Update bnd.version from 4.1.0 to 6.2.0
    • Remove maven.compiler.source
    • Remove maven.compiler.target
  • Locate the dependencyManagement section
    • Add the ECF dependencies configured like below
<!-- ECF dependencies -->
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.core.jobs</artifactId>
  <version>3.12.0</version>
</dependency>
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.equinox.concurrent</artifactId>
  <version>1.2.100</version>
</dependency>

<!-- ECF -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf</artifactId>
  <version>3.10.0</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.discovery</artifactId>
  <version>5.1.1</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.identity</artifactId>
  <version>3.9.402</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.distribution</artifactId>
  <version>2.1.600</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin</artifactId>
  <version>4.9.3</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy</artifactId>
  <version>1.0.101</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.remoteservice.asyncproxy</artifactId>
  <version>2.1.200</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.remoteservice</artifactId>
  <version>8.14.0</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.sharedobject</artifactId>
  <version>2.6.200</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.osgi.services.remoteserviceadmin</artifactId>
  <version>1.6.300</version>
</dependency>

<!-- ECF Discovery - Zeroconf -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jmdns</artifactId>
  <version>4.3.301</version>
</dependency>

<!-- ECF Distribution Provider - Generic -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider</artifactId>
  <version>4.9.1</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.remoteservice</artifactId>
  <version>4.6.1</version>
</dependency>

Note:
Unfortunately the ECF project does not have the dependencies configured in its pom.xml files, so the automated resolving of transitive dependencies in Maven does not work. The reason is obviously the usage of Tycho and the resolving of dependencies based on the MANIFEST file. While the MANIFEST-first approach is nice at development time, it makes you a bad Maven citizen by default. If a project wants to also be a good Maven citizen, it has to maintain the dependencies twice: in the MANIFEST for PDE-based development and Tycho builds, and in the dependencies section of the pom.xml file, which is actually not used in the build and creates warnings in the Tycho build.

For this example simply use the snippet above, which should help in managing the dependencies. But keep in mind that the versions might have increased in the meantime and may need to be updated.

  • Locate the pluginManagement section
    • Add the maven-compiler-plugin configured like below
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.8.1</version>
      <configuration>
        <release>11</release>
      </configuration>
    </plugin>
  • Add the api project to the dependencies of the inverter project
    • Open the file inverter/pom.xml
    • Add the following block to the dependencies section
    <dependency>
        <groupId>org.fipro.modifier</groupId>
        <artifactId>api</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
  • Add the api project to the dependencies of the client project
    • Open the file client/pom.xml
    • Add the following block to the dependencies section
    <dependency>
        <groupId>org.fipro.modifier</groupId>
        <artifactId>api</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
  • Right click on the enroute project – Maven – Update Project…
    • Have all projects checked
    • Click OK

Service Interface

Modify the api project:

  • Delete the ConsumerInterface and the ProviderInterface
  • Copy the following interface StringModifier into the api project
package org.fipro.modifier.api;

public interface StringModifier {
    String modify(String input);
}

Service Implementation

Modify the inverter project:

  • Delete the ComponentImpl class
  • Copy the following class StringInverter into the inverter project
package org.fipro.modifier.inverter;

import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;

@Component(property= {
    "service.exported.interfaces=*",
    "service.exported.configs=ecf.generic.server" }
)
public class StringInverter implements StringModifier {

    @Override
    public String modify(String input) {
        return (input != null)
            ? new StringBuilder(input).reverse().toString()
            : "No input given";
    }
}

Compared to creating a local OSGi service, the only additional thing to do is to configure that the service should be exported as a Remote Service. This is done by setting the component property service.exported.interfaces. The value of this property needs to be a list of types for which the class is registered as a service. For a simple use case like the above, the asterisk can be used, which means to export the service for all interfaces under which it is registered, but to ignore the classes. For more detailed information have a look at the Remote Service Properties section of the OSGi Compendium Specification.

The other component property used in the above example is service.exported.configs. This property is used to specify the configuration types, for which the Distribution Provider should create Endpoints. If it is not specified, the Distribution Provider is free to choose the default configuration type for the service.

Note:
In the above example we use the ECF Generic Provider, which by default chooses an SSL configuration type. So if we did not specify the configuration type, the example would not work without additional configuration.

Additionally you can specify Intents via the service.exported.intents component property to constrain the possible communication mechanisms that a distribution provider can choose to distribute a service. An example will be provided at a later step.

Service Consumer

The implementation of a Remote Service Consumer is also quite simple. From the development perspective there is nothing special to consider. The service consumer is implemented without any additions. Only the runtime needs to be extended to contain the necessary bundles for Discovery and Distribution.

The simplest way of implementing a service consumer is a Gogo Shell command.

Modify the client project:

  • Delete the ComponentImpl class
  • Copy the following class ModifyCommand into the client project
package org.fipro.modifier.client;

import java.util.List;

import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    property= {
        "osgi.command.scope:String=fipro",
        "osgi.command.function:String=modify"},
    service=ModifyCommand.class
)
public class ModifyCommand {

    @Reference
    volatile List<StringModifier> modifier;
	
    public void modify(String input) {
        if (modifier.isEmpty()) {
            System.out.println("No StringModifier registered");
        } else {
            modifier.forEach(m -> System.out.println(m.modify(input)));
        }
    }
}

Now the ECF bundles need to be added to the dependencies section of the inverter-app/pom.xml and the client-app/pom.xml. You can find the ECF bundles on Maven Central.

After the Maven Dependencies are updated, the .bndrun configuration can be updated to include the necessary bundles:

Service Provider Runtime

  • Open the inverter-app/pom.xml file
    • Add the dependencies to the ECF bundles as shown below
      (the versions are already configured in the parent pom.xml dependencyManagement section)
<!-- ECF dependencies -->
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.core.jobs</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.equinox.concurrent</artifactId>
</dependency>

<!-- ECF -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.discovery</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.identity</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.distribution</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.remoteservice.asyncproxy</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.remoteservice</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.sharedobject</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.osgi.services.remoteserviceadmin</artifactId>
</dependency>

<!-- ECF Discovery - Zeroconf -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jmdns</artifactId>
</dependency>

<!-- ECF Distribution Provider - Generic -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.remoteservice</artifactId>
</dependency>
  • Open the inverter-app/inverter-app.bndrun file
  • Add the following bundles to the Run Requirements
    • org.fipro.modifier.inverter
    • org.eclipse.ecf.osgi.services.distribution
    • org.eclipse.ecf.provider.jmdns
    • org.eclipse.ecf.provider.remoteservice
  • Save the changes
  • Click on Resolve
  • Accept the result in the opening dialog via Update
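After these steps, the Run Requirements in the source of the inverter-app.bndrun should look roughly like this (a sketch based on the bundles added above; bnd may render the filters slightly differently):

```
-runrequires: \
	osgi.identity;filter:='(osgi.identity=org.fipro.modifier.inverter)',\
	osgi.identity;filter:='(osgi.identity=org.eclipse.ecf.osgi.services.distribution)',\
	osgi.identity;filter:='(osgi.identity=org.eclipse.ecf.provider.jmdns)',\
	osgi.identity;filter:='(osgi.identity=org.eclipse.ecf.provider.remoteservice)'
```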

Now you can start the inverter-app via the Run OSGi button in the upper right corner of the editor. As nothing in the runtime produces visible output, you won’t see anything yet.

Service Consumer Runtime

The client app is a simple command line application that uses the Gogo Shell. To get the Gogo Shell up and running some additional steps need to be performed in the client-app. By default the Gogo Shell bundles are included in the project setup for the test scope and for debugging. To make them available in the compile scope:

  • Open the client-app/pom.xml file
    • Add the following block in the dependencies section for the ECF bundles
      (the versions are already configured in the parent pom.xml dependencyManagement section)
<!-- ECF dependencies -->
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.core.jobs</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.equinox.concurrent</artifactId>
</dependency>

<!-- ECF -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.discovery</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.identity</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.distribution</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.remoteservice.asyncproxy</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.remoteservice</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.sharedobject</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.osgi.services.remoteserviceadmin</artifactId>
</dependency>

<!-- ECF Discovery - Zeroconf -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jmdns</artifactId>
</dependency>

<!-- ECF Distribution Provider - Generic -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.remoteservice</artifactId>
</dependency>
  • Add the following block in the dependencies section for the Gogo Shell bundles
    (actually copied from the org.osgi.enroute:debug-bundles, so the versions are probably outdated, but sufficient for the example).
<!-- The Gogo Shell -->
<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.gogo.shell</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.gogo.runtime</artifactId>
    <version>1.0.10</version>
</dependency>
<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.gogo.command</artifactId>
    <version>1.0.2</version>
    <exclusions>
        <exclusion>
            <groupId>org.osgi</groupId>
            <artifactId>org.osgi.core</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.osgi</groupId>
            <artifactId>org.osgi.compendium</artifactId>
        </exclusion>
    </exclusions>
</dependency>
  • Open the client-app/client-app.bndrun file
  • Add the following bundles to the Run Requirements
    • org.apache.felix.gogo.command
    • org.apache.felix.gogo.shell
    • org.fipro.modifier.client
    • org.eclipse.ecf.osgi.services.distribution
    • org.eclipse.ecf.provider.jmdns
    • org.eclipse.ecf.provider.remoteservice
  • Save the changes
  • Click on Resolve
  • Accept the result in the opening dialog via Update
  • Switch to the Source tab of the .bndrun file editor and add the following section to start the console in interactive mode
-runproperties: \
    osgi.console=,\
    osgi.console.enable.builtin=false

If you now click on Run OSGi on the Run tab of the editor, the Gogo Shell becomes available in the Console view of the IDE. Once the application is started you can execute the created Gogo Shell command via

modify <input>

If services are available, it will print out the modified results. Otherwise the message “No StringModifier registered” will be printed.

Remote Service Admin Events

There are several events related to importing and exporting Remote Services that are fired by the Remote Service Admin synchronously once they happen. These events are also posted asynchronously via the OSGi Event Admin under the topic

org/osgi/service/remoteserviceadmin/<type>

Where <type> can be one of the following:

  • EXPORT_ERROR
  • EXPORT_REGISTRATION
  • EXPORT_UNREGISTRATION
  • EXPORT_UPDATE
  • EXPORT_WARNING
  • IMPORT_ERROR
  • IMPORT_REGISTRATION
  • IMPORT_UNREGISTRATION
  • IMPORT_UPDATE
  • IMPORT_WARNING

A simple event listener that prints to the console on any Remote Service Admin Event could look like this:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;

@Component(property = EventConstants.EVENT_TOPIC + "=org/osgi/service/remoteserviceadmin/*")
public class RemoteServiceEventListener implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println(event.getTopic());
        for (String objectClass : ((String[]) event.getProperty("objectClass"))) {
            System.out.println("\t" + objectClass);
        }
    }
}

For further details on the Remote Service Admin Events have a look at the OSGi Compendium Specification Chapter 122.7.

If you need to react on these events synchronously, you can implement a RemoteServiceAdminListener. I would generally not recommend this unless you really need blocking calls on import/export events, as the interface is mainly intended for internal use by the Remote Service Admin. For debugging purposes, however, the ECF project provides a DebugRemoteServiceAdminListener that writes the endpoint description via a Writer to support debugging of Remote Services. Via the following class you can easily register a DebugRemoteServiceAdminListener via OSGi DS that prints the information to the console.

import org.eclipse.ecf.osgi.services.remoteserviceadmin.DebugRemoteServiceAdminListener;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.remoteserviceadmin.RemoteServiceAdminListener;

@Component
public class DebugListener
    extends DebugRemoteServiceAdminListener
    implements RemoteServiceAdminListener {
    // register the DebugRemoteServiceAdminListener via DS
}

To test this you can either add the above components to one of the existing bundles, or create a new bundle and add that bundle to the runtimes.

Runtime Debugging

The ECF project provides several ways for runtime inspection and runtime debugging. This is mainly done via Gogo Shell commands provided by separate bundles. To enable the OSGi console and the ECF console commands, you need to add the following bundles to your runtime:

  • OSGi Console
    • org.apache.felix.gogo.command
    • org.apache.felix.gogo.runtime
    • org.apache.felix.gogo.shell
  • ECF Console
    • org.eclipse.ecf.console
    • org.eclipse.ecf.osgi.services.remoteserviceadmin.console

With the ECF Console bundles added to the runtime, there are several commands to inspect and interact with the Remote Service Admin. As an overview the available commands are listed in the wiki:
Gogo Commands for Remote Services Development

Additionally the DebugRemoteServiceAdminListener described above is activated by default with the ECF Console bundles. It can be activated or deactivated at runtime via the command

ecf:rsadebug <true/false>

To add the ECF Console bundles to the project, add the following snippet to the dependencyManagement section of the enroute/pom.xml file:

<dependency>
    <groupId>org.eclipse.ecf</groupId>
    <artifactId>org.eclipse.ecf.console</artifactId>
    <version>1.3.100</version>
</dependency>
<dependency>
    <groupId>org.eclipse.ecf</groupId>
    <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.console</artifactId>
    <version>1.3.0</version>
</dependency>

JAX-RS Distribution Provider

One of the biggest issues I faced when working with Remote Services is networking, as mentioned in the introduction. In the above example the ECF Generic Distribution Provider is used for a simpler setup. But in a corporate network with firewalls enabled somewhere in the network setup, for example, the example will probably not work. As said before, the ECF project provides multiple Distribution Provider implementations, which gives you the opportunity to configure the setup to match your project needs. One interesting implementation in that area is the JAX-RS Distribution Provider. Using it could help solve several of the networking issues related to firewalls. But as with the whole Remote Services topic, the complexity of the setup is quite high because of the increased number of dependencies that need to be resolved.

The JAX-RS Distribution Provider implementation is available for Eclipse Jersey and Apache CXF. It uses the OSGi HttpService to register the JAX-RS resource, so it also needs a Servlet container like Eclipse Jetty to serve that resource. I will show the usage of the Jersey based implementation in the following sections.

Project Setup

Unfortunately the JAX-RS Distribution Provider is not available via Maven Central. The only way to get the project setup done is to install the artifacts into the local repository, e.g. manually via mvn install:install-file. Alternatively you can use the maven-install-plugin, which can even be integrated into your Maven build if you add the artifacts to install to the source code repository. For this tutorial we install the artifacts manually, as it is the easier approach for now.
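If you choose the maven-install-plugin approach instead, its install-file goal can be bound to an early lifecycle phase so the installation happens automatically during the build. The following sketch shows this for one of the artifacts; the libs/ directory is an assumption for wherever you store the downloaded jars in your source code repository, and each further artifact would get its own execution block:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-install-plugin</artifactId>
  <executions>
    <execution>
      <!-- install the downloaded jar into the local repository on every build -->
      <id>install-ecf-jaxrs</id>
      <phase>validate</phase>
      <goals>
        <goal>install-file</goal>
      </goals>
      <configuration>
        <file>${project.basedir}/libs/org.eclipse.ecf.provider.jaxrs_1.7.1.202202112253.jar</file>
        <groupId>org.eclipse.ecf</groupId>
        <artifactId>org.eclipse.ecf.provider.jaxrs</artifactId>
        <version>1.7.1</version>
        <packaging>jar</packaging>
      </configuration>
    </execution>
  </executions>
</plugin>
```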

Note:
The artifact versions in the below snippets rely on the JAX-RS Distribution Provider 1.14.6 which was the most current version at the time this tutorial was written. If there is a newer version available in the meantime you need to update the snippets.

  • Download the JaxRSProviders archive from GitHub
  • Extract the following artifacts (located in build/plugins inside the archive) into a temporary directory
    • org.eclipse.ecf.provider.jaxrs.client_1.8.1.202202112253.jar
    • org.eclipse.ecf.provider.jaxrs.server_1.11.1.202202112253.jar
    • org.eclipse.ecf.provider.jaxrs_1.7.1.202202112253.jar
    • org.eclipse.ecf.provider.jersey.client_1.8.2.202202112253.jar
    • org.eclipse.ecf.provider.jersey.server_1.11.1.202202112253.jar
  • Open a shell and execute the following commands to install the artifacts to the local Maven repository
mvn install:install-file \
  -Dfile=org.eclipse.ecf.provider.jaxrs_1.7.1.202202112253.jar \
  -DgroupId=org.eclipse.ecf \
  -DartifactId=org.eclipse.ecf.provider.jaxrs \
  -Dversion=1.7.1 \
  -Dpackaging=jar
  

mvn install:install-file \
  -Dfile=org.eclipse.ecf.provider.jaxrs.server_1.11.1.202202112253.jar \
  -DgroupId=org.eclipse.ecf \
  -DartifactId=org.eclipse.ecf.provider.jaxrs.server \
  -Dversion=1.11.1 \
  -Dpackaging=jar
  
 
mvn install:install-file \
  -Dfile=org.eclipse.ecf.provider.jersey.server_1.11.1.202202112253.jar \
  -DgroupId=org.eclipse.ecf \
  -DartifactId=org.eclipse.ecf.provider.jersey.server \
  -Dversion=1.11.1 \
  -Dpackaging=jar
  
  
mvn install:install-file \
  -Dfile=org.eclipse.ecf.provider.jaxrs.client_1.8.1.202202112253.jar \
  -DgroupId=org.eclipse.ecf \
  -DartifactId=org.eclipse.ecf.provider.jaxrs.client \
  -Dversion=1.8.1 \
  -Dpackaging=jar
  
  
mvn install:install-file \
  -Dfile=org.eclipse.ecf.provider.jersey.client_1.8.2.202202112253.jar \
  -DgroupId=org.eclipse.ecf \
  -DartifactId=org.eclipse.ecf.provider.jersey.client \
  -Dversion=1.8.2 \
  -Dpackaging=jar
  • Open the file enroute/pom.xml
    • Locate the dependencyManagement section
    • Add the JAX-RS Distribution Provider dependencies configured like below
<!-- ECF JAX-RS Distribution Provider -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jaxrs</artifactId>
  <version>1.7.1</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jaxrs.server</artifactId>
  <version>1.11.1</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jersey.server</artifactId>
  <version>1.11.1</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jaxrs.client</artifactId>
  <version>1.8.1</version>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jersey.client</artifactId>
  <version>1.8.2</version>
</dependency>

<!-- ECF JAX-RS Distribution Provider Dependencies -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.10.1</version>
</dependency>
<dependency>
  <groupId>com.fasterxml.jackson.jaxrs</groupId>
  <artifactId>jackson-jaxrs-json-provider</artifactId>
  <version>2.10.1</version>
</dependency>

<dependency>
  <groupId>org.glassfish.jersey.containers</groupId>
  <artifactId>jersey-container-servlet</artifactId>
  <version>2.30.1</version>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.containers</groupId>
  <artifactId>jersey-container-servlet-core</artifactId>
  <version>2.30.1</version>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.core</groupId>
  <artifactId>jersey-client</artifactId>
  <version>2.30.1</version>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.media</groupId>
  <artifactId>jersey-media-json-jackson</artifactId>
  <version>2.30.1</version>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.inject</groupId>
  <artifactId>jersey-hk2</artifactId>
  <version>2.30.1</version>
</dependency>

<dependency>
  <groupId>org.apache.felix</groupId>
  <artifactId>org.apache.felix.http.jetty</artifactId>
  <version>4.1.14</version>
</dependency>

Note:
I have chosen the same dependency versions that the JAX-RS Distribution Provider uses. There are already newer versions available, so you can check whether newer versions work as well. Also note that the above snippet is the minimal necessary configuration; all other dependencies are resolved transitively. I have chosen this approach to keep the snippet small.

JAX-RS Remote Service Implementation

The implementation of the service already looks different compared to what you have seen so far. Instead of only adding the necessary Component Properties to configure the service as a Remote Service, the service implementation directly contains the JAX-RS annotations. That of course also means that the annotations need to be available on the classpath.

  • Create the Service Implementation project by using the ds-component archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=ds-component \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = uppercase
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier.uppercase
  • After accepting the inserted values with ‘y’ a subfolder named uppercase is created that contains the service implementation project structure, and the uppercase project is added as a module to the parent pom.xml file.
  • Import the created project via
    File – Import… – Maven – Existing Maven Projects
  • Click Next
  • Select the enroute directory
  • Select the created uppercase project
  • Click Finish
  • Add the api project and jakarta.ws.rs-api to the dependencies of the uppercase project
    • Open the file uppercase/pom.xml
    • Add the following block to the dependencies section
    <dependency>
        <groupId>org.fipro.modifier</groupId>
        <artifactId>api</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>jakarta.ws.rs</groupId>
        <artifactId>jakarta.ws.rs-api</artifactId>
        <version>2.1.6</version>
    </dependency>
  • Delete the ComponentImpl class
  • Copy the following class UppercaseModifier into the uppercase project
package org.fipro.modifier.uppercase;

import java.util.Locale;

import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

//The JAX-RS path annotation for this service
@Path("/modify")
//The OSGi DS component annotation
@Component(
    immediate = true,
    property = { 
        "service.exported.interfaces=*",
        "service.exported.intents=jaxrs"})
public class UppercaseModifier implements StringModifier {

    @GET
    // The JAX-RS annotation to specify the result type
    @Produces(MediaType.TEXT_PLAIN)
    // The JAX-RS annotation to specify that the last part
    // of the URL is used as method parameter
    @Path("/{value}")
    @Override
    public String modify(@PathParam("value") String input) {
        return (input != null)
            ? input.toUpperCase(Locale.getDefault())
            : "No input given";
    }
}

For the JAX-RS annotations, please have a look at the various existing tutorials and blog posts on the internet.

About the OSGi DS configuration:

  • The service is an Immediate Component, so it is consumed by the OSGi Http Whiteboard on startup
  • Export all interfaces as Remote Service via service.exported.interfaces=*
  • Configure that JAX-RS is used as communication mechanism by the distribution provider via service.exported.intents=jaxrs

Note:
As mentioned earlier there is a bug in ECF 3.14.26, which is integrated in the Eclipse 2021-12 SimRel repo. The service.exported.intents property is not enough to get the JAX-RS resource registered. Additionally it is necessary to set service.exported.configs=ecf.jaxrs.jersey.server to make it work. This was fixed shortly after I reported it and is included in the current ECF 3.14.31 release. The basic idea of the intent configuration is to make the service independent of the underlying JAX-RS Distribution Provider implementation (Jersey vs. Apache CXF).

JAX-RS Jersey Distribution Provider Dependencies

For the JAX-RS Distribution Provider Runtime a lot more dependencies are required. The following list should cover the additional necessary base dependencies:

  • Jackson
    • com.fasterxml.jackson.core.jackson-annotations
    • com.fasterxml.jackson.core.jackson-core
    • com.fasterxml.jackson.core.jackson-databind
    • com.fasterxml.jackson.jaxrs.jackson-jaxrs-base
    • com.fasterxml.jackson.jaxrs.jackson-jaxrs-json-provider
    • com.fasterxml.jackson.module.jackson-module-jaxb-annotations
  • Jersey / Glassfish / Dependencies
    • org.glassfish.hk2.api
    • org.glassfish.hk2.external.aopalliance-repackaged
    • org.glassfish.hk2.external.jakarta.inject
    • org.glassfish.hk2.locator
    • org.glassfish.hk2.osgi-resource-locator
    • org.glassfish.hk2.utils
    • org.glassfish.jersey.containers.jersey-container-servlet
    • org.glassfish.jersey.containers.jersey-container-servlet-core
    • org.glassfish.jersey.core.jersey-client
    • org.glassfish.jersey.core.jersey-common
    • org.glassfish.jersey.core.jersey-server
    • org.glassfish.jersey.ext.jersey-entity-filtering
    • org.glassfish.jersey.inject.jersey-hk2
    • org.glassfish.jersey.media.jersey-media-jaxb
    • org.glassfish.jersey.media.jersey-media-json-jackson
    • com.sun.activation.javax.activation
    • jakarta.annotation-api
    • javax.ws.rs-api
    • jakarta.xml.bind-api
    • javassist
    • javax.validation.api
    • org.slf4j.api

For the Service Provider we need the following dependencies, which are the JAX-RS Jersey Distribution Provider Server bundles, Jetty as embedded server and the HTTP Whiteboard:

  • ECF Distribution Provider – JAX-RS Jersey
    • org.eclipse.ecf.provider.jaxrs
    • org.eclipse.ecf.provider.jaxrs.server
    • org.eclipse.ecf.provider.jersey.server
  • Jetty / Http Whiteboard / Http Service
    • org.apache.felix.http.jetty
    • org.apache.felix.http.servlet-api

For the Service Consumer we need the following dependencies, which are the JAX-RS Jersey Distribution Provider Client bundles and the HttpClient to be able to access the JAX-RS resource:

  • ECF Distribution Provider – JAX-RS Jersey
    • org.eclipse.ecf.provider.jaxrs
    • org.eclipse.ecf.provider.jaxrs.client
    • org.eclipse.ecf.provider.jersey.client

Service Provider Runtime

  • Create the Service Provider Runtime project using the application archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=application \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = uppercase-app
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier
    • impl-artifactId = uppercase
    • impl-groupId = org.fipro.modifier
    • impl-version = 1.0-SNAPSHOT
    • target-java-version = 11
  • First you need to decline the properties configuration, as by default target-java-version = 8 will be used. After setting the correct values and accepting them with ‘y’ a subfolder named uppercase-app is created that contains .bndrun files and preparations for configuring the application.
  • Import the created project via
    File – Import… – Maven – Existing Maven Projects
  • Click Next
  • Select the enroute directory
  • Select the created uppercase-app project
  • Click Finish
  • Open the uppercase-app/pom.xml file
    • Add the dependencies to the ECF bundles as shown below
      (the versions are already configured in the parent pom.xml dependencyManagement section)
<!-- ECF dependencies -->
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.core.jobs</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.equinox.concurrent</artifactId>
</dependency>

<!-- ECF -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.discovery</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.identity</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.distribution</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.remoteservice.asyncproxy</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.remoteservice</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.sharedobject</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.osgi.services.remoteserviceadmin</artifactId>
</dependency>

<!-- ECF Discovery - Zeroconf -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jmdns</artifactId>
</dependency>

<!-- ECF JAX-RS Distribution Provider -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jaxrs</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jaxrs.server</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jersey.server</artifactId>
</dependency>

<!-- ECF JAX-RS Distribution Provider Dependencies -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
  <groupId>com.fasterxml.jackson.jaxrs</groupId>
  <artifactId>jackson-jaxrs-json-provider</artifactId>
</dependency>

<dependency>
  <groupId>org.glassfish.jersey.containers</groupId>
  <artifactId>jersey-container-servlet</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.containers</groupId>
  <artifactId>jersey-container-servlet-core</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.core</groupId>
  <artifactId>jersey-client</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.media</groupId>
  <artifactId>jersey-media-json-jackson</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.inject</groupId>
  <artifactId>jersey-hk2</artifactId>
</dependency>
  • Open the uppercase-app/uppercase-app.bndrun file
    • Add the following bundles to the Run Requirements
      • org.fipro.modifier.uppercase
      • org.eclipse.ecf.osgi.services.distribution
      • org.eclipse.ecf.provider.jmdns
      • org.eclipse.ecf.provider.jersey.server
      • org.apache.felix.http.jetty
      • org.eclipse.equinox.event
    • Add the following property to the OSGi Framework properties:
      • org.osgi.service.http.port=8181
    • Save the changes
    • Click on Resolve
    • Accept the result in the opening dialog via Update
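If you prefer to edit the Source tab of the .bndrun file directly, the port configuration is a plain framework property in the -runproperties instruction. A sketch, assuming no other properties are set yet (otherwise merge it with the existing -runproperties block):

```
-runproperties: \
    org.osgi.service.http.port=8181
```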

Note:
With the latest version of the JAX-RS Distribution Provider, the .bndrun configuration is much more convenient than before. There were several improvements to make the definition of a runtime more user friendly, so if you are already familiar with the JAX-RS Distribution Provider and used it in the past, be sure to update to the latest version to benefit from these modifications.

Now you can start the Uppercase JAX-RS Service Runtime from the Overview tab via Launch an Eclipse application. After the runtime is started the service will be available as JAX-RS resource and can be accessed in a browser, e.g. http://localhost:8181/modify/remoteservice

Note:
Unfortunately with the above setup you will see a 404 instead of the service result. It seems that with Jetty 9 the usage of the base URL does not work for Remote Services. Maybe it is only a configuration issue that I was not able to solve as part of this tutorial. There are two options to handle this issue: either configure additional path segments or use Jetty 10.

Note:
Don’t worry if you see a SelectContainerException in the console. It only informs you that the service from the first part of the tutorial cannot be imported into the runtime of this part of the tutorial, and vice versa. The first service is distributed via the Generic Provider, while the second service is distributed via the JAX-RS Provider, but both use the JmDNS Discovery Provider.

The URL path is defined via the JAX-RS annotations: “modify” comes from @Path("/modify") on the class, and “remoteservice” is the path parameter defined via @Path("/{value}") on the method (if you change that value, the result will change accordingly). You can extend the URL via the configurations shown below:

  • Add a prefix URL path segment on runtime level:
    Add the following property to the OSGi Framework properties
    ecf.jaxrs.server.pathPrefix=<value>
    (e.g. ecf.jaxrs.server.pathPrefix=/services)
  • Add a leading URL path segment on service level:
    Add the following component property to the @Component annotation
    ecf.jaxrs.server.pathPrefix=<value>
    e.g.
@Component(
    immediate = true,
    property = {
        "service.exported.interfaces=*",
        "service.exported.configs=ecf.jaxrs.jersey.server",
        "service.exported.intents=jaxrs",
        "ecf.jaxrs.server.pathPrefix=/upper"})

If all of the above configurations are added, the new URL to the service is, e.g. http://localhost:8181/services/upper/modify/remoteservice

Additional information about available component properties can be found here: Jersey Service Properties

Service Provider Runtime – Jetty 10

With the above setup the bundle org.apache.felix.http.jetty is integrated in the runtime. That bundle combines the following:

  • OSGi Http Service
  • OSGi Http Whiteboard
  • Jetty 9

This makes the integration very easy. If you want to update to Jetty 10, the setup is more complicated, as Jetty 10 is not available as a combined Felix bundle. In that case you need the following bundles:

  • Jetty 10
    • org.eclipse.jetty.http
    • org.eclipse.jetty.io
    • org.eclipse.jetty.security
    • org.eclipse.jetty.server
    • org.eclipse.jetty.servlet
    • org.eclipse.jetty.util
    • org.eclipse.jetty.util.ajax
  • OSGi Http Service and Http Whiteboard (Equinox / Jetty)
    • org.eclipse.equinox.http.jetty
    • org.eclipse.equinox.http.servlet
  • OSGi Service Interfaces
    • org.eclipse.osgi.services

First you create a new Service Provider Runtime project that includes Jetty 10:

mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=application \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = uppercase-app-jetty10
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier
    • impl-artifactId = uppercase
    • impl-groupId = org.fipro.modifier
    • impl-version = 1.0-SNAPSHOT
    • target-java-version = 11
  • First you need to decline the properties configuration, as by default target-java-version = 8 will be used. After setting the correct values and accepting them with ‘y’ a subfolder named uppercase-app-jetty10 is created that contains .bndrun files and preparations for configuring the application.
  • Import the created project via
    File – Import… – Maven – Existing Maven Projects
  • Click Next
  • Select the enroute directory
  • Select the created uppercase-app-jetty10 project
  • Click Finish
  • Open the uppercase-app-jetty10/pom.xml file
    • Add the dependencies to ECF, Jetty 10 and Equinox Http as shown below
      (of course you can also configure the versions for Jetty 10 etc. in the enroute/pom.xml dependencyManagement section as described before)
<!-- ECF dependencies -->
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.core.jobs</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.equinox.concurrent</artifactId>
</dependency>

<!-- ECF -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.discovery</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.identity</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.distribution</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.remoteservice.asyncproxy</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.remoteservice</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.sharedobject</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.osgi.services.remoteserviceadmin</artifactId>
</dependency>

<!-- ECF Discovery - Zeroconf -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jmdns</artifactId>
</dependency>

<!-- ECF JAX-RS Distribution Provider -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jaxrs</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jaxrs.server</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jersey.server</artifactId>
</dependency>

<!-- ECF JAX-RS Distribution Provider Dependencies -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
  <groupId>com.fasterxml.jackson.jaxrs</groupId>
  <artifactId>jackson-jaxrs-json-provider</artifactId>
</dependency>

<dependency>
  <groupId>org.glassfish.jersey.containers</groupId>
  <artifactId>jersey-container-servlet</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.containers</groupId>
  <artifactId>jersey-container-servlet-core</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.core</groupId>
  <artifactId>jersey-client</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.media</groupId>
  <artifactId>jersey-media-json-jackson</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.inject</groupId>
  <artifactId>jersey-hk2</artifactId>
</dependency>

<!-- Equinox OSGi Http Service and Http Whiteboard -->
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.osgi.services</artifactId>
  <version>3.10.200</version>
</dependency>
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.equinox.http.jetty</artifactId>
  <version>3.8.100</version>
</dependency>
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.equinox.http.servlet</artifactId>
  <version>1.7.200</version>
</dependency>

<!-- Jetty 10 -->
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-http</artifactId>
  <version>10.0.8</version>
</dependency>
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-io</artifactId>
  <version>10.0.8</version>
</dependency>
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-security</artifactId>
  <version>10.0.8</version>
</dependency>
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-server</artifactId>
  <version>10.0.8</version>
</dependency>
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-servlet</artifactId>
  <version>10.0.8</version>
</dependency>
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-util</artifactId>
  <version>10.0.8</version>
</dependency>
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-util-ajax</artifactId>
  <version>10.0.8</version>
</dependency>

<!-- Jetty 10 Dependencies -->
<dependency>
  <groupId>jakarta.servlet</groupId>
  <artifactId>jakarta.servlet-api</artifactId>
  <version>4.0.4</version>
</dependency>
<dependency>
  <groupId>jakarta.xml.bind</groupId>
  <artifactId>jakarta.xml.bind-api</artifactId>
  <version>2.3.3</version>
</dependency>

<!-- Gogo Shell & ECF Console - optionally -->
<dependency>
  <groupId>org.apache.felix</groupId>
  <artifactId>org.apache.felix.gogo.shell</artifactId>
  <version>1.0.0</version>
</dependency>
<dependency>
  <groupId>org.apache.felix</groupId>
  <artifactId>org.apache.felix.gogo.runtime</artifactId>
  <version>1.0.10</version>
</dependency>
<dependency>
  <groupId>org.apache.felix</groupId>
  <artifactId>org.apache.felix.gogo.command</artifactId>
  <version>1.0.2</version>
  <exclusions>
    <exclusion>
      <groupId>org.osgi</groupId>
      <artifactId>org.osgi.core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.osgi</groupId>
      <artifactId>org.osgi.compendium</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.console</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.console</artifactId>
</dependency>
  • Open the file uppercase-app-jetty10.bndrun
    • Add the following bundles to the Run Requirements
      • org.fipro.modifier.uppercase
      • org.eclipse.ecf.osgi.services.distribution
      • org.eclipse.ecf.provider.jmdns
      • org.eclipse.ecf.provider.jersey.server
      • org.eclipse.equinox.http.jetty
      • org.eclipse.equinox.http.servlet
      • org.eclipse.jetty.http
      • org.eclipse.jetty.io
      • org.eclipse.jetty.security
      • org.eclipse.jetty.server
      • org.eclipse.jetty.servlet
      • org.eclipse.jetty.util
      • org.eclipse.jetty.util.ajax
      • org.eclipse.equinox.event
    • Add org.apache.felix.http.jetty to the Run Blacklist
      (this is necessary to prevent this bundle from being used by the resolve step)
    • Optional: Add the following console bundles for debugging and inspection
      • org.apache.felix.gogo.command
      • org.apache.felix.gogo.shell
      • org.eclipse.ecf.osgi.services.remoteserviceadmin.console
    • Set Execution Env.: JavaSE-11
    • Add the following properties to the OSGi Framework properties:
      • org.osgi.service.http.port=8181
      • launch.activation.eager=true
    • Optional: Add the following properties to the OSGi Framework properties for the interactive console:
      • osgi.console=
      • osgi.console.enable.builtin=false
    • Save the changes
    • Click on Resolve
    • Accept the result in the opening dialog via Update

Note:
The OSGi Framework property launch.activation.eager=true is necessary because of the activation policy set in the Equinox Jetty Http Service bundle. It is configured to be activated lazily, which means it will only be activated once someone requests something from that bundle. But as Equinox collects all OSGi service interfaces in org.eclipse.osgi.services, nobody will ever request anything from the Jetty bundle, which leaves it in the STARTING state forever. With the launch.activation.eager property the lazy activation policy is ignored and all bundles are simply started. Bug 530076 was created to discuss whether the lazy activation could be dropped.

Note:
Unfortunately you cannot include org.apache.felix.webconsole in a Jetty 10 runtime. The reason is the Servlet API version dependency of the webconsole: org.apache.felix.webconsole requires javax.servlet;version="[2.4,4)" even in its latest version, while org.eclipse.jetty.servlet requires javax.servlet;version="[4.0.0,5)". So if you want to use the webconsole in your JAX-RS Remote Service runtime, you need to stick with Jetty 9.

Note:
It is currently not possible to use Jetty 11 for OSGi development, as the OSGi implementations have not yet been updated to the jakarta namespace.

For an overview on the Jetty versions and dependencies, have a look at the Jetty Downloads page.

Service Consumer Runtime

To consume the Remote Service provided via JAX-RS Distribution Provider, the runtime needs to be extended to include the additional dependencies:

  • Open the file client-app/pom.xml
    • Add the following snippet to the dependencies section
<!-- ECF JAX-RS Distribution Provider -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jaxrs</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jaxrs.client</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jersey.client</artifactId>
</dependency>

<!-- ECF JAX-RS Distribution Provider Dependencies -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
  <groupId>com.fasterxml.jackson.jaxrs</groupId>
  <artifactId>jackson-jaxrs-json-provider</artifactId>
</dependency>

<dependency>
  <groupId>org.glassfish.jersey.containers</groupId>
  <artifactId>jersey-container-servlet</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.containers</groupId>
  <artifactId>jersey-container-servlet-core</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.core</groupId>
  <artifactId>jersey-client</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.media</groupId>
  <artifactId>jersey-media-json-jackson</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.inject</groupId>
  <artifactId>jersey-hk2</artifactId>
</dependency>

<!-- ECF Console -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.console</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.console</artifactId>
</dependency>
  • Open the file client-app/client-app.bndrun
    • Add the following bundle to the Run Requirements
      • org.eclipse.ecf.provider.jersey.client
    • Save the changes
    • Click on Resolve to update the Run Bundles

If you now start the Service Consumer Runtime and have the Service Provider Runtime also running, you can execute the following command

modify jax

This will actually lead to an error if you followed my tutorial step by step:

ServiceException: Service exception on remote service proxy

The reason is that the Service Interface does not contain the JAX-RS annotations that the service implementation has, and therefore the mapping is not working. So while the interface does not need to be modified for providing the service, it has to be modified for the consumer side.

Extend the Service Interface

  • Open the file api/pom.xml
  • Add the following snippet to the dependencies section
    <dependency>
        <groupId>jakarta.ws.rs</groupId>
        <artifactId>jakarta.ws.rs-api</artifactId>
        <version>2.1.6</version>
    </dependency>
  • Open the StringModifier interface and add the JAX-RS annotations so that they are exactly the same as in the Service Implementation
package org.fipro.modifier.api;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/modify")
public interface StringModifier {
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    @Path("/{value}")
    String modify(@PathParam("value") String input);
}

If you now start the Uppercase Service Provider Runtime and the Service Consumer Runtime again, the error should be gone and you should see the expected result.

Update the “Inverter” Service Provider Runtime

After the Service Interface was extended to include the JAX-RS annotations, the first Service Provider Runtime will not resolve anymore because of missing dependencies. To fix this:

  • Open the file inverter-app/inverter-app.bndrun
    • Click on Resolve to update the Run Bundles

Now you can start that Service Provider Runtime again. If the other Service Provider and the Service Consumer are also active, executing the modify command will now output the result of both services.

Endpoint Description Extender Format (EDEF)

In the tutorial we used JmDNS/Zeroconf as the Discovery Provider. This way there is not much to do as a developer or administrator apart from adding the corresponding bundle to the runtime. This kind of Discovery uses a broadcast mechanism to announce the service in the network. In cases where this does not work, e.g. because firewall rules block broadcasting, you can use a static file-based discovery instead. This can be done using the Endpoint Description Extender Format (EDEF), which is also supported by ECF.

Let’s create an additional service that is distributed via JAX-RS. But this time we exclude the org.eclipse.ecf.provider.jmdns bundle, so there is no additional discovery inside the Service Provider Runtime. We also add the console bundles to be able to inspect the runtime.

Note:
If you don’t want to create another service, you can also modify the previous uppercase service. In that case remove the org.eclipse.ecf.provider.jmdns bundle from the runtime configuration and ensure that the console bundles are added, to be able to inspect the remote service runtime via the OSGi Console.

  • Create the Service Implementation project by using the ds-component archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=ds-component \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = camelcase
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier.camelcase
  • After accepting the inserted values with ‘y’, a subfolder named camelcase is created that contains the service implementation project structure, and the camelcase project is added as a module to the parent pom.xml file.
  • Create the Service Provider Runtime project using the application archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=application \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = camelcase-app
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier
    • impl-artifactId = camelcase
    • impl-groupId = org.fipro.modifier
    • impl-version = 1.0-SNAPSHOT
    • target-java-version = 11
  • First you need to decline the proposed property configuration, as by default target-java-version = 8 will be used. After setting the correct values and accepting them with ‘y’, a subfolder named camelcase-app is created that contains .bndrun files and preparations for configuring the application.
  • Import the created projects via
    File – Import… – Maven – Existing Maven Projects
  • Click Next
  • Select the enroute directory
  • Select the created camelcase and camelcase-app projects
  • Click Finish
  • Add the api project and jakarta.ws.rs-api to the dependencies of the camelcase project
    • Open the file camelcase/pom.xml
    • Add the following block to the dependencies section
    <dependency>
        <groupId>org.fipro.modifier</groupId>
        <artifactId>api</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>jakarta.ws.rs</groupId>
        <artifactId>jakarta.ws.rs-api</artifactId>
        <version>2.1.6</version>
    </dependency>
  • Delete the ComponentImpl class
  • Copy the following class CamelCaseModifier into the camelcase project
package org.fipro.modifier.camelcase;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;

@Path("/modify")
@Component(
    immediate = true,
    property = { 
        "service.exported.interfaces=*",
        "service.exported.intents=jaxrs",
        "ecf.jaxrs.server.pathPrefix=/camelcase"})
public class CamelCaseModifier implements StringModifier {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    @Path("/{value}")
    @Override
    public String modify(@PathParam("value") String input) {
        StringBuilder builder = new StringBuilder();
        if (input != null) {
            for (int i = 0; i < input.length(); i++) {
                char currentChar = input.charAt(i);
                if (i % 2 == 0) {
                    builder.append(Character.toUpperCase(currentChar));
                } else {
                    builder.append(Character.toLowerCase(currentChar));
                }
            }
        }
        else {
            builder.append("No input given");
        }
        return builder.toString();
    }
}
  • Open the camelcase-app/pom.xml file
    • Add the dependencies to the ECF bundles as shown below
      (the versions are already configured in the parent pom.xml dependencyManagement section)
<!-- ECF dependencies -->
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.core.jobs</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.equinox.concurrent</artifactId>
</dependency>

<!-- ECF -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.discovery</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.identity</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.distribution</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.remoteservice.asyncproxy</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.remoteservice</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.sharedobject</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.osgi.services.remoteserviceadmin</artifactId>
</dependency>

<!-- ECF JAX-RS Distribution Provider -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jaxrs</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jaxrs.server</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.provider.jersey.server</artifactId>
</dependency>

<!-- ECF JAX-RS Distribution Provider Dependencies -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
  <groupId>com.fasterxml.jackson.jaxrs</groupId>
  <artifactId>jackson-jaxrs-json-provider</artifactId>
</dependency>

<dependency>
  <groupId>org.glassfish.jersey.containers</groupId>
  <artifactId>jersey-container-servlet</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.containers</groupId>
  <artifactId>jersey-container-servlet-core</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.core</groupId>
  <artifactId>jersey-client</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.media</groupId>
  <artifactId>jersey-media-json-jackson</artifactId>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.inject</groupId>
  <artifactId>jersey-hk2</artifactId>
</dependency>

<!-- The Gogo Shell -->
<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.gogo.shell</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.gogo.runtime</artifactId>
    <version>1.0.10</version>
</dependency>
<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.gogo.command</artifactId>
    <version>1.0.2</version>
    <exclusions>
        <exclusion>
            <groupId>org.osgi</groupId>
            <artifactId>org.osgi.core</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.osgi</groupId>
            <artifactId>org.osgi.compendium</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<!-- ECF Console -->
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.console</artifactId>
</dependency>
<dependency>
  <groupId>org.eclipse.ecf</groupId>
  <artifactId>org.eclipse.ecf.osgi.services.remoteserviceadmin.console</artifactId>
</dependency>
  • Open the camelcase-app/camelcase-app.bndrun file
    • Add the following bundles to the Run Requirements
      • org.fipro.modifier.camelcase
      • org.eclipse.ecf.osgi.services.distribution
      • org.eclipse.ecf.provider.jersey.server
      • org.apache.felix.http.jetty
      • org.eclipse.equinox.event
      • org.apache.felix.gogo.command
      • org.apache.felix.gogo.shell
      • org.eclipse.ecf.osgi.services.remoteserviceadmin.console
    • Add the following property to the OSGi Framework properties:
      • osgi.console=
      • osgi.console.enable.builtin=false
      • org.osgi.service.http.port=8282
      • ecf.jaxrs.server.pathPrefix=/services
    • Save the changes
    • Click on Resolve
    • Accept the result in the opening dialog via Update

Once the runtime is started via Run OSGi the service should be available via http://localhost:8282/services/camelcase/modify/remoteservice
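A quick way to verify the endpoint is to open that URL in a browser or with any HTTP client. Because the camelcase service upper-cases characters at even positions and lower-cases the rest, the expected text/plain response body can be computed locally. The following helper class is only an illustration for this sanity check and is not part of the tutorial projects:

```java
public class CamelCaseCheck {

    // Mirrors CamelCaseModifier#modify: upper-case at even indices,
    // lower-case at odd indices
    static String expectedResponse(String input) {
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            builder.append(i % 2 == 0
                    ? Character.toUpperCase(c)
                    : Character.toLowerCase(c));
        }
        return builder.toString();
    }

    public static void main(String[] args) {
        // GET http://localhost:8282/services/camelcase/modify/remoteservice
        // should return exactly this value
        System.out.println(expectedResponse("remoteservice")); // ReMoTeSeRvIcE
    }
}
```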

You probably noticed a console output on startup that shows the Endpoint Description XML. This is actually what we need for the EDEF file. You can also get the endpoint description at runtime via the ECF Gogo Command listexports <endpoint.id>:

osgi> listexports
endpoint.id                          |Exporting Container ID                       |Exported Service Id
5918da3a-a971-429f-9ff6-87abc70d4742 |http://localhost:8282/services/camelcase     |38

osgi> listexports 5918da3a-a971-429f-9ff6-87abc70d4742
<endpoint-descriptions xmlns="http://www.osgi.org/xmlns/rsa/v1.0.0">
  <endpoint-description>
    <property name="ecf.endpoint.id" value-type="String" value="http://localhost:8282/services/camelcase"/>
    <property name="ecf.endpoint.id.ns" value-type="String" value="ecf.namespace.jaxrs"/>
    <property name="ecf.endpoint.ts" value-type="Long" value="1642667915518"/>
    <property name="ecf.jaxrs.server.pathPrefix" value-type="String" value="/camelcase"/>
    <property name="ecf.rsvc.id" value-type="Long" value="1"/>
    <property name="endpoint.framework.uuid" value-type="String" value="80778aff-63c7-448d-92a5-7902eb6782ae"/>
    <property name="endpoint.id" value-type="String" value="5918da3a-a971-429f-9ff6-87abc70d4742"/>
    <property name="endpoint.package.version.org.fipro.modifier" value-type="String" value="1.0.0"/>
    <property name="endpoint.service.id" value-type="Long" value="38"/>
    <property name="objectClass" value-type="String">
      <array>
        <value>org.fipro.modifier.StringModifier</value>
      </array>
    </property>
    <property name="remote.configs.supported" value-type="String">
      <array>
        <value>ecf.jaxrs.jersey.server</value>
      </array>
    </property>
    <property name="remote.intents.supported" value-type="String">
      <array>
        <value>passByValue</value>
        <value>exactlyOnce</value>
        <value>ordered</value>
        <value>osgi.async</value>
        <value>osgi.private</value>
        <value>osgi.confidential</value>
        <value>jaxrs</value>
      </array>
    </property>
    <property name="service.imported" value-type="String" value="true"/>
    <property name="service.imported.configs" value-type="String">
      <array>
        <value>ecf.jaxrs.jersey.server</value>
      </array>
    </property>
    <property name="service.intents" value-type="String">
      <array>
        <value>jaxrs</value>
      </array>
    </property>
  </endpoint-description>
</endpoint-descriptions>

The endpoint description is needed by the Service Consumer to discover the new service. Without a Discovery that broadcasts, the service needs to be discovered statically via an EDEF file. As the EDEF file is registered via a manifest header, we create a new bundle. You could also place it in an existing bundle like org.fipro.modifier.client, but for some more OSGi dynamics fun, let’s create a new one.

  • Create the EDEF configuration bundle project
    • File -> New -> Other… -> Maven -> Maven Module
    • Click Next
    • Check Create a simple project
    • Set Module Name: to client-edef
    • Select Parent Project: enroute
    • Click Next
    • Click Finish
  • Create a new folder edef
    • Right click on the source folder src/main/resources -> New -> Folder
    • Set Folder name to edef
    • Click Finish
  • Create a new file camelcase.xml in that folder
    • Right click on the edef folder -> New -> File
    • Set File name to camelcase.xml
    • Copy the Endpoint Description XML from the previous console command execution into that file
  • Create a new package edef
    • Right click on the source folder src/main/java -> New -> Package
    • Set Name to edef
    • Check Create package-info.java
    • Click Finish
  • Open src/main/java/edef/package-info.java
    • Add the Header OSGi Bundle Annotation to add the Remote-Service header to the OSGi metadata
@org.osgi.annotation.bundle.Header(name="Remote-Service", value="edef/camelcase.xml")
package edef;
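With this annotation the bnd-maven-plugin writes the header into the generated MANIFEST.MF of the bundle, which is how the Remote Service Admin implementation finds the EDEF file inside the bundle. A shortened sketch of the resulting manifest (only the relevant header is shown):

```
Remote-Service: edef/camelcase.xml
```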
  • Open the file client-edef/pom.xml
    • Add the following fragment after the artifactId
  <dependencies>
    <dependency>
      <groupId>org.osgi.enroute</groupId>
      <artifactId>osgi-api</artifactId>
      <type>pom</type>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>biz.aQute.bnd</groupId>
        <artifactId>bnd-maven-plugin</artifactId>
      </plugin>
      <plugin>
        <groupId>biz.aQute.bnd</groupId>
        <artifactId>bnd-baseline-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>

Note:
If you see an error on the project after the modification on the pom.xml file, execute a right-click on the project -> Maven -> Update Project… -> select the client-edef project or even all projects in the dialog and click Update.

  • Open the file client-app/pom.xml
    • Add the following snippet to the dependencies section
    <dependency>
        <groupId>org.fipro.modifier</groupId>
        <artifactId>client-edef</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
  • Open the file client-app/client-app.bndrun
    • Add org.fipro.modifier.client-edef to the Run Requirements
    • Save the changes
    • Click on Resolve to update the Run Bundles

If you start the Service Consumer Runtime, the service will be available immediately. This is because the new org.fipro.modifier.client-edef bundle is automatically activated by the bnd launcher (a big difference compared to Equinox). Let’s deactivate it via the console. First we need to find the bundle id via lb, and then stop the bundle via stop <bundle-id>. The output should look similar to the following snippet:

g! lb edef
START LEVEL 1
   ID|State      |Level|Name
   49|Active     |    1|client-edef (1.0.0.202202180929)|1.0.0.202202180929

g! stop 49

Now the service becomes unavailable via the modify command. If you start the bundle, the service becomes available again.

ECF Extensions to EDEF

The EDEF specification itself would not be sufficient for productive usage. For example, the values of the endpoint description properties need to match. For the endpoint.id this would be really problematic, as that value is a randomly generated UUID that changes on each runtime start. So if the Service Provider Runtime is restarted, there is a new endpoint.id value. ECF includes a mechanism to support the discovery and the distribution even if the endpoint.id of the importer and the exporter do not match. This actually makes the EDEF file support usable in productive environments.

ECF also provides a mechanism to create an endpoint description using a properties file. All the necessary endpoint description properties need to be included as properties with the respective types and values. The following example shows the properties representation for the EDEF XML of the above example. Note that for endpoint.id and endpoint.framework.uuid the type is set to uuid and the value is 0. This way ECF will generate a random UUID and the matching feature will ensure that the distribution will work even without matching id values.

ecf.endpoint.id=http://localhost:8282/services/camelcase
ecf.endpoint.id.ns=ecf.namespace.jaxrs
ecf.endpoint.ts:Long=1642761763599
ecf.jaxrs.server.pathPrefix=/camelcase
ecf.rsvc.id:Long=1
endpoint.framework.uuid:uuid=0
endpoint.id:uuid=0
endpoint.package.version.org.fipro.modifier.api=1.0.0
endpoint.service.id:Long=38
objectClass:array=org.fipro.modifier.api.StringModifier
remote.configs.supported:array=ecf.jaxrs.jersey.server
remote.intents.supported:array=passByValue,exactlyOnce,ordered,osgi.async,osgi.private,osgi.confidential,jaxrs
service.imported:boolean=true
service.imported.configs:array=ecf.jaxrs.jersey.server
service.intents:array=jaxrs
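To illustrate the key:type=value syntax, the following sketch shows how such typed entries map to Java values. Note that this is only an illustration of the format, not ECF’s actual parser; ECF reads these files itself when they are registered for discovery:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class TypedPropertiesSketch {

    // Convert a single typed value, covering the types used in the example above
    static Object parseValue(String type, String value) {
        switch (type) {
            case "Long":    return Long.valueOf(value);
            case "boolean": return Boolean.valueOf(value);
            case "array":   return value.split(",");
            case "uuid":    // value "0" means: generate a random UUID
                return "0".equals(value) ? UUID.randomUUID().toString() : value;
            default:        return value; // plain String
        }
    }

    // Parse a "key[:type]=value" line into the given map
    static Map<String, Object> parseLine(String line, Map<String, Object> target) {
        String[] keyValue = line.split("=", 2);
        String[] keyType = keyValue[0].split(":", 2);
        String type = keyType.length > 1 ? keyType[1] : "String";
        target.put(keyType[0], parseValue(type, keyValue[1]));
        return target;
    }
}
```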

Properties files can be used to override values in an underlying XML EDEF file, or even as an alternative so that the XML file is no longer needed. It is even possible to override property values for different environments, which makes this very interesting in a productive environment. So there can be a default properties file for the basic endpoint description, an endpoint description per service that derives from the basic settings, and even profile-specific settings that change, for example, the ecf.endpoint.id URLs per profile (DEV/INT/PROD). More details on that topic can be found in the ECF Wiki.

Alternatively you can also trigger a remote service import via EDEF programmatically, using classes from the org.osgi.service.remoteserviceadmin package (see below). This way it is possible to dynamically import and close remote service registrations at runtime (without operating via low-level OSGi bundle operations). The following snippet is an example of the programmatic import of the service above:

// EndpointDescription and ImportRegistration come from the
// org.osgi.service.remoteserviceadmin package; "admin" is a
// RemoteServiceAdmin service, e.g. injected via @Reference
Map<String, Object> properties = new HashMap<>();

properties.put("ecf.endpoint.id", "http://localhost:8282/services/camelcase");
properties.put("ecf.endpoint.id.ns", "ecf.namespace.jaxrs");
properties.put("ecf.endpoint.ts", 1642489801532L);
properties.put("ecf.jaxrs.server.pathPrefix", "/camelcase");
properties.put("ecf.rsvc.id", 1L);
properties.put("endpoint.framework.uuid", "0");
properties.put("endpoint.id", "0");
properties.put("endpoint.package.version.org.fipro.modifier.api", "1.0.0");
properties.put("endpoint.service.id", 38L);
properties.put("objectClass", new String[] { "org.fipro.modifier.api.StringModifier" });
properties.put("remote.configs.supported", new String[] { "ecf.jaxrs.jersey.server" });
properties.put("remote.intents.supported", new String[] { "passByValue", "exactlyOnce", "ordered", "osgi.async", "osgi.private", "osgi.confidential", "jaxrs" });
properties.put("service.imported", "true");
properties.put("service.intents", new String[] { "jaxrs" });
properties.put("service.imported.configs", new String[] { "ecf.jaxrs.jersey.server" });

EndpointDescription desc = new EndpointDescription(properties);
ImportRegistration importRegistration = admin.importService(desc);

Conclusion

The OSGi specification has several chapters, with corresponding implementations, that support a microservice architecture. The Remote Service and Remote Service Admin specifications are among them, and probably the most complicated ones, which was confirmed by several OSGi experts I talked with at conferences. The specification itself is also not easy to understand, but I hope that this blog post helps to get a better understanding.

While Remote Services are pretty easy to implement, the complicated part is setting up the runtime and collecting all the necessary bundles. While the ECF project provides several examples and also tries to support a better bundle resolving, it is still not a trivial task. I hope this tutorial also helps to solve that topic a little bit.

Of course at runtime you might face networking issues, as I did in every one of my talks. The typical fallacies are even referred to in the Remote Service Specification. With the usage of JAX-RS and HTTP for the distribution of services, and EDEF for a static file-based discovery, this might be less problematic. Give them a try if you are running into trouble.

At the end I again want to thank Scott Lewis for his continuous work on ECF and his support whenever I faced issues with my examples and had questions on some details. If you need an extension or if you have other requests regarding ECF or the JAX-RS Distribution Provider, like publishing the JAX-RS Distribution Provider on Maven Central and providing dependencies via pom.xml, please get in touch with him.

References

Posted in Dirk Fauth, Eclipse, Java, OSGi | Comments Off on Getting Started with OSGi Remote Services – enRoute Maven Archetype Edition

Getting Started with OSGi Remote Services – Bndtools Edition

At the EclipseCon Europe 2016 I held a tutorial together with Peter Kirschner named Building Nano Services with OSGi Declarative Services. The final exercise should have been the demonstration of OSGi Remote Services. It actually did not really happen because of the lack of time and networking issues. The next year at the EclipseCon Europe 2017 we joined forces again and gave a talk with the name Microservices with OSGi. In that talk we focused on OSGi Remote Services, but we again failed with the demo at the end because of networking issues. At the EclipseCon Europe 2018 I gave a talk on how to use different OSGi specifications for connecting services remotely titled How to connect your OSGi application. Of course I mentioned OSGi Remote Services there, and of course the demonstration failed again because of networking issues.

In the last years I published several blog posts and gave several talks related to OSGi, and often the topic OSGi Remote Services was raised, but never really covered in detail. Scott Lewis, the project lead of the Eclipse Communication Framework, was really helpful whenever I encountered issues with Remote Services. I promised to write a blog post about that topic as a favour for all the support. And with this blog post I finally want to keep my promise. That said, let’s start with OSGi Remote Services.

Motivation

First I want to explain the motivation for having a closer look at OSGi Remote Services. Looking at general software architecture discussions in the past, service oriented architectures and microservices are a huge topic. Per definition the idea of a microservices architecture is to have

  • a suite of small services
  • each running in its own process
  • communicating with a lightweight mechanism, e.g. HTTP
  • independently deployable
  • easy to replace

While new frameworks and tools came up over the years, the OSGi specifications have covered these topics for a long time. Via the service registry and the service dynamics you can build up very small modules. Those modules can then be integrated into small runtimes and deployed in different environments (apart from the required JVM, or a database if needed). The services in those small independent deployments can then be accessed in different ways, e.g. using the HTTP Whiteboard or the JAX-RS Whiteboard. This satisfies the aspect of communication between services via lightweight mechanisms. For inhomogeneous environments the usage of those specifications is a good match. But it means that you need to implement the access layer on the provider side (e.g. the JAX-RS wrapper to access the service via REST) and the service access on the consumer side, using a corresponding framework to execute the REST calls.

Ideally neither the developer of the service nor the developer of the service consumer should need to think about the infrastructure of the whole application. Well, it is always good that everybody in a project knows about everything, but the idea is not to make your code dependent on infrastructure. And this is where OSGi Remote Services come in. You develop the service and the service consumer as if they were executed in the same runtime. At deployment time the lightweight communication is added to support service communication over a network.

And as initially mentioned, I also want to look at ways to hopefully get rid of the networking issues I faced in my past presentations.

Introduction

To understand this blog post you should be familiar with OSGi services and ideally with OSGi Declarative Services. If you are not familiar with OSGi DS, you can get an introduction by reading my blog post Getting Started with OSGi Declarative Services.

In short, the OSGi Service Layer specifies a Service Producer that publishes a service, and a Service Consumer that listens and retrieves a service. This is shown in the following picture:

With OSGi Remote Services this picture is basically the same. The difference is that the services are registered and consumed across network boundaries. For OSGi Remote Services the above picture could be extended to look like the following:

Glossary

To understand the above picture and the following blog post better, here is a short glossary for the used terms:

  • Remote Service (Distributed Service)
    Basic specification to describe how OSGi services can be exported and imported to be available across network boundaries.
  • Distribution Provider
    Exports services by creating endpoints on the producer side, imports services by creating proxies to access endpoints on the consumer side, manages policies around the topology and discovers remote services.
  • Endpoint
    Communication access mechanism to a remote service that requires some protocol for communications.
  • Topology
    Mapping between services and endpoints as well as their communication characteristics.
  • Remote Service Admin (RSA)
    Provides the mechanisms to import and export services through a set of configuration types. It is a passive Distribution Provider, not taking any action to export or import itself.
  • Topology Manager
    Provides the policy for importing and exporting services via RSA and implements a Topology.
  • Discovery
    Discover / announce Endpoint Descriptions via some discovery protocol.
  • Endpoint Description
    A properties based description of an Endpoint that can be exchanged between different frameworks to create connections to each other’s services.
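Since an Endpoint Description is a properties-based structure, it can be sketched as a plain map. The property names below come from the Remote Service Admin specification; the values are invented for illustration:

```java
import java.util.Map;
import java.util.TreeMap;

class EndpointDescriptionSketch {

    public static void main(String[] args) {
        // Hedged sketch of a minimal Endpoint Description. The keys are
        // standard RSA properties; the values are made up.
        Map<String, Object> endpoint = new TreeMap<>(Map.of(
            "objectClass", new String[] { "org.fipro.modifier.api.StringModifier" },
            "endpoint.id", "ecftcp://localhost:3282/server",
            "endpoint.service.id", 42L,
            "service.imported.configs", "ecf.generic.server"));

        // print the property names of this description
        endpoint.keySet().forEach(System.out::println);
    }
}
```

A Discovery implementation serializes such a description (e.g. as Endpoint Description XML) and announces it on the network, so another framework can create a connection to the service.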

To get a slightly better understanding, the following picture shows some more details inside the Remote Service Implementation block.

Note:
Actually this picture is still a simplified version, as internally there are Endpoint Event Listeners and Remote Service Admin Listeners that are needed to trigger all the necessary actions. But to get an idea of how things play together, this picture should be sufficient.

Now let’s explain the picture in more detail:

Service Provider Runtime

  • A service is marked to be exported. This is done via service properties.
  • The Distribution Provider creates an endpoint for the exported service:
    • The Topology Manager gets informed about the exported service.
    • If the export configuration matches the Topology it instructs the Remote Service Admin to create an Endpoint.
    • The Remote Service Admin creates the Endpoint.
  • The Discovery gets informed via Endpoint Event Listener and announces the Endpoint to other systems via Endpoint Description.

Service Consumer Runtime

  • The Discovery discovers an Endpoint via Endpoint Description that was announced in the network.
  • The Distribution Provider creates a proxy for the service.
    • The Topology Manager learns from the Discovery about the newly discovered service (via Endpoint Event Listener), which then instructs the Remote Service Admin to import the service.
    • The Remote Service Admin then creates a local service proxy that is registered as service in the local OSGi runtime. This proxy is mapped to the remote service (or an alternative like a webservice).
  • The service proxy is used for wiring.

To simplify the picture again, the important takeaways are the Distribution Provider and the Discovery. The Distribution Provider is responsible for exporting and importing the service, the Discovery is responsible for announcing and discovering the service. The other terms are needed for a deeper understanding, but for a high level understanding of OSGi Remote Services, these two are sufficient.
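The consumer-side proxy idea can be illustrated with a plain java.lang.reflect.Proxy (this is only a conceptual sketch, not the actual ECF implementation): the invocation handler is where a real Distribution Provider would serialize the call and send it to the remote Endpoint; here the "remote" logic is faked locally.

```java
import java.lang.reflect.Proxy;

class ProxyDemo {

    // the service interface, as shared between provider and consumer
    interface StringModifier {
        String modify(String input);
    }

    public static void main(String[] args) {
        // A dynamic proxy stands in for the remote service. A real
        // Distribution Provider would perform a network call in the
        // handler; we just reverse the string locally as a stand-in.
        StringModifier proxy = (StringModifier) Proxy.newProxyInstance(
            ProxyDemo.class.getClassLoader(),
            new Class<?>[] { StringModifier.class },
            (p, method, methodArgs) -> {
                String input = (String) methodArgs[0];
                return new StringBuilder(input).reverse().toString();
            });

        // the consumer only sees the interface, never the transport
        System.out.println(proxy.modify("hello"));
    }
}
```

This is why the consumer code shown later needs no Remote-Service-specific additions: it simply gets a service object injected and calls it.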

Tutorial

Now it is time to get our hands dirty and play with OSGi Remote Services. This tutorial has several steps:

  1. Project Setup
  2. Service Implementation (API & Impl)
  3. Service Provider Runtime
  4. Service Consumer Implementation
  5. Service Consumer Runtime

There are different ways and tools available for OSGi development. In this tutorial I will use Bndtools. I also published this tutorial with other tooling in case you don’t want to use Bndtools:

ECF – Remote Service Runtime

While the implementation and export of an OSGi service as a Remote Service is trivial at first glance, the definition of the runtime can become quite complicated. Especially collecting the necessary bundles is not easy without some guidance.

As a reference, with Equinox as underlying OSGi framework the following bundles need to be part of the runtime as a basis:

  • Equinox OSGi
    • org.eclipse.osgi
    • org.eclipse.osgi.services
    • org.eclipse.equinox.common
    • org.eclipse.equinox.event
    • org.eclipse.osgi.util
    • org.apache.felix.scr
  • Equinox Console
    • org.apache.felix.gogo.command
    • org.apache.felix.gogo.runtime
    • org.apache.felix.gogo.shell
    • org.eclipse.equinox.console
  • ECF and dependencies
    • org.eclipse.core.jobs
    • org.eclipse.ecf
    • org.eclipse.ecf.discovery
    • org.eclipse.ecf.identity
    • org.eclipse.ecf.osgi.services.distribution
    • org.eclipse.ecf.osgi.services.remoteserviceadmin
    • org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy
    • org.eclipse.ecf.remoteservice
    • org.eclipse.ecf.remoteservice.asyncproxy
    • org.eclipse.ecf.sharedobject
    • org.eclipse.equinox.concurrent
    • org.eclipse.osgi.services.remoteserviceadmin

With the above basic runtime configuration the Remote Services will not yet work. There are still two things missing: the Discovery and the Distribution Provider. ECF provides different implementations for both, and which implementations to use needs to be decided per project. In this tutorial we will use Zeroconf/JmDNS for the Discovery and the Generic Distribution Provider:

  • ECF Discovery – Zeroconf
    • org.eclipse.ecf.provider.jmdns
  • ECF Distribution Provider – Generic
    • org.eclipse.ecf.provider
    • org.eclipse.ecf.provider.remoteservice

Note:
You can find the list of different implementations with the documentation about the bundles, configuration types and intents in the ECF Wiki:

Project Setup

ECF Templates

The project setup with Bndtools is different compared to the PDE tooling. With Bndtools you set up a Workspace and configure the repositories to use. The ECF project provides Workspace/Project Templates to make the setup easier.

  • Add the ECF Bndtools Workspace Template
    • Window -> Preferences -> Bndtools -> Workspace Template
    • In the GitHub Repositories section select the green plus on the right to add a new repository
    • Set Repository Name: to ECF/bndtools.workspace
    • Leave the Branch: empty to use the default branch
    • Select Validate to check if the ECF Workspace Template can be loaded
    • Select Save
    • Select Apply and close to close the Preferences Window
  • Create a Remote Services Bndtools Workspace
    • File -> New -> Other… -> Bndtools -> Bnd OSGi Workspace
    • Select the Location where the Bnd Workspace should be created
    • Select the Template GitHub -> ECF/bndtools.workspace
    • Select Next to load the ECF Workspace Template
    • Select Finish to finalize the workspace creation

Further details on the Bndtools support provided by the ECF project can be found in the Eclipse Wiki.

Bnd OSGi Templates

As an alternative to using the provided ECF Bndtools Templates, you can configure the workspace manually. This can be useful because the ECF Templates add everything ECF provides to the workspace (including examples). That is perfect for getting started and learning, but for more experienced setups it is probably too much, as you typically want to limit your repository to what you really need.

For the manual setup you create a BND OSGi Workspace by using the default bndtools/workspace:

  • Create a Bnd OSGi Workspace
    • File -> New -> Other… -> Bndtools -> Bnd OSGi Workspace
    • Select the Location where the Bnd Workspace should be created
    • Click Next
    • Select the Template GitHub -> bndtools/workspace
    • Select Next to load the workspace template
    • Select Finish to finalize the workspace creation

To add the ECF related artifacts you need to modify some files in the workspace:

  • Create the file ecfatcentral.maven in the cnf folder
  • Add the following content to the file
# ECF
org.eclipse.platform:org.eclipse.core.jobs:3.12.0
org.eclipse.platform:org.eclipse.equinox.common:3.15.100
org.eclipse.platform:org.eclipse.equinox.concurrent:1.2.100
org.eclipse.ecf:org.eclipse.ecf:3.10.0
org.eclipse.ecf:org.eclipse.ecf.console:1.3.100
org.eclipse.ecf:org.eclipse.ecf.discovery:5.1.1
org.eclipse.ecf:org.eclipse.ecf.identity:3.9.402
org.eclipse.ecf:org.eclipse.ecf.osgi.services.distribution:2.1.600
org.eclipse.ecf:org.eclipse.ecf.osgi.services.remoteserviceadmin:4.9.3
org.eclipse.ecf:org.eclipse.ecf.osgi.services.remoteserviceadmin.console:1.3.0
org.eclipse.ecf:org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy:1.0.101
org.eclipse.ecf:org.eclipse.ecf.remoteservice.asyncproxy:2.1.200
org.eclipse.ecf:org.eclipse.ecf.remoteservice:8.14.0
org.eclipse.ecf:org.eclipse.ecf.sharedobject:2.6.200
org.eclipse.ecf:org.eclipse.osgi.services.remoteserviceadmin:1.6.300

# ECF Discovery Zeroconf
org.eclipse.ecf:org.eclipse.ecf.provider.jmdns:4.3.301

# ECF Distribution Provider - Generic
org.eclipse.ecf:org.eclipse.ecf.provider:4.9.1
org.eclipse.ecf:org.eclipse.ecf.provider.remoteservice:4.6.1

There are of course more artifacts provided by ECF. But for this example we keep it to the minimum needed.

Note:
Since the ECF artifacts are available on Maven Central, you could also simply edit the existing central.maven file and add the ECF artifacts there, but for a better separation we split them out here.

Now add the created ecfatcentral.maven file to the workspace build:

  • Open the cnf/build.bnd file
  • Add the following instruction
-plugin.10.ECFATCENTRAL: \
    aQute.bnd.repository.maven.provider.MavenBndRepository; \
        releaseUrl=https://repo.maven.apache.org/maven2/; \
        index=${.}/ecfatcentral.maven; \
        name="ECF Remote Services"

Bndtools also provides the option to include a p2 repository directly as explained here. To use the ECF p2 repository directly add the following instruction to the build.bnd file instead:

-plugin.11.p2: \
    aQute.bnd.repository.p2.provider.P2Repository; \
    url = https://download.eclipse.org/rt/ecf/3.14.31/site.p2; \
    name = ECF Remote Services p2

Note:
If the newly added repositories do not show up in the Repositories view (bottom left in the default Bndtools Perspective), click on Reload workspace in the Bndtools Explorer (the circle arrows in the upper left corner).

Ensure that you have switched to the Bndtools Perspective for the following steps.

Service Interface

  • Create the Service API project
    • File -> New -> Bnd OSGi Project
    • Select the template
      • ECF Templates: Remote Service Project Templates -> Remote Service API Project
      • Bnd Templates: OSGi Release 7 Templates -> API Project
    • Click Next
    • Set name to org.fipro.modifier.api
    • Set JRE to JavaSE-11
    • Click Finish
  • On the New module-info.java dialog select Don’t Create
    Otherwise you will see compile errors, as the OSGi annotations are not resolvable, and you would need to edit the module-info.java file to solve this (or delete the module-info.java file).
  • Double check that the necessary export configurations are correctly specified via Bundle Annotations in the file package-info.java
  • Delete the created example files HelloService.java or ExampleConsumerInterface.java and ExampleProviderInterface.java
  • Copy the following interface StringModifier in the package org.fipro.modifier.api
package org.fipro.modifier.api;

public interface StringModifier {
    String modify(String input);
}

Service Implementation

  • Create the Service Implementation project
    • File -> New -> Bnd OSGi Project
    • Select the template
      • ECF Templates: Remote Service Project Templates -> Remote Service Impl Project
      • Bnd Templates: OSGi Release 7 Templates -> Component Development
    • Click Next
    • Set name to org.fipro.modifier.inverter
    • Set JRE to JavaSE-11
    • Using the ECF Template:
      • Click Next
      • Set api_package to org.fipro.modifier.api
      • Set service_exported_config to ecf.generic.server
    • Click Finish
  • On the New module-info.java dialog select Don’t Create
  • Delete the created example files HelloServiceImpl.java or Example.java
  • Using the Bnd Templates:
    • Open the file bnd.bnd
      • Switch to the Build tab
      • Add org.fipro.modifier.api to the Build Path via the green plus icon
      • Save
  • Copy the following class StringInverter into the package org.fipro.modifier.inverter
package org.fipro.modifier.inverter;

import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;

@Component(property= {
    "service.exported.interfaces=*",
    "service.exported.configs=ecf.generic.server" }
)
public class StringInverter implements StringModifier {

    @Override
    public String modify(String input) {
        return (input != null)
            ? new StringBuilder(input).reverse().toString()
            : "No input given";
    }
}

The only thing that needs to be done in addition, compared to creating a local OSGi service, is to configure that the service should be exported as a Remote Service. This is done by setting the component property service.exported.interfaces. The value of this property needs to be a list of interface types under which the service should be exported. For a simple use case like the above, the asterisk can be used, which means to export the service for all interfaces under which it is registered, but to ignore the classes. For more detailed information have a look at the Remote Service Properties section of the OSGi Compendium Specification.

The other component property used in the above example is service.exported.configs. This property is used to specify the configuration types, for which the Distribution Provider should create Endpoints. If it is not specified, the Distribution Provider is free to choose the default configuration type for the service.

Note:
In the above example we use the ECF Generic Provider. By default it chooses an SSL configuration type, so the example would not work without additional configuration if we did not specify the configuration type explicitly.

Additionally you can specify Intents via the service.exported.intents component property to constrain the possible communication mechanisms that a distribution provider can choose to distribute a service. An example will be provided at a later step.

Service Provider Runtime

  • Create the Service Application project
    • File -> New -> Bnd OSGi Project
    • Select the template OSGi Release 7 Templates -> Application Project
    • Click Next
    • Set name to org.fipro.modifier.inverter.app
    • Set JRE to JavaSE-11
    • Click Finish
    • On the New module-info.java dialog select Don’t Create
  • Open the file org.fipro.modifier.inverter.app.bndrun
    • Add the following bundles to the Run Requirements
      • org.fipro.modifier.inverter
      • org.eclipse.ecf.osgi.services.distribution
      • org.eclipse.ecf.provider.jmdns
      • org.eclipse.ecf.provider.remoteservice
    • Remove the following bundle from the Run Requirements
      • org.fipro.modifier.inverter.app
        (for this example we do not add any configuration, so the bundle and its dependencies are not needed)
    • Set Execution Env.: JavaSE-11
  • Save the changes
  • Click on Resolve
  • Accept the result in the opening dialog via Update

Now you can start the org.fipro.modifier.inverter.app via the Run OSGi button in the upper right corner of the editor. With the console bundles in the Run Requirements the console will be available, apart from that you won’t see anything now.

Note:
The creation of a dedicated application project is not mandatory, but a recommended best practice to separate the application runtime from the service implementation. Especially if you consider that an application typically consists of several services, it doesn’t make much sense to have the launch configuration in one service bundle project. For this tutorial and for testing you can of course also edit the .bndrun file in the Service Implementation project.

Note:
If you used the ECF Project Templates to create the Service Implementation project, you will find two pre-configured .bndrun files in the project root that can be used to start the Service Provider Runtime. Open the file org.fipro.modifier.inverter.zeroconf.generic.bndrun and click on Resolve to calculate the Run Bundles. Once the result is accepted via Update in the dialog, the Service Provider Runtime can be started via Run OSGi.

Service Consumer

The implementation of a Remote Service Consumer is also quite simple. From the development perspective there is nothing special to consider. The service consumer is implemented without any additions. Only the runtime needs to be extended to contain the necessary bundles for Discovery and Distribution.

The simplest way of implementing a service consumer is a Gogo Shell command.

  • Create the Service Consumer project
    • File -> New -> Bnd OSGi Project
    • Select the template
      • ECF Templates: Remote Service Project Templates -> Remote Service Consumer Project
      • Bnd Templates: OSGi Release 7 Templates -> Component Development
    • Click Next
    • Set name to org.fipro.modifier.client
    • Set JRE to JavaSE-11
    • Using the ECF Template:
      • Click Next
      • Set api_package to org.fipro.modifier.api
    • Click Finish
  • On the New module-info.java dialog select Don’t Create
  • Using the Bnd Templates:
    • Open the file bnd.bnd
      • Switch to the Build tab
      • Add org.fipro.modifier.api to the Build Path via the green plus icon
      • Save
  • Delete the created files HelloServiceConsumer.java or Example.java
  • Copy the following class ModifyCommand into the package org.fipro.modifier.client
package org.fipro.modifier.client;

import java.util.List;

import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    property= {
        "osgi.command.scope:String=fipro",
        "osgi.command.function:String=modify"},
    service=ModifyCommand.class
)
public class ModifyCommand {

    @Reference
    volatile List<StringModifier> modifier;

    public void modify(String input) {
        if (modifier.isEmpty()) {
            System.out.println("No StringModifier registered");
        } else {
            modifier.forEach(m -> System.out.println(m.modify(input)));
        }
    }
}

Service Consumer Runtime

  • Create the Client Application project
    • File -> New -> Bnd OSGi Project
    • Select the template OSGi Release 7 Templates -> Application Project
    • Click Next
    • Set name to org.fipro.modifier.client.app
    • Set JRE to JavaSE-11
    • Click Finish
    • On the New module-info.java dialog select Don’t Create
  • Open the file org.fipro.modifier.client.app.bndrun
    • Add the following bundles to the Run Requirements
      • org.apache.felix.gogo.shell
      • org.apache.felix.gogo.command
      • org.fipro.modifier.client
      • org.eclipse.ecf.osgi.services.distribution
      • org.eclipse.ecf.provider.jmdns
      • org.eclipse.ecf.provider.remoteservice
      • org.eclipse.equinox.event
    • Remove the following bundle from the Run Requirements
      • org.fipro.modifier.client.app
    • Set Execution Env.: JavaSE-11
  • Save the changes
  • Click on Resolve
  • Accept the result in the opening dialog via Update

If you now click on Run OSGi on the Run tab of the editor, the Gogo Shell becomes available in the Console view of the IDE. Once the application is started you can execute the created Gogo Shell command via

modify <input>

If services are available, it will print out the modified results. Otherwise the message “No StringModifier registered” will be printed.
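To illustrate why multiple lines can be printed, the core of the ModifyCommand can be simulated in plain Java, with two local lambdas standing in for two discovered remote StringModifier services (a sketch without OSGi; the second modifier is invented for the example):

```java
import java.util.List;

class ModifyCommandDemo {

    interface StringModifier {
        String modify(String input);
    }

    public static void main(String[] args) {
        // Two "services" standing in for two remote StringModifier
        // instances bound to the volatile List reference of the command.
        List<StringModifier> modifier = List.of(
            s -> new StringBuilder(s).reverse().toString(), // like the inverter
            s -> s.toUpperCase());                          // hypothetical second service

        String input = "hello";
        if (modifier.isEmpty()) {
            System.out.println("No StringModifier registered");
        } else {
            modifier.forEach(m -> System.out.println(m.modify(input)));
        }
    }
}
```

With both runtimes up and discovery working, each bound remote service contributes one line of output for a single `modify` call.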

Note:
If you used the ECF Project Templates to create the Service Consumer project, you will find two pre-configured .bndrun files in the project root that can be used to start the Service Consumer Runtime. Open the file org.fipro.modifier.client.zeroconf.generic.bndrun and click on Resolve to calculate the Run Bundles. Once the result is accepted via Update in the dialog, the Service Consumer Runtime can be started via Run OSGi.

Remote Service Admin Events

The Remote Service Admin fires several events when Remote Services are imported or exported. These events are delivered synchronously to registered listeners the moment they happen, and additionally posted asynchronously via the OSGi Event Admin under the topic

org/osgi/service/remoteserviceadmin/<type>

where <type> can be one of the following:

  • EXPORT_ERROR
  • EXPORT_REGISTRATION
  • EXPORT_UNREGISTRATION
  • EXPORT_UPDATE
  • EXPORT_WARNING
  • IMPORT_ERROR
  • IMPORT_REGISTRATION
  • IMPORT_UNREGISTRATION
  • IMPORT_UPDATE
  • IMPORT_WARNING

A simple event listener that prints to the console on any Remote Service Admin Event could look like this:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;

@Component(property = EventConstants.EVENT_TOPIC + "=org/osgi/service/remoteserviceadmin/*")
public class RemoteServiceEventListener implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println(event.getTopic());
        for (String objectClass : ((String[]) event.getProperty("objectClass"))) {
            System.out.println("\t" + objectClass);
        }
    }
}

For further details on the Remote Service Admin Events have a look at the OSGi Compendium Specification Chapter 122.7.

If you need to react synchronously on these events, you can implement a RemoteServiceAdminListener. I typically would not recommend this, unless you really need blocking calls on import/export events, as it is mainly intended to be used internally by the Remote Service Admin. For debugging purposes however, the ECF project provides a DebugRemoteServiceAdminListener. It writes the endpoint description via a Writer to support debugging of Remote Services. Via the following class you can easily register a DebugRemoteServiceAdminListener via OSGi DS that prints the information to the console.

@Component
public class DebugListener
    extends DebugRemoteServiceAdminListener
    implements RemoteServiceAdminListener {
    // register the DebugRemoteServiceAdminListener via DS
}

To test this you can either add the above components to one of the existing bundles, or create a new bundle and add that bundle to the runtimes.

Runtime Debugging

The ECF project provides several ways for runtime inspection and runtime debugging. This is mainly done via Gogo Shell commands provided in separate bundles. To enable the OSGi console and the ECF console commands, you need to add the following bundles to your runtime:

  • OSGi Console
    • org.apache.felix.gogo.command
    • org.apache.felix.gogo.runtime
    • org.apache.felix.gogo.shell
  • ECF Console
    • org.eclipse.ecf.console
    • org.eclipse.ecf.osgi.services.remoteserviceadmin.console

With the ECF Console bundles added to the runtime, there are several commands to inspect and interact with the Remote Service Admin. As an overview the available commands are listed in the wiki:
Gogo Commands for Remote Services Development

Additionally the DebugRemoteServiceAdminListener described above is activated by default with the ECF Console bundles. It can be activated or deactivated in the runtime via the command

ecf:rsadebug <true/false>

JAX-RS Distribution Provider

One of the biggest issues I faced when working with Remote Services is networking, as mentioned in the introduction. In the above example the ECF Generic Distribution Provider is used for a simpler setup. But in a corporate network with firewalls enabled somewhere in the network setup, for example, the example will probably not work. As said before, the ECF project provides multiple Distribution Provider implementations, which gives you the opportunity to configure the setup to match the project needs. One interesting implementation in that area is the JAX-RS Distribution Provider. Using it can help solve several of the networking issues related to firewalls. But as with the whole Remote Services topic, the complexity of the setup is quite high because of the increased number of dependencies that need to be resolved.

The JAX-RS Distribution Provider implementation is available for Eclipse Jersey and Apache CXF. It uses the OSGi HttpService to register the JAX-RS resource, and of course it then also needs a Servlet container like Eclipse Jetty to provide the JAX-RS resource. I will show the usage of the Jersey based implementation in the following sections.

Project Setup

Unfortunately the JAX-RS Distribution Provider is not available via Maven Central. As Bndtools supports p2 repositories, we can add the one from GitHub to make it available in our workspace. The p2 support can only add a whole repository, so you will see everything from that p2 repository in the workspace. But as the artifacts are not available on Maven Central, the only other option would be to download them locally and place them in a local structure (which is actually what the ECF Bndtools Workspace Template does). If you used the ECF Bndtools Workspace Template, the JAX-RS Distribution Provider and its dependencies are already available in the workspace, and no additional steps are necessary to consume them.

If you have chosen the manual project setup, I recommend using the p2 repository:

  • Open the cnf/build.bnd file
    • Switch to the Source tab
    • Add the following instruction to add the JAX-RS Distribution Provider p2 repository
-plugin.12.p2: \
    aQute.bnd.repository.p2.provider.P2Repository; \
    url = https://raw.githubusercontent.com/ECF/JaxRSProviders/master/build/; \
    name = ECF JAX-RS Distribution Provider p2

Additionally we need a server that publishes the JAX-RS resource. We will use a Jetty server.

  • Open the cnf/central.maven file
    • Add the following GAV coordinates to that file
org.apache.felix:org.apache.felix.http.jetty:4.1.14

Note:
The ECF Bndtools Workspace Template used a local repository approach in the past. That means the artifacts were physically located in subfolders of the cnf directory. To update them you had to download the artifacts from the respective GitHub repositories and add/replace the jars in the local repository structure. This was recently changed to also make use of the p2 repository support. If you created an ECF Bndtools Workspace in the past, you might want to check whether the usage of p2 repositories could improve your project setup.

Note:
The local repository approach and its limitation with regards to updates can also be seen as an advantage. The JAX-RS Distribution Provider is not yet officially released and published, so the p2 update site is generic, and if the libraries are updated there, the updates will be directly consumed on a workspace update. Anyhow, I personally don’t like having jars locally in my Bnd OSGi Workspace, as these artifacts also need to be checked into the repository. I’d rather configure the remote repositories and go into “offline mode” in case I have to work without an internet connection.

JAX-RS Remote Service Implementation

The implementation of the service already looks different compared to what you have seen so far. Instead of only adding the necessary Component Properties to configure the service as a Remote Service, the service implementation directly contains the JAX-RS annotations. That of course also means that the annotations need to be available on the Build Path.

  • Create the Service Implementation project
    • File -> New -> Bnd OSGi Project
    • Select the template
      • ECF Templates: Remote Service Project Templates -> JaxRS Remote Service Hello Impl Project
      • Bnd Templates: OSGi Release 7 Templates -> Component Development
    • Click Next
    • Set name to org.fipro.modifier.uppercase
    • Set JRE to JavaSE-11
    • Using the ECF Template:
      • Click Next
      • Set api_package to org.fipro.modifier.api
    • Click Finish
    • On the New module-info.java dialog select Don’t Create
  • Delete the file HelloWorldResource.java or Example.java
  • Using the Bnd Templates
    • Open the file bnd.bnd
      • Switch to the Build tab
      • Add the following bundles to the Build Path via the green plus icon
        • org.fipro.modifier.api
        • jakarta.ws.rs-api
      • Save
  • Copy the following UppercaseModifier snippet into the project
package org.fipro.modifier.uppercase;

import java.util.Locale;

import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

//The JAX-RS path annotation for this service
@Path("/modify")
//The OSGi DS component annotation
@Component(
    immediate = true,
    property = { 
        "service.exported.interfaces=*",
        "service.exported.intents=jaxrs"})
public class UppercaseModifier implements StringModifier {

    @GET
    // The JAX-RS annotation to specify the result type
    @Produces(MediaType.TEXT_PLAIN)
    // The JAX-RS annotation to specify that the last part
    // of the URL is used as method parameter
    @Path("/{value}")
    @Override
    public String modify(@PathParam("value") String input) {
        return (input != null)
            ? input.toUpperCase(Locale.getDefault())
            : "No input given";
    }
}

For the JAX-RS annotations, please have a look at the various existing tutorials and blog posts on the internet, for example

About the OSGi DS configuration:

  • The service is an Immediate Component, so it is consumed by the OSGi Http Whiteboard on startup
  • Export all interfaces as Remote Service via service.exported.interfaces=*
  • Configure that JAX-RS is used as communication mechanism by the distribution provider via service.exported.intents=jaxrs

Note:
As mentioned earlier there is a bug in ECF 3.14.26, which is integrated in the Eclipse 2021-12 SimRel repo. The service.exported.intents property is not enough to get the JAX-RS resource registered; additionally it is necessary to set service.exported.configs=ecf.jaxrs.jersey.server to make it work. This was fixed shortly after I reported it and is included in the current ECF 3.14.31 release. The basic idea of the intent configuration is to make the service independent of the underlying JAX-RS Distribution Provider implementation (Jersey vs. Apache CXF).

JAX-RS Jersey Distribution Provider Dependencies

For the JAX-RS Distribution Provider Runtime a lot more dependencies are required. The following list should cover the additional necessary base dependencies:

  • Jackson
    • com.fasterxml.jackson.core.jackson-annotations
    • com.fasterxml.jackson.core.jackson-core
    • com.fasterxml.jackson.core.jackson-databind
    • com.fasterxml.jackson.jaxrs.jackson-jaxrs-base
    • com.fasterxml.jackson.jaxrs.jackson-jaxrs-json-provider
    • com.fasterxml.jackson.module.jackson-module-jaxb-annotations
  • Jersey / Glassfish / Dependencies
    • org.glassfish.hk2.api
    • org.glassfish.hk2.external.aopalliance-repackaged
    • org.glassfish.hk2.external.jakarta.inject
    • org.glassfish.hk2.locator
    • org.glassfish.hk2.osgi-resource-locator
    • org.glassfish.hk2.utils
    • org.glassfish.jersey.containers.jersey-container-servlet
    • org.glassfish.jersey.containers.jersey-container-servlet-core
    • org.glassfish.jersey.core.jersey-client
    • org.glassfish.jersey.core.jersey-common
    • org.glassfish.jersey.core.jersey-server
    • org.glassfish.jersey.ext.jersey-entity-filtering
    • org.glassfish.jersey.inject.jersey-hk2
    • org.glassfish.jersey.media.jersey-media-jaxb
    • org.glassfish.jersey.media.jersey-media-json-jackson
    • com.sun.activation.javax.activation
    • jakarta.annotation-api
    • javax.ws.rs-api
    • jakarta.xml.bind-api
    • javassist
    • javax.validation.api
    • org.slf4j.api

For the Service Provider we need the following dependencies: the JAX-RS Jersey Distribution Provider Server bundles, Jetty as embedded server, and the HTTP Whiteboard:

  • ECF Distribution Provider – JAX-RS Jersey
    • org.eclipse.ecf.provider.jaxrs
    • org.eclipse.ecf.provider.jaxrs.server
    • org.eclipse.ecf.provider.jersey.server
  • Jetty / Http Whiteboard / Http Service
    • org.apache.felix.http.jetty
    • org.apache.felix.http.servlet-api

For the Service Consumer we need the following dependencies, which are the JAX-RS Jersey Distribution Provider Client bundles and the HttpClient to be able to access the JAX-RS resource:

  • ECF Distribution Provider – JAX-RS Jersey
    • org.eclipse.ecf.provider.jaxrs
    • org.eclipse.ecf.provider.jaxrs.client
    • org.eclipse.ecf.provider.jersey.client

Service Provider Runtime

  • Create the Service Application project
    • File -> New -> Bnd OSGi Project
    • Select the template OSGi Release 7 Templates -> Application Project
    • Click Next
    • Set name to org.fipro.modifier.uppercase.app
    • Set JRE to JavaSE-11
    • Click Finish
    • On the New module-info.java dialog select Don’t Create
  • Open the file org.fipro.modifier.uppercase.app.bndrun
    • Add the following bundles to the Run Requirements
      • org.fipro.modifier.uppercase
      • org.eclipse.ecf.osgi.services.distribution
      • org.eclipse.ecf.provider.jmdns
      • org.eclipse.ecf.provider.jersey.server
      • org.apache.felix.http.jetty
      • org.eclipse.equinox.event
    • Remove the following bundle from the Run Requirements
      • org.fipro.modifier.uppercase.app
    • Optional: Add the following console bundles for debugging and inspection
      • org.apache.felix.gogo.command
      • org.apache.felix.gogo.shell
      • org.eclipse.ecf.osgi.services.remoteserviceadmin.console
    • Set Execution Env.: JavaSE-11
    • Add the following property to the OSGi Framework properties:
      • org.osgi.service.http.port=8181
    • Save the changes
    • Click on Resolve
    • Accept the result in the opening dialog via Update

Note:
With the latest version of the JAX-RS Distribution Provider, the .bndrun configuration is much more comfortable than before. There were several improvements to make the definition of a runtime more user friendly, so if you are already familiar with the JAX-RS Distribution Provider and used it in the past, be sure to update it to the latest version to benefit from the latest modifications.

Now you can start the Uppercase JAX-RS Service Runtime from the Overview tab via Launch an Eclipse application. After the runtime is started the service will be available as JAX-RS resource and can be accessed in a browser, e.g. http://localhost:8181/modify/remoteservice

Note:
Unfortunately with the above setup, you will see a 404 instead of the service result. It seems that with Jetty 9 the base URL does not work for Remote Services. Maybe it is only a configuration issue that I was not able to solve as part of this tutorial. There are two options to handle this issue: either configure additional path segments or use Jetty 10.

Note:
Don’t worry if you see a SelectContainerException in the console. It is only informational and tells you that the service from the first part of the tutorial cannot be imported into the runtime of this part of the tutorial, and vice versa. The first service is distributed via the Generic Provider, while the second service is distributed by the JAX-RS Provider. But both are using the JmDNS Discovery Provider.

The URL path is defined via the JAX-RS annotations, “modify” via @Path("/modify") on the class, “remoteservice” is the path parameter defined via @Path("/{value}") on the method (if you change that value, the result will change accordingly). You can extend the URL via configurations shown below:

  • Add a prefix URL path segment on runtime level:
    Add the following property to the OSGi Framework properties
    ecf.jaxrs.server.pathPrefix=<value>
    (e.g. ecf.jaxrs.server.pathPrefix=/services)
  • Add a leading URL path segment on service level:
    Add the following component property to the @Component annotation
    ecf.jaxrs.server.pathPrefix=<value>
    e.g.
@Component(
    immediate = true,
    property = {
        "service.exported.interfaces=*",
        "service.exported.intents=jaxrs",
        "ecf.jaxrs.server.pathPrefix=/upper"})

If all of the above configurations are added, the new URL to the service is, e.g. http://localhost:8181/services/upper/modify/remoteservice
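Put differently, the final URL is the concatenation of the framework-level prefix, the component-level prefix, the class-level @Path value and the method-level path parameter. The composition can be sketched as follows (illustrative only, this is not ECF's actual code; the helper name joinPath is made up):

```java
public class UrlComposition {

    // Joins path segments onto a base URL, inserting a slash where a
    // segment does not already start with one (illustrative helper).
    static String joinPath(String base, String... segments) {
        StringBuilder url = new StringBuilder(base);
        for (String segment : segments) {
            if (!segment.startsWith("/")) {
                url.append('/');
            }
            url.append(segment);
        }
        return url.toString();
    }

    public static void main(String[] args) {
        String url = joinPath(
            "http://localhost:8181",
            "/services",      // ecf.jaxrs.server.pathPrefix (framework property)
            "/upper",         // ecf.jaxrs.server.pathPrefix (component property)
            "/modify",        // @Path on the class
            "remoteservice"); // matches @Path("/{value}") on the method
        // prints http://localhost:8181/services/upper/modify/remoteservice
        System.out.println(url);
    }
}
```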

Additional information about available component properties can be found here: Jersey Service Properties

Service Provider Runtime – Jetty 10

With the above setup the bundle org.apache.felix.http.jetty is integrated in the runtime. That bundle combines the following:

  • OSGi Http Service
  • OSGi Http Whiteboard
  • Jetty 9

This makes the integration very easy. If you want to update to Jetty 10 the setup is more complicated, as that is not available as combined Felix bundle. In that case you need the following bundles:

  • Jetty 10
    • org.eclipse.jetty.http
    • org.eclipse.jetty.io
    • org.eclipse.jetty.security
    • org.eclipse.jetty.server
    • org.eclipse.jetty.servlet
    • org.eclipse.jetty.util
    • org.eclipse.jetty.util.ajax
  • OSGi Http Service and Http Whiteboard (Equinox / Jetty)
    • org.eclipse.equinox.http.jetty
    • org.eclipse.equinox.http.servlet
  • OSGi Service Interfaces
    • org.eclipse.osgi.services

First you need to add the necessary artifacts to the workspace:

  • Open the cnf/central.maven file
    • Add the following GAV coordinates to that file
org.eclipse.platform:org.eclipse.osgi.services:jar:3.10.200
org.eclipse.platform:org.eclipse.equinox.http.jetty:jar:3.8.100
org.eclipse.platform:org.eclipse.equinox.http.servlet:jar:1.7.200

org.eclipse.jetty:jetty-http:jar:10.0.8
org.eclipse.jetty:jetty-io:jar:10.0.8
org.eclipse.jetty:jetty-security:jar:10.0.8
org.eclipse.jetty:jetty-server:jar:10.0.8
org.eclipse.jetty:jetty-servlet:jar:10.0.8
org.eclipse.jetty:jetty-util:jar:10.0.8
org.eclipse.jetty:jetty-util-ajax:jar:10.0.8

jakarta.servlet:jakarta.servlet-api:jar:4.0.4

After that you can create a new Service Provider Runtime project that includes Jetty 10:

  • Create the Service Application project
    • File -> New -> Bnd OSGi Project
    • Select the template OSGi Release 7 Templates -> Application Project
    • Click Next
    • Set name to org.fipro.modifier.uppercase.app.jetty10
    • Set JRE to JavaSE-11
    • Click Finish
    • On the New module-info.java dialog select Don’t Create
  • Open the file org.fipro.modifier.uppercase.app.jetty10.bndrun
    • Add the following bundles to the Run Requirements
      • org.fipro.modifier.uppercase
      • org.eclipse.ecf.osgi.services.distribution
      • org.eclipse.ecf.provider.jmdns
      • org.eclipse.ecf.provider.jersey.server
      • org.eclipse.equinox.http.jetty
      • org.eclipse.equinox.http.servlet
      • org.eclipse.jetty.http
      • org.eclipse.jetty.io
      • org.eclipse.jetty.security
      • org.eclipse.jetty.server
      • org.eclipse.jetty.servlet
      • org.eclipse.jetty.util
      • org.eclipse.jetty.util.ajax
      • org.eclipse.equinox.event
    • Remove the following bundle from the Run Requirements
      • org.fipro.modifier.uppercase.app.jetty10
    • Add org.apache.felix.http.jetty to the Run Blacklist
      (this is necessary to avoid that this bundle is used by the resolve step)
    • Optional: Add the following console bundles for debugging and inspection
      • org.apache.felix.gogo.command
      • org.apache.felix.gogo.shell
      • org.eclipse.ecf.osgi.services.remoteserviceadmin.console
    • Set Execution Env.: JavaSE-11
    • Add the following property to the OSGi Framework properties:
      • org.osgi.service.http.port=8181
      • launch.activation.eager=true
    • Save the changes
    • Click on Resolve
    • Accept the result in the opening dialog via Update

Note:
The OSGi Framework property launch.activation.eager=true is necessary because of the activation policy set in the Equinox Jetty Http Service bundle. It is configured to be activated lazily, which means it will only be activated if someone requests something from that bundle. But as Equinox collects all OSGi service interfaces in org.eclipse.osgi.services, nobody will ever request anything from that bundle, which leaves it in the STARTING state forever. With the launch.activation.eager property, lazy activation is ignored and all bundles are simply started. Bug 530076 was created to discuss if the lazy activation could be dropped.

Note:
Unfortunately you cannot include the org.apache.felix.webconsole in a Jetty 10 runtime. The reason is the Servlet API version dependency of the webconsole. org.apache.felix.webconsole requires javax.servlet;version="[2.4,4)" even in its latest version, while org.eclipse.jetty.servlet requires javax.servlet;version="[4.0.0,5)". So if you want to use the webconsole in your JAX-RS Remote Service, you need to stick with Jetty 9.

Note:
It is currently not possible to use Jetty 11 for OSGi development, as the OSGi implementations are not updated to the jakarta namespace.

For an overview on the Jetty versions and dependencies, have a look at the Jetty Downloads page.

Service Consumer Runtime

To consume the Remote Service provided via JAX-RS Distribution Provider, the runtime needs to be extended to include the additional dependencies:

  • Open the file org.fipro.modifier.client.app.bndrun
    • Add the following bundle to the Run Requirements
      • org.eclipse.ecf.provider.jersey.client
    • Save the changes
    • Click on Resolve to update the Run Bundles

If you now start the Service Consumer Runtime and have the Service Provider Runtime also running, you can execute the following command

modify jax

This will actually lead to an error if you followed my tutorial step by step:

ServiceException: Service exception on remote service proxy

The reason is that the Service Interface does not contain the JAX-RS annotations like the service implementation does, and therefore the mapping is not working. So while the interface does not need to be modified for providing the service, it has to be for the consumer side.

Extend the Service Interface

  • Open the file org.fipro.modifier.api/bnd.bnd
  • Switch to the Build tab
    • Add the following bundle to the Build Path via the green plus icon
      • jakarta.ws.rs-api
  • Open the StringModifier class and add the JAX-RS annotations to be exactly the same as for the Service Implementation
package org.fipro.modifier.api;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/modify")
public interface StringModifier {
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    @Path("/{value}")
    String modify(@PathParam("value") String input);
}

If you now start the Uppercase Service Provider Runtime and the Service Consumer Runtime again, the error should be gone and you should see the expected result.

Update the “Inverter” Service Provider Runtime

After the Service Interface was extended to include the JAX-RS annotations, the first Service Provider Runtime will not resolve anymore because of missing dependencies. To fix this:

  • Open the file org.fipro.modifier.inverter.app.bndrun
    • Click on Resolve to update the Run Bundles

Now you can start that Service Provider Runtime again. If the other Service Provider and the Service Consumer are also active, executing the modify command will now output the result of both services.

Endpoint Description Extender Format (EDEF)

In the tutorial we used JmDNS/Zeroconf as Discovery Provider. This way there is not much we have to do as a developer or administrator apart from adding the according bundle to the runtime. This kind of Discovery uses a broadcast mechanism to announce the service in the network. In cases where this doesn’t work, e.g. because firewall rules block broadcasting, it is also possible to use a static file-based discovery. This can be done using the Endpoint Description Extender Format (EDEF), which is also supported by ECF.

Let’s create an additional service that is distributed via JAX-RS. But this time we exclude the org.eclipse.ecf.provider.jmdns bundle, so there is no additional discovery inside the Service Provider Runtime. We also add the console bundles to be able to inspect the runtime.

Note:
If you don’t want to create another service, you can also modify the previous uppercase service. In that case remove the org.eclipse.ecf.provider.jmdns bundle from the product configuration and ensure that the console bundles are added to be able to inspect the remote service runtime via the OSGi Console.

  • Create the Service Implementation project
    • File -> New -> Bnd OSGi Project
    • Select the template
      • ECF Templates: Remote Service Project Templates -> JaxRS Remote Service Hello Impl Project
      • Bnd Templates: OSGi Release 7 Templates -> Component Development
    • Click Next
    • Set name to org.fipro.modifier.camelcase
    • Set JRE to JavaSE-11
    • Using the ECF Template:
      • Click Next
      • Set api_package to org.fipro.modifier.api
    • Click Finish
    • On the New module-info.java dialog select Don’t Create
  • Delete the file HelloWorldResource.java or Example.java
  • Using the Bnd Templates
    • Open the file bnd.bnd
      • Switch to the Build tab
      • Add the following bundles to the Build Path via the green plus icon
        • org.fipro.modifier.api
        • jakarta.ws.rs-api
      • Save
  • Copy the following CamelCaseModifier snippet into the project
package org.fipro.modifier.camelcase;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;

@Path("/modify")
@Component(
    immediate = true,
    property = { 
        "service.exported.interfaces=*",
        "service.exported.intents=jaxrs",
        "ecf.jaxrs.server.pathPrefix=/camelcase"})
public class CamelCaseModifier implements StringModifier {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    @Path("/{value}")
    @Override
    public String modify(@PathParam("value") String input) {
        StringBuilder builder = new StringBuilder();
        if (input != null) {
            for (int i = 0; i < input.length(); i++) {
                char currentChar = input.charAt(i);
                if (i % 2 == 0) {
                    builder.append(Character.toUpperCase(currentChar));
                } else {
                    builder.append(Character.toLowerCase(currentChar));
                }
            }
        } else {
            builder.append("No input given");
        }
        return builder.toString();
    }
}
  • Create the Service Application project
    • File -> New -> Bnd OSGi Project
    • Select the template OSGi Release 7 Templates -> Application Project
    • Click Next
    • Set name to org.fipro.modifier.camelcase.app
    • Set JRE to JavaSE-11
    • Click Finish
    • On the New module-info.java dialog select Don’t Create
  • Open the file org.fipro.modifier.camelcase.app.bndrun
    • Add the following bundles to the Run Requirements
      • org.apache.felix.gogo.shell
      • org.apache.felix.gogo.command
      • org.fipro.modifier.camelcase
      • org.eclipse.ecf.osgi.services.distribution
      • org.eclipse.ecf.provider.jersey.server
      • org.apache.felix.http.jetty
      • org.eclipse.ecf.osgi.services.remoteserviceadmin.console
    • Remove the following bundle from the Run Requirements
      • org.fipro.modifier.camelcase.app
    • Set Execution Env.: JavaSE-11
    • Add the following properties to the OSGi Framework properties:
      • org.osgi.service.http.port=8282
      • ecf.jaxrs.server.pathPrefix=/services
    • Save the changes
    • Click on Resolve
    • Accept the result in the opening dialog via Update

Once the runtime is started via Run OSGi the service should be available via http://localhost:8282/services/camelcase/modify/remoteservice

You probably noticed a console output on startup that shows the Endpoint Description XML. This is actually what we need for the EDEF file. You can also get the endpoint description at runtime via the ECF Gogo Command listexports <endpoint.id>:

osgi> listexports
endpoint.id                          |Exporting Container ID                       |Exported Service Id
5918da3a-a971-429f-9ff6-87abc70d4742 |http://localhost:8282/services/camelcase     |38

osgi> listexports 5918da3a-a971-429f-9ff6-87abc70d4742
<endpoint-descriptions xmlns="http://www.osgi.org/xmlns/rsa/v1.0.0">
  <endpoint-description>
    <property name="ecf.endpoint.id" value-type="String" value="http://localhost:8282/services/camelcase"/>
    <property name="ecf.endpoint.id.ns" value-type="String" value="ecf.namespace.jaxrs"/>
    <property name="ecf.endpoint.ts" value-type="Long" value="1642667915518"/>
    <property name="ecf.jaxrs.server.pathPrefix" value-type="String" value="/camelcase"/>
    <property name="ecf.rsvc.id" value-type="Long" value="1"/>
    <property name="endpoint.framework.uuid" value-type="String" value="80778aff-63c7-448d-92a5-7902eb6782ae"/>
    <property name="endpoint.id" value-type="String" value="5918da3a-a971-429f-9ff6-87abc70d4742"/>
    <property name="endpoint.package.version.org.fipro.modifier.api" value-type="String" value="1.0.0"/>
    <property name="endpoint.service.id" value-type="Long" value="38"/>
    <property name="objectClass" value-type="String">
      <array>
        <value>org.fipro.modifier.api.StringModifier</value>
      </array>
    </property>
    <property name="remote.configs.supported" value-type="String">
      <array>
        <value>ecf.jaxrs.jersey.server</value>
      </array>
    </property>
    <property name="remote.intents.supported" value-type="String">
      <array>
        <value>passByValue</value>
        <value>exactlyOnce</value>
        <value>ordered</value>
        <value>osgi.async</value>
        <value>osgi.private</value>
        <value>osgi.confidential</value>
        <value>jaxrs</value>
      </array>
    </property>
    <property name="service.imported" value-type="String" value="true"/>
    <property name="service.imported.configs" value-type="String">
      <array>
        <value>ecf.jaxrs.jersey.server</value>
      </array>
    </property>
    <property name="service.intents" value-type="String">
      <array>
        <value>jaxrs</value>
      </array>
    </property>
  </endpoint-description>
</endpoint-descriptions>

The endpoint description is needed by the Service Consumer to discover the new service. Without a Discovery that broadcasts, the service needs to be discovered statically via an EDEF file. As the EDEF file is registered via a manifest header, we create a new bundle. You could also place it in an existing bundle like org.fipro.modifier.client, but for some more OSGi dynamics fun we use a separate one.
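To see what a consumer extracts from such a file, here is a minimal DOM-based sketch that reads the simple (non-array) property elements of an endpoint description. This is purely illustrative; ECF ships its own EDEF parser, and the class name EdefReader is made up:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class EdefReader {

    // Extracts the single-valued properties of an EDEF document
    // (array-valued properties are skipped for brevity).
    static Map<String, String> readProperties(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Map<String, String> result = new LinkedHashMap<>();
        NodeList properties = doc.getElementsByTagName("property");
        for (int i = 0; i < properties.getLength(); i++) {
            Element property = (Element) properties.item(i);
            if (property.hasAttribute("value")) { // array properties have no value attribute
                result.put(property.getAttribute("name"), property.getAttribute("value"));
            }
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<endpoint-descriptions xmlns=\"http://www.osgi.org/xmlns/rsa/v1.0.0\">"
            + "<endpoint-description>"
            + "<property name=\"ecf.endpoint.id\" value-type=\"String\""
            + " value=\"http://localhost:8282/services/camelcase\"/>"
            + "<property name=\"endpoint.service.id\" value-type=\"Long\" value=\"38\"/>"
            + "</endpoint-description></endpoint-descriptions>";
        System.out.println(readProperties(xml));
    }
}
```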

  • Create the EDEF configuration bundle project
    • File -> New -> Bnd OSGi Project
    • Select the template Bndtools -> Empty
    • Click Next
    • Set name to org.fipro.modifier.client.edef
    • Set JRE to JavaSE-11
    • Click Finish
  • On the New module-info.java dialog select Don’t Create
  • Create a new folder edef
    • Right click on the project -> New -> Folder
    • Set Folder name to edef
    • Click Finish
  • Create a new file camelcase.xml in that folder
    • Right click on the edef folder -> New -> File
    • Set File name to camelcase.xml
  • Copy the Endpoint Description XML from the previous console command execution into that file
  • Open the bnd.bnd file
    • Switch to the Source tab
    • Add the following statements
-includeresource: edef=edef

Remote-Service: edef/camelcase.xml
  • Open the file org.fipro.modifier.client.app.bndrun
    • Add org.fipro.modifier.client.edef to the Run Requirements
    • Save the changes
    • Click on Resolve to update the Run Bundles

If you start the Service Consumer Runtime, the service will directly be available. This is because the new org.fipro.modifier.client.edef bundle is activated automatically by the bnd launcher (a big difference compared to Equinox). Let’s deactivate it via the console. First we need to find the bundle-id via lb and then stop it via stop <bundle-id>. The output should look similar to the following snippet:

g! lb edef
START LEVEL 1
   ID|State      |Level|Name
   50|Active     |    1|org.fipro.modifier.client.edef (0.0.0)|0.0.0

g! stop 50

Now the service becomes unavailable via the modify command. If you start the bundle, the service becomes available again.

ECF Extensions to EDEF

The EDEF specification itself would not be sufficient for productive usage. For example, the values of the endpoint description properties need to match. For the endpoint.id this would be really problematic, as that value is a randomly generated UUID that changes on every runtime start, so a restart of the Service Provider Runtime produces a new endpoint.id value. ECF includes a mechanism to support the discovery and the distribution even if the endpoint.id of the importer and the exporter do not match. This actually makes the EDEF file support work in productive environments.

ECF also provides a mechanism to create an endpoint description using a properties file. All the necessary endpoint description properties need to be included as properties with the respective types and values. The following example shows the properties representation for the EDEF XML of the above example. Note that for endpoint.id and endpoint.framework.uuid the type is set to uuid and the value is 0. This way ECF will generate a random UUID and the matching feature will ensure that the distribution will work even without matching id values.

ecf.endpoint.id=http://localhost:8282/services/camelcase
ecf.endpoint.id.ns=ecf.namespace.jaxrs
ecf.endpoint.ts:Long=1642761763599
ecf.jaxrs.server.pathPrefix=/camelcase
ecf.rsvc.id:Long=1
endpoint.framework.uuid:uuid=0
endpoint.id:uuid=0
endpoint.package.version.org.fipro.modifier.api=1.0.0
endpoint.service.id:Long=38
objectClass:array=org.fipro.modifier.api.StringModifier
remote.configs.supported:array=ecf.jaxrs.jersey.server
remote.intents.supported:array=passByValue,exactlyOnce,ordered,osgi.async,osgi.private,osgi.confidential,jaxrs
service.imported:boolean=true
service.imported.configs:array=ecf.jaxrs.jersey.server
service.intents:array=jaxrs

Properties files can be used to override values in an underlying XML EDEF file, or even as an alternative, so the XML file is not needed anymore. It is even possible to override property values for different environments, which makes this very interesting in a productive setup. So there can be a default properties file for the basic endpoint description, then an endpoint description per service that derives from the basic settings, and even profile-specific settings that change, for example, the ecf.endpoint.id URLs per profile (DEV/INT/PROD). More details on that topic can be found in the ECF Wiki.
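To illustrate how the typed key[:Type]=value lines shown above map to endpoint description values, here is a small sketch of such a parser. This is my own simplified reading of the format, not ECF's implementation; the uuid type is resolved to a freshly generated UUID, mirroring the behavior described above:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;

public class TypedPropertiesParser {

    // Converts a typed value into the corresponding Java object (simplified sketch).
    static Object parseValue(String type, String value) {
        switch (type) {
            case "Long":    return Long.valueOf(value);
            case "boolean": return Boolean.valueOf(value);
            case "array":   return value.split(",");
            case "uuid":    return UUID.randomUUID().toString(); // value is ignored
            default:        return value; // plain String
        }
    }

    // Parses lines of the form key[:Type]=value into a typed property map.
    static Map<String, Object> parse(String... lines) {
        Map<String, Object> result = new LinkedHashMap<>();
        for (String line : lines) {
            int eq = line.indexOf('=');
            String key = line.substring(0, eq);
            String type = "String";
            int colon = key.lastIndexOf(':');
            if (colon >= 0) {
                type = key.substring(colon + 1);
                key = key.substring(0, colon);
            }
            result.put(key, parseValue(type, line.substring(eq + 1)));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> props = parse(
            "ecf.endpoint.id=http://localhost:8282/services/camelcase",
            "endpoint.service.id:Long=38",
            "service.imported:boolean=true",
            "service.intents:array=jaxrs",
            "endpoint.id:uuid=0");
        System.out.println(props.get("endpoint.service.id")); // prints 38
    }
}
```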

Alternatively you can also trigger a remote service import via EDEF programmatically using classes from the org.osgi.service.remoteserviceadmin package (see below). This way it is possible to dynamically import and close remote service registrations at runtime (without operating via low level OSGi bundle operations). The following snippet is an example for the programmatic registration of the service above:

Map<String, Object> properties = new HashMap<>();

properties.put("ecf.endpoint.id", "http://localhost:8282/services/camelcase");
properties.put("ecf.endpoint.id.ns", "ecf.namespace.jaxrs");
properties.put("ecf.endpoint.ts", 1642489801532L);
properties.put("ecf.jaxrs.server.pathPrefix", "/camelcase");
properties.put("ecf.rsvc.id", 1L);
properties.put("endpoint.framework.uuid", "0");
properties.put("endpoint.id", "0");
properties.put("endpoint.package.version.org.fipro.modifier.api", "1.0.0");
properties.put("endpoint.service.id", 38L);
properties.put("objectClass", new String[] { "org.fipro.modifier.api.StringModifier" });
properties.put("remote.configs.supported", new String[] { "ecf.jaxrs.jersey.server" });
properties.put("remote.intents.supported", new String[] { "passByValue", "exactlyOnce", "ordered", "osgi.async", "osgi.private", "osgi.confidential", "jaxrs" });
properties.put("service.imported", "true");
properties.put("service.intents", new String[] { "jaxrs" });
properties.put("service.imported.configs", new String[] { "ecf.jaxrs.jersey.server" });

// 'admin' is an org.osgi.service.remoteserviceadmin.RemoteServiceAdmin
// service instance, e.g. injected via Declarative Services
EndpointDescription desc = new EndpointDescription(properties);
ImportRegistration importRegistration = admin.importService(desc);

Conclusion

The OSGi specification has several chapters and implementations to support a microservice architecture. The Remote Service and Remote Service Admin specifications are among them, and probably the most complicated ones, which several OSGi experts I talked with at conferences confirmed. The specification itself is also not easy to understand, but I hope that this blog post helps to get a better understanding.

While Remote Services are pretty easy to implement, the complicated part is the setup of the runtime by collecting all the necessary bundles. While the ECF project provides several examples and also tries to provide support for better bundle resolving, it is still not a trivial task. I hope this tutorial also helps to make that part a little easier.

Of course at runtime you might face networking issues, as I did in every talk for example. The typical fallacies are even referred to in the Remote Service Specification. With the usage of JAX-RS and HTTP for the distribution of services and EDEF for a static file-based discovery, this might be less problematic. Give them a try if you are running into trouble.

At the end I again want to thank Scott Lewis for his continuous work on ECF and his support whenever I faced issues with my examples and had questions on some details. If you need an extension or if you have other requests regarding ECF or the JAX-RS Distribution Provider, please get in touch with him.

References

Posted in Dirk Fauth, Eclipse, Java, OSGi | Comments Off on Getting Started with OSGi Remote Services – Bndtools Edition

Getting Started with OSGi Remote Services – PDE Edition

At the EclipseCon Europe 2016 I held a tutorial together with Peter Kirschner named Building Nano Services with OSGi Declarative Services. The final exercise should have been the demonstration of OSGi Remote Services. It actually did not really happen because of the lack of time and networking issues. The next year at the EclipseCon Europe 2017 we joined forces again and gave a talk with the name Microservices with OSGi. In that talk we focused on OSGi Remote Services, but we again failed with the demo at the end because of networking issues. At the EclipseCon Europe 2018 I gave a talk on how to use different OSGi specifications for connecting services remotely titled How to connect your OSGi application. Of course I mentioned OSGi Remote Services there, and of course the demonstration failed again because of networking issues.

In the last years I published several blog posts and gave several talks related to OSGi, and often the topic OSGi Remote Services was raised, but never really covered in detail. Scott Lewis, the project lead of the Eclipse Communication Framework, was really helpful whenever I encountered issues with Remote Services. I promised to write a blog post about that topic as a favour for all the support. And with this blog post I finally want to keep my promise. That said, let’s start with OSGi Remote Services.

Motivation

First I want to explain the motivation for having a closer look at OSGi Remote Services. Looking at general software architecture discussions in the past, service oriented architectures and microservices are a huge topic. Per definition the idea of a microservices architecture is to have

  • a suite of small services
  • each running in its own process
  • communicating with a lightweight mechanism, e.g. HTTP
  • independently deployable
  • easy to replace

While new frameworks and tools came up over the years, the OSGi specifications have covered these topics for a long time. Via the service registry and the service dynamics you can build up very small modules. Those modules can then be integrated into small runtimes and deployed in different environments (apart from the required JVM and, if needed, a database). The services in those small independent deployments can then be accessed in different ways, like using the HTTP Whiteboard or JAX-RS Whiteboard. This satisfies the aspect of communication between services via lightweight mechanisms. For inhomogeneous environments the usage of those specifications is a good match. But it means that you need to implement the access layer on the provider side (e.g. the JAX-RS wrapper to access the service via REST) and the service access on the consumer side by using a corresponding framework to execute the REST calls.

Ideally the developer of the service as well as the developer of the service consumer should not need to think about the infrastructure of the whole application. Well, it is always good that everybody in a project knows about everything, but the idea is not to make your code dependent on infrastructure. And this is where OSGi Remote Services come in. You develop the service and the service consumer as if they were executed in the same runtime. In the deployment the lightweight communication will be added to support service communication over a network.

And as initially mentioned, I want to have a look at ways how to probably get rid of the networking issues I faced in the presentations in the past.

Introduction

To understand this blog post you should be familiar with OSGi services and ideally with OSGi Declarative Services. If you are not familiar with OSGi DS, you can get an introduction by reading my blog post Getting Started with OSGi Declarative Services.

In short, the OSGi Service Layer specifies a Service Producer that publishes a service, and a Service Consumer that listens and retrieves a service. This is shown in the following picture:

With OSGi Remote Services this picture is basically the same. The difference is that the services are registered and consumed across network boundaries. For OSGi Remote Services the above picture could be extended to look like the following:

Glossary

To understand the above picture and the following blog post better, here is a short glossary for the used terms:

  • Remote Service (Distributed Service)
    Basic specification to describe how OSGi services can be exported and imported to be available across network boundaries.
  • Distribution Provider
    Exports services by creating endpoints on the producer side, imports services by creating proxies to access endpoints on the consumer side, manages policies around the topology and discovers remote services.
  • Endpoint
    Communication access mechanism to a remote service that requires some protocol for communications.
  • Topology
    Mapping between services and endpoints as well as their communication characteristics.
  • Remote Service Admin (RSA)
    Provides the mechanisms to import and export services through a set of configuration types. It is a passive Distribution Provider, not taking any action to export or import itself.
  • Topology Manager
    Provides the policy for importing and exporting services via RSA and implements a Topology.
  • Discovery
    Discover / announce Endpoint Descriptions via some discovery protocol.
  • Endpoint Description
    A properties based description of an Endpoint that can be exchanged between different frameworks to create connections to each other’s services.

To get a slightly better understanding, the following picture shows some more details inside the Remote Service Implementation block.

Note:
Actually this picture is still a simplified version, as internally there are Endpoint Event Listener and Remote Service Admin Listener that are needed to trigger all the necessary actions. But to get an idea how things play together this picture should be sufficient.

Now let’s explain the picture in more detail:

Service Provider Runtime

  • A service is marked to be exported. This is done via service properties.
  • The Distribution Provider creates an endpoint for the exported service:
    • The Topology Manager gets informed about the exported service.
    • If the export configuration matches the Topology it instructs the Remote Service Admin to create an Endpoint.
    • The Remote Service Admin creates the Endpoint.
  • The Discovery gets informed via Endpoint Event Listener and announces the Endpoint to other systems via Endpoint Description.

Service Consumer Runtime

  • The Discovery discovers an Endpoint via Endpoint Description that was announced in the network.
  • The Distribution Provider creates a proxy for the service.
    • The Topology Manager learns from the Discovery about the newly discovered service (via Endpoint Event Listener), which then instructs the Remote Service Admin to import the service.
    • The Remote Service Admin then creates a local service proxy that is registered as service in the local OSGi runtime. This proxy is mapped to the remote service (or an alternative like a webservice).
  • The service proxy is used for wiring.

To simplify the picture again, the important takeaways are the Distribution Provider and the Discovery. The Distribution Provider is responsible for exporting and importing the service, the Discovery is responsible for announcing and discovering the service. The other terms are needed for a deeper understanding, but for a high level understanding of OSGi Remote Services, these two are sufficient.

Tutorial

Now it is time to get our hands dirty and play with OSGi Remote Services. This tutorial has several steps:

  1. Project Setup
  2. Service Implementation (API & Impl)
  3. Service Provider Runtime
  4. Service Consumer Implementation
  5. Service Consumer Runtime

There are different ways and tools available for OSGi development. In this tutorial I will use the Eclipse PDE Tooling (Plug-in Development Environment). I also published this tutorial with other toolings if you don’t want to use PDE:

Note:
Remember to activate the PDE DS Annotation Processing via Window → Preferences → Plug-in Development → DS Annotations.

ECF – Remote Service Runtime

While the implementation and export of an OSGi service as a Remote Service is trivial at first glance, the definition of the runtime can become quite complicated. Especially collecting the necessary bundles is not easy without some guidance.

As a reference, with Equinox as underlying OSGi framework the following bundles need to be part of the runtime as a basis:

  • Equinox OSGi
    • org.eclipse.osgi
    • org.eclipse.osgi.services
    • org.eclipse.equinox.common
    • org.eclipse.equinox.event
    • org.eclipse.osgi.util
    • org.apache.felix.scr
  • Equinox Console
    • org.apache.felix.gogo.command
    • org.apache.felix.gogo.runtime
    • org.apache.felix.gogo.shell
    • org.eclipse.equinox.console
  • ECF and dependencies
    • org.eclipse.core.jobs
    • org.eclipse.ecf
    • org.eclipse.ecf.discovery
    • org.eclipse.ecf.identity
    • org.eclipse.ecf.osgi.services.distribution
    • org.eclipse.ecf.osgi.services.remoteserviceadmin
    • org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy
    • org.eclipse.ecf.remoteservice
    • org.eclipse.ecf.remoteservice.asyncproxy
    • org.eclipse.ecf.sharedobject
    • org.eclipse.equinox.concurrent
    • org.eclipse.osgi.services.remoteserviceadmin

With the above basic runtime configuration the Remote Services will not yet work. There are still two things missing: the Discovery and the Distribution Provider. ECF provides different implementations for both, and which implementations to use needs to be decided per project. In this tutorial we will use Zeroconf/JmDNS for the Discovery and the Generic Distribution Provider:

  • ECF Discovery – Zeroconf
    • org.eclipse.ecf.provider.jmdns
  • ECF Distribution Provider – Generic
    • org.eclipse.ecf.provider
    • org.eclipse.ecf.provider.remoteservice

Note:
You can find the list of different implementations with the documentation about the bundles, configuration types and intents in the ECF Wiki:

Project Setup

With the Eclipse PDE tooling (Plug-in Development Environment) it is a best practice to create a Target Definition. This way you explicitly specify what to consume for building your application. For this tutorial all needed plug-ins and features are available via p2 update sites, so the creation of the Target Definition is straightforward.

  • Create the target platform project
    • Main Menu → File → New → Project… → General → Project
    • Set name to org.fipro.remoteservice.target
    • Click Finish
  • Create a new target definition
    • Right click on project → New → Other… → Plug-in Development → Target Definition
    • Set the filename to org.fipro.remoteservice.target.target
    • Initialize the target definition with: Nothing: Start with an empty target definition
  • Add a new Software Site in the opened Target Definition Editor by clicking Add… in the Locations section
    • Select Software Site
    • Software Site https://download.eclipse.org/releases/2021-12
    • Uncheck Group by Category
    • Select the following items (use the filter):
      • Eclipse Platform Launcher Executables
      • Equinox Compendium SDK
      • Equinox Core SDK
    • Click Finish
  • Add a new Software Site in the opened Target Definition Editor by clicking Add… in the Locations section
    • Select Software Site
    • Software Site https://download.eclipse.org/rt/ecf/3.14.31/site.p2
    • Uncheck Group by Category
    • Select the ECF Remote Services SDK feature (org.eclipse.ecf.remoteservice.sdk)
    • Click Finish
  • Save the changes
  • Activate the target platform by clicking Set as Target Platform in the upper right corner of the Target Definition Editor

The source of the .target file should look similar to the following snippet, just in case you are using the Generic Text Editor for creating and editing a Target Definition instead of the wizard based PDE Target Definition Editor.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?pde version="3.8"?>
<target name="org.fipro.remoteservice.target">
    <locations>
        <location includeAllPlatforms="false" includeConfigurePhase="true" includeMode="planner" includeSource="true" type="InstallableUnit">
            <repository location="https://download.eclipse.org/releases/2021-12"/>
            <unit id="org.eclipse.equinox.compendium.sdk.feature.group" version="3.22.200.v20211021-1418"/>
            <unit id="org.eclipse.equinox.core.sdk.feature.group" version="3.23.200.v20211104-1730"/>
            <unit id="org.eclipse.equinox.executable.feature.group" version="3.8.1400.v20211117-0650"/>
        </location>
        <location includeAllPlatforms="false" includeConfigurePhase="true" includeMode="planner" includeSource="true" type="InstallableUnit">
            <repository location="https://download.eclipse.org/rt/ecf/3.14.31/site.p2"/>
            <unit id="org.eclipse.ecf.remoteservice.sdk.feature.feature.group" version="3.14.31.v20220116-0708"/>
        </location>
    </locations>
</target>

Note:
The Eclipse SimRel p2 repository https://download.eclipse.org/releases/2021-12 also contains ECF, but in the older version 3.14.26. That version has a bug (which I will mention later) that was fixed with 3.14.31. The current ECF version can be found via the ECF Download page.

After the creation of the Target Platform project, we need to create the Service API project and the Service Implementation project.

Service Interface

  • Create the Service API plug-in project
    • File -> New -> Plug-in Project
    • Set Project name to org.fipro.modifier.api
    • Click Next
    • Use the following settings:
      • Execution Environment: JavaSE-11
      • Uncheck Generate an activator
      • Uncheck This plug-in will make contributions to the UI
      • Create a rich client application? No
    • Click Finish
  • Create a new package org.fipro.modifier.api
  • Copy the following interface StringModifier into the created package
package org.fipro.modifier.api;

public interface StringModifier {
    String modify(String input);
}
  • Open the MANIFEST.MF file
    • on the Overview tab set the Version to 1.0.0.qualifier
    • on the Runtime tab add the org.fipro.modifier.api package to the Exported Packages
      • Specify the version 1.0.0 on the package via Properties…

Service Implementation

  • Create the Service Implementation plug-in project
    • File -> New -> Plug-in Project
    • Set Project name to org.fipro.modifier.inverter
    • Click Next
    • Use the following settings:
      • Execution Environment: JavaSE-11
      • Uncheck Generate an activator
      • Uncheck This plug-in will make contributions to the UI
      • Create a rich client application? No
    • Click Finish
  • Open the MANIFEST.MF file and switch to the Dependencies tab
    • Add the following two dependencies on the Imported Packages side:
      • org.fipro.modifier.api (1.0.0)
      • org.osgi.service.component.annotations (1.3.0)
    • Mark org.osgi.service.component.annotations as Optional via Properties… to ensure there are no runtime dependencies.
    • Add the upper version boundaries to the Import-Package statements by selecting Properties… for both imported packages and specify 2.0.0 as upper bound.
  • Create a new package org.fipro.modifier.inverter
  • Copy the following class StringInverter into the created package
package org.fipro.modifier.inverter;

import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;

@Component(property= {
    "service.exported.interfaces=*",
    "service.exported.configs=ecf.generic.server" }
)
public class StringInverter implements StringModifier {

    @Override
    public String modify(String input) {
        return (input != null)
            ? new StringBuilder(input).reverse().toString()
            : "No input given";
    }
}
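
Independent of the Remote Services machinery, the modify() logic itself is plain Java and can be verified standalone. The following sketch simply extracts the logic from the service above for a quick check (class name and main method are only for illustration):

```java
public class StringInverterCheck {

    // Same logic as StringInverter#modify, extracted for a standalone check.
    static String modify(String input) {
        return (input != null)
            ? new StringBuilder(input).reverse().toString()
            : "No input given";
    }

    public static void main(String[] args) {
        System.out.println(modify("remote")); // prints: etomer
        System.out.println(modify(null));     // prints: No input given
    }
}
```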

Compared to creating a local OSGi service, the only additional thing that needs to be done is to configure that the service should be exported as a Remote Service. This is done by setting the component property service.exported.interfaces. The value of this property needs to be a list of types under which the class is registered as a service. For a simple use case like the above, the asterisk can be used, which means the service is exported for all interfaces under which it is registered, while classes are ignored. For more detailed information, have a look at the Remote Service Properties section of the OSGi Compendium Specification.

The other component property used in the above example is service.exported.configs. This property is used to specify the configuration types, for which the Distribution Provider should create Endpoints. If it is not specified, the Distribution Provider is free to choose the default configuration type for the service.

Note:
In the above example we use the ECF Generic Provider. It chooses an SSL configuration type by default, so if we didn't specify the configuration type, the example would not work without additional configuration.

Additionally you can specify Intents via the service.exported.intents component property to constrain the possible communication mechanisms that a distribution provider can choose to distribute a service. An example will be provided at a later step.
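
For illustration, the properties set via the @Component annotation above end up in the generated DS component description. A hand-written equivalent in DS XML could look roughly like the following sketch (the exact output of the annotation processing may differ):

```xml
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.3.0"
        name="org.fipro.modifier.inverter.StringInverter">
    <implementation class="org.fipro.modifier.inverter.StringInverter"/>
    <service>
        <provide interface="org.fipro.modifier.api.StringModifier"/>
    </service>
    <!-- Remote Service export configuration as component properties -->
    <property name="service.exported.interfaces" type="String" value="*"/>
    <property name="service.exported.configs" type="String" value="ecf.generic.server"/>
</scr:component>
```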

Service Provider Runtime

In a PDE based project you either create a launch configuration or a product configuration. With the latter you are even able to build an executable runtime from the command line via Tycho that you can then deploy.

  • Create a Product Project
    • Main Menu → File → New → Project… → General → Project
    • Set name to org.fipro.modifier.inverter.product
    • Click Finish
  • Create a new Product Configuration
    • Right click on project → New → Other… → Plug-in Development → Product Configuration
    • Click Next
    • Set the filename to org.fipro.modifier.inverter.product
    • Click Finish
  • Configure the product
    • Select the Overview tab
      • Set the General Information
        ID = org.fipro.modifier.inverter.product
        Version = 1.0.0.qualifier
        Check The product includes native launcher artifacts
      • In the Product Definition section leave the Product and Application empty and select The product configuration is based on: plug-ins
    • Select the Contents tab
      • Add the following plug-ins
        • org.apache.felix.scr
        • org.eclipse.core.jobs
        • org.eclipse.ecf
        • org.eclipse.ecf.discovery
        • org.eclipse.ecf.identity
        • org.eclipse.ecf.osgi.services.distribution
        • org.eclipse.ecf.osgi.services.remoteserviceadmin
        • org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy
        • org.eclipse.ecf.provider
        • org.eclipse.ecf.provider.jmdns
        • org.eclipse.ecf.provider.remoteservice
        • org.eclipse.ecf.remoteservice
        • org.eclipse.ecf.remoteservice.asyncproxy
        • org.eclipse.ecf.sharedobject
        • org.eclipse.equinox.common
        • org.eclipse.equinox.concurrent
        • org.eclipse.equinox.event
        • org.eclipse.osgi
        • org.eclipse.osgi.services
        • org.eclipse.osgi.services.remoteserviceadmin
        • org.eclipse.osgi.util
        • org.fipro.modifier.api
        • org.fipro.modifier.inverter
    • Select Configuration tab
      • Add the following bundles to the Start Levels section by clicking the Add… button:
        • org.eclipse.ecf.osgi.services.distribution
        • org.eclipse.ecf.provider.remoteservice
        • org.eclipse.equinox.event
      • Set Auto-Start for every bundle in the Start Levels section to true
    • Select Launching tab
      • Add
        -Declipse.ignoreApp=true -Dosgi.noShutdown=true
        to the VM Arguments
        This skips the attempt to launch an Eclipse application and prevents the OSGi framework from shutting down after an Eclipse application has stopped.

Now you can save the changes and start the Inverter Service Runtime from the Overview tab via Launch an Eclipse application. You won't actually see anything yet, apart from a running process in the background.

Service Consumer

The implementation of a Remote Service Consumer is also quite simple. From the development perspective there is nothing special to consider. The service consumer is implemented without any additions. Only the runtime needs to be extended to contain the necessary bundles for Discovery and Distribution, which is covered in the next section.

The simplest way of implementing a service consumer is a Gogo Shell command.

  • Create the Service Consumer plug-in project
    • File -> New -> Plug-in Project
    • Set Project name to org.fipro.modifier.client
    • Click Next
    • Use the following settings:
      • Execution Environment: JavaSE-11
      • Uncheck Generate an activator
      • Uncheck This plug-in will make contributions to the UI
      • Create a rich client application? No
    • Click Finish
  • Open the MANIFEST.MF file and switch to the Dependencies tab
    • Add the following two dependencies on the Imported Packages side:
      • org.fipro.modifier.api (1.0.0)
      • org.osgi.service.component.annotations (1.3.0)
    • Mark org.osgi.service.component.annotations as Optional via Properties… to ensure there are no runtime dependencies.
    • Add the upper version boundaries to the Import-Package statements by selecting Properties… for both imported packages and specify 2.0.0 as upper bound.
  • Create a new package org.fipro.modifier.client
  • Copy the following class ModifyCommand into the created package
package org.fipro.modifier.client;

import java.util.List;

import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    property= {
        "osgi.command.scope:String=fipro",
        "osgi.command.function:String=modify"},
    service=ModifyCommand.class
)
public class ModifyCommand {

    @Reference
    volatile List<StringModifier> modifier;
	
    public void modify(String input) {
        if (modifier.isEmpty()) {
            System.out.println("No StringModifier registered");
        } else {
            modifier.forEach(m -> System.out.println(m.modify(input)));
        }
    }
}

Service Consumer Runtime

Creating a Product Project with a Product Configuration for the Service Consumer is similar to the Service Runtime. Just change the project and configuration name to org.fipro.modifier.client.product. And of course instead of org.fipro.modifier.inverter you need to add org.fipro.modifier.client and the console bundles to the Contents of the Product Configuration.

  • Create a Product Project
    • Main Menu → File → New → Project… → General → Project
    • Set name to org.fipro.modifier.client.product
    • Click Finish
  • Create a new Product Configuration
    • Right click on project → New → Other… → Plug-in Development → Product Configuration
    • Set the filename to org.fipro.modifier.client.product
  • Configure the product
    • Select the Overview tab
      • Set the General Information
        ID = org.fipro.modifier.client.product
        Version = 1.0.0.qualifier
        Check The product includes native launcher artifacts
      • In the Product Definition section leave the Product and Application empty and select The product configuration is based on: plug-ins
    • Select the Contents tab
      • Add the following plug-ins
        • org.apache.felix.gogo.command
        • org.apache.felix.gogo.runtime
        • org.apache.felix.gogo.shell
        • org.apache.felix.scr
        • org.eclipse.core.jobs
        • org.eclipse.ecf
        • org.eclipse.ecf.discovery
        • org.eclipse.ecf.identity
        • org.eclipse.ecf.osgi.services.distribution
        • org.eclipse.ecf.osgi.services.remoteserviceadmin
        • org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy
        • org.eclipse.ecf.provider
        • org.eclipse.ecf.provider.jmdns
        • org.eclipse.ecf.provider.remoteservice
        • org.eclipse.ecf.remoteservice
        • org.eclipse.ecf.remoteservice.asyncproxy
        • org.eclipse.ecf.sharedobject
        • org.eclipse.equinox.common
        • org.eclipse.equinox.concurrent
        • org.eclipse.equinox.console
        • org.eclipse.equinox.event
        • org.eclipse.osgi
        • org.eclipse.osgi.services
        • org.eclipse.osgi.services.remoteserviceadmin
        • org.eclipse.osgi.util
        • org.fipro.modifier.api
        • org.fipro.modifier.client
    • Select Configuration tab
      • Add the following bundles to the Start Levels section by clicking the Add… button:
        • org.eclipse.ecf.osgi.services.distribution
        • org.eclipse.ecf.provider.remoteservice
        • org.eclipse.equinox.event
      • Set Auto-Start for every bundle in the Start Levels section to true
    • Select Launching tab
      • Add
        -console
        to the Program Arguments
        This activates the OSGi Console in interactive mode.
      • Add
        -Declipse.ignoreApp=true -Dosgi.noShutdown=true
        to the VM Arguments
        This skips the attempt to launch an Eclipse application and prevents the OSGi framework from shutting down after an Eclipse application has stopped.

Now you can save the changes and start the Client Runtime from the Overview tab via Launch an Eclipse application. Once the application is started you can execute the created Gogo Shell command via

modify <input>

If services are available, it will print out the modified results. Otherwise the message “No StringModifier registered” will be printed.

Note:
I have configured the bare minimum of auto-start entries, which should actually start all required bundles based on the bundle configurations and dependencies. If you face any issues, check whether all bundles are Active. Otherwise add additional entries in the Start Levels section.

Remote Service Admin Events

There are several events regarding the import and export of Remote Services that are fired by the Remote Service Admin once they happen. These events are posted asynchronously via the OSGi Event Admin under the topic

org/osgi/service/remoteserviceadmin/<type>

Where <type> can be one of the following:

  • EXPORT_ERROR
  • EXPORT_REGISTRATION
  • EXPORT_UNREGISTRATION
  • EXPORT_UPDATE
  • EXPORT_WARNING
  • IMPORT_ERROR
  • IMPORT_REGISTRATION
  • IMPORT_UNREGISTRATION
  • IMPORT_UPDATE
  • IMPORT_WARNING
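
These topics are plain hierarchical Event Admin topic strings, so the event type is simply the last segment after the fixed prefix. A small (hypothetical) helper illustrating the structure:

```java
public class RsaTopics {

    static final String PREFIX = "org/osgi/service/remoteserviceadmin/";

    // Returns the RSA event type (e.g. "IMPORT_REGISTRATION") encoded in a
    // topic string, or null if the topic is not an RSA event topic.
    static String eventType(String topic) {
        return topic.startsWith(PREFIX) ? topic.substring(PREFIX.length()) : null;
    }

    public static void main(String[] args) {
        System.out.println(eventType("org/osgi/service/remoteserviceadmin/IMPORT_REGISTRATION"));
        // prints: IMPORT_REGISTRATION
    }
}
```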

A simple event listener that prints to the console on any Remote Service Admin Event could look like this:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;

@Component(property = EventConstants.EVENT_TOPIC + "=org/osgi/service/remoteserviceadmin/*")
public class RemoteServiceEventListener implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println(event.getTopic());
        for (String objectClass : ((String[]) event.getProperty("objectClass"))) {
            System.out.println("\t" + objectClass);
        }
    }
}

For further details on the Remote Service Admin Events have a look at the OSGi Compendium Specification Chapter 122.7.

If you need to react synchronously on these events, you can implement a RemoteServiceAdminListener. I would not recommend this unless you really want blocking calls on import/export events, as the interface is mainly intended to be used internally by the Remote Service Admin. But for debugging purposes the ECF project provides a DebugRemoteServiceAdminListener, which writes the endpoint description via a Writer to support debugging of Remote Services. Via the following class you can easily register a DebugRemoteServiceAdminListener via OSGi DS that prints the information to the console.

import org.eclipse.ecf.osgi.services.remoteserviceadmin.DebugRemoteServiceAdminListener;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.remoteserviceadmin.RemoteServiceAdminListener;

@Component
public class DebugListener
    extends DebugRemoteServiceAdminListener
    implements RemoteServiceAdminListener {
    // register the DebugRemoteServiceAdminListener via DS
}

To test this you can either add the above components to one of the existing bundles, or create a new bundle and add that bundle to the runtimes.

Runtime Debugging

The ECF project provides several ways for runtime inspection and runtime debugging. This is mainly done via Gogo Shell commands provided by separate bundles. To enable the OSGi console and the ECF console commands, you need to add the following bundles to your runtime:

  • Equinox Console
    • org.apache.felix.gogo.command
    • org.apache.felix.gogo.runtime
    • org.apache.felix.gogo.shell
    • org.eclipse.equinox.console
  • ECF Console
    • org.eclipse.ecf.console
    • org.eclipse.ecf.osgi.services.remoteserviceadmin.console

If you add those bundles to the Service Provider Runtime, you also need to add the -console parameter to the Program Arguments of the Product Configuration (Launching tab) to activate the OSGi Console in interactive mode. Of course adding the ECF Console bundles to the Service Consumer Runtime is also very helpful for debugging.

With the ECF Console bundles added to the runtime, there are several commands to inspect and interact with the Remote Service Admin. As an overview the available commands are listed in the wiki:
Gogo Commands for Remote Services Development

Additionally the DebugRemoteServiceAdminListener described above is activated by default with the ECF Console bundles. It can be activated or deactivated in the runtime via the command

ecf:rsadebug <true/false>

JAX-RS Distribution Provider

One of the biggest issues I faced when working with Remote Services is networking, as mentioned in the introduction. In the above example the ECF Generic Distribution Provider is used for a simpler setup. But in a corporate network with firewalls enabled somewhere in the network setup, the example will probably not work. As said before, the ECF project provides multiple Distribution Provider implementations, which gives you the opportunity to configure the setup to match the project needs. One interesting implementation in that area is the JAX-RS Distribution Provider, as it can help to solve several of the networking issues related to firewalls. But as with the whole Remote Services topic, the complexity of the setup is quite high because of the increased number of dependencies that need to be resolved.

The JAX-RS Distribution Provider implementation is available for Eclipse Jersey and Apache CXF. It uses the OSGi HttpService to register the JAX-RS resource, and of course it then also needs a Servlet container like Eclipse Jetty to provide the JAX-RS resource. I will show the usage of the Jersey based implementation in the following sections.

Project Setup

As a first step the JAX-RS Distribution Provider needs to be consumed. In PDE this means to add it to the Target Definition. Unfortunately it is not officially released via the Eclipse Foundation infrastructure, but the p2 update site is available via the GitHub project.

  • Open the target definition in org.fipro.remoteservice.target/org.fipro.remoteservice.target.target
  • Add a new Software Site in the opened Target Definition Editor by clicking Add… in the Locations section
    • Select Software Site
    • Software Site https://raw.githubusercontent.com/ECF/JaxRSProviders/master/build/
    • Uncheck Group by Category
    • Select the following items:
      • ECF Remote Services JAX-RS Jersey Client Provider
      • ECF Remote Services JAX-RS Jersey Server Provider
    • Click Finish
  • Add Jetty to the Target Definition
    • Select the Software Site https://download.eclipse.org/releases/2021-12/
    • Select Edit…
    • Select the following item:
      • Jetty Http Server Feature
  • Activate the target platform by clicking Set as Target Platform in the upper right corner of the Target Definition Editor

The source of the .target file should look similar to the following snippet, just in case you are using the Generic Text Editor for creating and editing a Target Definition instead of the wizard based PDE Target Definition Editor.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?pde version="3.8"?>
<target name="org.fipro.remoteservice.target">
    <locations>
        <location includeAllPlatforms="false" includeConfigurePhase="true" includeMode="planner" includeSource="true" type="InstallableUnit">
            <repository location="https://download.eclipse.org/releases/2021-12"/>
            <unit id="org.eclipse.equinox.compendium.sdk.feature.group" version="3.22.200.v20211021-1418"/>
            <unit id="org.eclipse.equinox.core.sdk.feature.group" version="3.23.200.v20211104-1730"/>
            <unit id="org.eclipse.equinox.executable.feature.group" version="3.8.1400.v20211117-0650"/>
            <unit id="org.eclipse.equinox.server.jetty.feature.group" version="1.10.900.v20211021-1418"/>
        </location>
        <location includeAllPlatforms="false" includeConfigurePhase="true" includeMode="planner" includeSource="true" type="InstallableUnit">
            <repository location="https://raw.githubusercontent.com/ECF/JaxRSProviders/master/build/"/>
            <unit id="org.eclipse.ecf.provider.jersey.client.feature.feature.group" version="0.0.0"/>
            <unit id="org.eclipse.ecf.provider.jersey.server.feature.feature.group" version="0.0.0"/>
        </location>
        <location includeAllPlatforms="false" includeConfigurePhase="true" includeMode="planner" includeSource="true" type="InstallableUnit">
            <repository location="https://download.eclipse.org/rt/ecf/3.14.31/site.p2"/>
            <unit id="org.eclipse.ecf.remoteservice.sdk.feature.feature.group" version="3.14.31.v20220116-0708"/>
        </location>
    </locations>
</target>

JAX-RS Remote Service Implementation

The implementation of the service already looks different compared to what you have seen so far. Instead of only adding the necessary component properties to configure the service as a Remote Service, the service implementation directly contains the JAX-RS annotations. That of course also means that these annotations need to be available.

  • Create the Service Implementation plug-in project
    • File -> New -> Plug-in Project
    • Set name to org.fipro.modifier.uppercase
    • Click Next
    • Use the following settings:
      • Execution Environment: JavaSE-11
      • Uncheck Generate an activator
      • Uncheck This plug-in will make contributions to the UI
      • Create a rich client application? No
    • Click Finish
  • Open the MANIFEST.MF file and switch to the Dependencies tab
    • Add the following dependencies on the Imported Packages side:
      • javax.ws.rs
      • javax.ws.rs.core
      • org.fipro.modifier.api (1.0.0)
      • org.osgi.service.component.annotations (1.3.0)
    • Mark org.osgi.service.component.annotations as Optional via Properties… to ensure there are no runtime dependencies.
  • Create a new package org.fipro.modifier.uppercase
  • Copy the following UppercaseModifier class into that package
package org.fipro.modifier.uppercase;

import java.util.Locale;

import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

//The JAX-RS path annotation for this service
@Path("/modify")
//The OSGi DS component annotation
@Component(
    immediate = true,
    property = { 
        "service.exported.interfaces=*",
        "service.exported.intents=jaxrs"})
public class UppercaseModifier implements StringModifier {

    @GET
    // The JAX-RS annotation to specify the result type
    @Produces(MediaType.TEXT_PLAIN)
    // The JAX-RS annotation to specify that the last part
    // of the URL is used as method parameter
    @Path("/{value}")
    @Override
    public String modify(@PathParam("value") String input) {
        return (input != null)
            ? input.toUpperCase(Locale.getDefault())
            : "No input given";
    }
}

For the JAX-RS annotations, please have a look at the various existing tutorials and blog posts on the internet.

About the OSGi DS configuration:

  • The service is an Immediate Component, so it is consumed by the OSGi Http Whiteboard on startup
  • Export all interfaces as a Remote Service via service.exported.interfaces=*
  • Configure that JAX-RS is used as the communication mechanism by the distribution provider via service.exported.intents=jaxrs

Note:
As mentioned earlier there is a bug in ECF 3.14.26, which is integrated in the Eclipse 2021-12 SimRel repo. The service.exported.intents property is not enough to get the JAX-RS resource registered. Additionally it is necessary to set service.exported.configs=ecf.jaxrs.jersey.server to make it work. This was fixed shortly after I reported it and is included in the current ECF 3.14.31 release. The basic idea of the intent configuration is to make the service independent of the underlying JAX-RS Distribution Provider implementation (Jersey vs. Apache CXF).

JAX-RS Jersey Distribution Provider Dependencies

For the JAX-RS Distribution Provider Runtime a lot more dependencies are required. The following list should cover the additional necessary base dependencies:

  • Jackson
    • com.fasterxml.jackson.core.jackson-annotations
    • com.fasterxml.jackson.core.jackson-core
    • com.fasterxml.jackson.core.jackson-databind
    • com.fasterxml.jackson.jaxrs.jackson-jaxrs-base
    • com.fasterxml.jackson.jaxrs.jackson-jaxrs-json-provider
    • com.fasterxml.jackson.module.jackson-module-jaxb-annotations
  • Jersey / Glassfish / Dependencies
    • org.glassfish.hk2.api
    • org.glassfish.hk2.external.aopalliance-repackaged
    • org.glassfish.hk2.external.jakarta.inject
    • org.glassfish.hk2.locator
    • org.glassfish.hk2.osgi-resource-locator
    • org.glassfish.hk2.utils
    • org.glassfish.jersey.containers.jersey-container-servlet
    • org.glassfish.jersey.containers.jersey-container-servlet-core
    • org.glassfish.jersey.core.jersey-client
    • org.glassfish.jersey.core.jersey-common
    • org.glassfish.jersey.core.jersey-server
    • org.glassfish.jersey.ext.jersey-entity-filtering
    • org.glassfish.jersey.inject.jersey-hk2
    • org.glassfish.jersey.media.jersey-media-jaxb
    • org.glassfish.jersey.media.jersey-media-json-jackson
    • com.sun.activation.javax.activation
    • jakarta.annotation-api
    • jakarta.servlet-api
    • jakarta.ws.rs-api
    • jakarta.xml.bind-api
    • javassist
    • javax.validation.api
    • org.slf4j.api

For the Service Provider we need the following dependencies, which are the JAX-RS Jersey Distribution Provider Server bundles, Jetty as the embedded server, and the HTTP Whiteboard:

  • ECF Distribution Provider – JAX-RS Jersey
    • org.eclipse.ecf.provider.jaxrs
    • org.eclipse.ecf.provider.jaxrs.server
    • org.eclipse.ecf.provider.jersey.server
  • Jetty
    • org.eclipse.jetty.http
    • org.eclipse.jetty.io
    • org.eclipse.jetty.security
    • org.eclipse.jetty.server
    • org.eclipse.jetty.servlet
    • org.eclipse.jetty.util
    • org.eclipse.jetty.util.ajax
  • OSGi Whiteboard (Equinox / Jetty)
    • org.eclipse.equinox.http.jetty
    • org.eclipse.equinox.http.servlet

For the Service Consumer we need the following dependencies, which are the JAX-RS Jersey Distribution Provider Client bundles to be able to access the JAX-RS resource:

  • ECF Distribution Provider – JAX-RS Jersey
    • org.eclipse.ecf.provider.jaxrs
    • org.eclipse.ecf.provider.jaxrs.client
    • org.eclipse.ecf.provider.jersey.client

Service Provider Runtime

  • Create a Product Project
    • Main Menu → File → New → Project → General → Project
    • Set name to org.fipro.modifier.uppercase.product
    • Click Finish
  • Create a new Product Configuration
    • Right click on project → New → Other… → Plug-in Development → Product Configuration
    • Set the filename to org.fipro.modifier.uppercase.product
  • Configure the product
    • Select the Overview tab
      • Set the General Information
        ID = org.fipro.modifier.uppercase.product
        Version = 1.0.0.qualifier
        Check The product includes native launcher artifacts
      • In the Product Definition section leave the Product and Application empty and select The product configuration is based on: plug-ins
    • Select the Contents tab
      • Add the following plug-ins
        • com.fasterxml.jackson.core.jackson-annotations
        • com.fasterxml.jackson.core.jackson-core
        • com.fasterxml.jackson.core.jackson-databind
        • com.fasterxml.jackson.jaxrs.jackson-jaxrs-base
        • com.fasterxml.jackson.jaxrs.jackson-jaxrs-json-provider
        • com.fasterxml.jackson.module.jackson-module-jaxb-annotations
        • com.sun.activation.javax.activation
        • jakarta.annotation-api
        • jakarta.servlet-api
        • jakarta.ws.rs-api
        • jakarta.xml.bind-api
        • javassist
        • javax.validation.api
        • org.apache.felix.gogo.command
        • org.apache.felix.gogo.runtime
        • org.apache.felix.gogo.shell
        • org.apache.felix.scr
        • org.eclipse.core.jobs
        • org.eclipse.ecf
        • org.eclipse.ecf.console
        • org.eclipse.ecf.discovery
        • org.eclipse.ecf.identity
        • org.eclipse.ecf.osgi.services.distribution
        • org.eclipse.ecf.osgi.services.remoteserviceadmin
        • org.eclipse.ecf.osgi.services.remoteserviceadmin.console
        • org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy
        • org.eclipse.ecf.provider.jaxrs
        • org.eclipse.ecf.provider.jaxrs.server
        • org.eclipse.ecf.provider.jersey.server
        • org.eclipse.ecf.provider.jmdns
        • org.eclipse.ecf.remoteservice
        • org.eclipse.ecf.remoteservice.asyncproxy
        • org.eclipse.ecf.sharedobject
        • org.eclipse.equinox.common
        • org.eclipse.equinox.concurrent
        • org.eclipse.equinox.console
        • org.eclipse.equinox.event
        • org.eclipse.equinox.http.jetty
        • org.eclipse.equinox.http.servlet
        • org.eclipse.jetty.http
        • org.eclipse.jetty.io
        • org.eclipse.jetty.security
        • org.eclipse.jetty.server
        • org.eclipse.jetty.servlet
        • org.eclipse.jetty.util
        • org.eclipse.jetty.util.ajax
        • org.eclipse.osgi
        • org.eclipse.osgi.services
        • org.eclipse.osgi.services.remoteserviceadmin
        • org.eclipse.osgi.util
        • org.fipro.modifier.api
        • org.fipro.modifier.uppercase
        • org.glassfish.hk2.api
        • org.glassfish.hk2.external.aopalliance-repackaged
        • org.glassfish.hk2.external.jakarta.inject
        • org.glassfish.hk2.locator
        • org.glassfish.hk2.osgi-resource-locator
        • org.glassfish.hk2.utils
        • org.glassfish.jersey.containers.jersey-container-servlet
        • org.glassfish.jersey.containers.jersey-container-servlet-core
        • org.glassfish.jersey.core.jersey-client
        • org.glassfish.jersey.core.jersey-common
        • org.glassfish.jersey.core.jersey-server
        • org.glassfish.jersey.ext.jersey-entity-filtering
        • org.glassfish.jersey.inject.jersey-hk2
        • org.glassfish.jersey.media.jersey-media-jaxb
        • org.glassfish.jersey.media.jersey-media-json-jackson
        • org.slf4j.api
    • Select Configuration tab
      • Add the following bundles to the Start Levels section by clicking the Add… button:
        • org.eclipse.ecf.osgi.services.distribution
        • org.eclipse.equinox.event
        • org.eclipse.equinox.http.jetty
      • Set Auto-Start for every bundle in the Start Levels section to true
    • Select Launching tab
      • Add
        -console 
        to the Program Arguments
      • Add
        -Declipse.ignoreApp=true -Dosgi.noShutdown=true
        to the VM Arguments
      • Add
        -Dorg.osgi.service.http.port=8181
        to the VM Arguments to configure the Http Service

Now you can start the Uppercase JAX-RS Service Runtime from the Overview tab via Launch an Eclipse application. After the runtime is started the service will be available as JAX-RS resource and can be accessed in a browser, e.g. http://localhost:8181/modify/remoteservice

Note:
Don’t worry if you see a SelectContainerException in the console. It only indicates that the service from the first part of the tutorial cannot be imported into the runtime of this part of the tutorial, and vice versa. The first service is distributed via the Generic Provider, while the second service is distributed via the JAX-RS Provider, but both use the JmDNS Discovery Provider.

The URL path is defined via the JAX-RS annotations, “modify” via @Path("/modify") on the class, “remoteservice” is the path parameter defined via @Path("/{value}") on the method (if you change that value, the result will change accordingly). You can extend the URL via configurations shown below:

  • Add a prefix URL path segment on runtime level:
    Add the following system property to your runtime configuration via VM Arguments
    -Decf.jaxrs.server.pathPrefix=<value>
    (e.g. -Decf.jaxrs.server.pathPrefix=/services)
  • Add a leading URL path segment on service level:
    Add the following component property to the @Component annotation
    ecf.jaxrs.server.pathPrefix=<value>
    e.g.
@Component(
    immediate = true,
    property = {
        "service.exported.interfaces=*",
        "service.exported.intents=jaxrs",
        "ecf.jaxrs.server.pathPrefix=/upper"})

If all of the above configurations are added, the new URL to the service is, e.g. http://localhost:8181/services/upper/modify/remoteservice
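To make the composition explicit, the final URL is simply the concatenation of the runtime-level prefix, the component-level prefix, the class-level @Path and the method-level path parameter. A small plain-Java sketch (illustration only, not ECF code; the class and method names are made up):

```java
// Illustrates how the URL segments from the different configuration
// levels are concatenated to form the final service URL.
public class UrlCompositionDemo {

    static String url(String runtimePrefix, String servicePrefix,
            String classPath, String value) {
        return "http://localhost:8181"
            + runtimePrefix   // -Decf.jaxrs.server.pathPrefix (VM argument)
            + servicePrefix   // ecf.jaxrs.server.pathPrefix (component property)
            + classPath       // @Path on the class
            + "/" + value;    // @Path("/{value}") on the method
    }

    public static void main(String[] args) {
        System.out.println(url("/services", "/upper", "/modify", "remoteservice"));
        // http://localhost:8181/services/upper/modify/remoteservice
    }
}
```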

Additional information about available component properties can be found here: Jersey Service Properties

Note:
Especially the auto-start configuration is quite annoying with the Equinox launcher when you know that the Bnd launcher or the Felix launcher have configuration attributes for auto-starting all bundles. The Equinox launcher does not have such a configuration AFAIK, but you could achieve something similar by either implementing a custom Configurator or by registering a BundleListener that starts all bundles in RESOLVED state. I stick to the Equinox default to avoid additional topics here, but for the interested, have a look at the provided links.

Note:
With the latest version of the JAX-RS Distribution Provider, the autostart configuration is much more comfortable than before. There were several improvements to make the definition of a runtime more user friendly, so if you are already familiar with the JAX-RS Distribution Provider and used it in the past, be sure to update it to the latest version to benefit from those modifications.

Service Consumer Runtime

To consume the Remote Service provided via JAX-RS Distribution Provider, the runtime needs to be extended to include the additional dependencies:

  • Open the Product Configuration in org.fipro.modifier.client.product
    • Select the Contents tab
      • Add the following plug-ins to the existing configuration
        • com.fasterxml.jackson.core.jackson-annotations
        • com.fasterxml.jackson.core.jackson-core
        • com.fasterxml.jackson.core.jackson-databind
        • com.fasterxml.jackson.jaxrs.jackson-jaxrs-base
        • com.fasterxml.jackson.jaxrs.jackson-jaxrs-json-provider
        • com.fasterxml.jackson.module.jackson-module-jaxb-annotations
        • com.sun.activation.javax.activation
        • jakarta.annotation-api
        • jakarta.servlet-api
        • jakarta.ws.rs-api
        • jakarta.xml.bind-api
        • javassist
        • javax.validation.api
        • org.eclipse.ecf.provider.jaxrs
        • org.eclipse.ecf.provider.jaxrs.client
        • org.eclipse.ecf.provider.jersey.client
        • org.glassfish.hk2.api
        • org.glassfish.hk2.external.aopalliance-repackaged
        • org.glassfish.hk2.external.jakarta.inject
        • org.glassfish.hk2.locator
        • org.glassfish.hk2.osgi-resource-locator
        • org.glassfish.hk2.utils
        • org.glassfish.jersey.containers.jersey-container-servlet
        • org.glassfish.jersey.containers.jersey-container-servlet-core
        • org.glassfish.jersey.core.jersey-client
        • org.glassfish.jersey.core.jersey-common
        • org.glassfish.jersey.core.jersey-server
        • org.glassfish.jersey.ext.jersey-entity-filtering
        • org.glassfish.jersey.inject.jersey-hk2
        • org.glassfish.jersey.media.jersey-media-jaxb
        • org.glassfish.jersey.media.jersey-media-json-jackson

If you now start the Service Consumer Runtime and have the Service Provider Runtime also running, you can execute the following command

modify jax

This will actually lead to an error if you followed my tutorial step by step:

ServiceException: Service exception on remote service proxy

The reason is that the Service Interface does not contain the JAX-RS annotations the service implementation has, and therefore the mapping is not working. So while the interface does not need to be modified for providing the service, it does for the consumer side.

Note:
I sometimes encountered a Circular reference detected error. After some investigation this issue seems to be related to autostarting org.apache.felix.scr. If you have auto-start set to true for that bundle and see that issue, try to remove the autostart configuration for that bundle.
Also ensure that the workspace data is cleared on start, as the previous execution might have left some cached data that conflicts with the updated runtime configuration. To do this:

  • Main Menu → Run → Run Configurations… → Select the product configuration in the tree → Main tab → Activate Clear: workspace

If that doesn’t help, try to delete the run configuration and create a new one via the Product Configuration.

Extend the Service Interface

  • Open the file org.fipro.modifier.api/META-INF/MANIFEST.MF
    • Add the following entries to Imported Packages
      • javax.ws.rs
      • javax.ws.rs.core
  • Open the StringModifier class and add the JAX-RS annotations to be exactly the same as for the Service Implementation
package org.fipro.modifier.api;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/modify")
public interface StringModifier {
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    @Path("/{value}")
    String modify(@PathParam("value") String input);
}

If you now start the Uppercase Service Provider Runtime and the Service Consumer Runtime again, the error should be gone and you should see the expected result.

Update the “Inverter” Service Provider Runtime

After the Service Interface was extended to include the JAX-RS annotations, the first Service Provider Runtime will not resolve anymore because of missing dependencies. To fix this:

  • Open the Product Configuration in org.fipro.modifier.inverter.product
    • Select the Contents tab
      • Add the following plug-ins
        • com.sun.activation.javax.activation
        • jakarta.ws.rs-api
        • jakarta.xml.bind-api

Now you can start that Service Provider Runtime again. If the other Service Provider and the Service Consumer are also active, executing the modify command will now output the result of both services.

Endpoint Description Extender Format (EDEF)

In the tutorial we used JmDNS/Zeroconf as Discovery Provider. This way there is not much to do as a developer or administrator apart from adding the corresponding bundle to the runtime. This kind of Discovery uses a broadcast mechanism to announce the service in the network. In cases where this doesn’t work, e.g. because firewall rules block broadcasting, you can also use a static file-based discovery. This can be done using the Endpoint Description Extender Format (EDEF), which is also supported by ECF.

Let’s create an additional service that is distributed via JAX-RS. But this time we exclude the org.eclipse.ecf.provider.jmdns bundle, so there is no additional discovery inside the Service Provider Runtime. We also add the console bundles to be able to inspect the runtime.

Note:
If you don’t want to create another service, you can also modify the previous uppercase service. In that case remove the org.eclipse.ecf.provider.jmdns bundle from the product configuration and ensure that the console bundles are added to be able to inspect the remote service runtime via the OSGi Console.

  • Create the Service Implementation plug-in project
    • File -> New -> Plug-in Project
    • Set name to org.fipro.modifier.camelcase
    • Click Next
    • Use the following settings:
      • Execution Environment: JavaSE-11
      • Uncheck Generate an activator
      • Uncheck This plug-in will make contributions to the UI
      • Create a rich client application? No
    • Click Finish
  • Open the MANIFEST.MF file and switch to the Dependencies tab
    • Add the following dependencies on the Imported Packages side:
      • javax.ws.rs
      • javax.ws.rs.core
      • org.fipro.modifier.api (1.0.0)
      • org.osgi.service.component.annotations (1.3.0)
    • Mark org.osgi.service.component.annotations as Optional via Properties… to ensure there are no runtime dependencies.
  • Create a new package org.fipro.modifier.camelcase
  • Copy the following CamelCaseModifier class into that package
package org.fipro.modifier.camelcase;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.fipro.modifier.api.StringModifier;
import org.osgi.service.component.annotations.Component;

@Path("/modify")
@Component(
    immediate = true,
    property = { 
        "service.exported.interfaces=*",
        "service.exported.intents=jaxrs",
        "ecf.jaxrs.server.pathPrefix=/camelcase"})
public class CamelCaseModifier implements StringModifier {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    @Path("/{value}")
    @Override
    public String modify(@PathParam("value") String input) {
        StringBuilder builder = new StringBuilder();
        if (input != null) {
            for (int i = 0; i < input.length(); i++) {
                char currentChar = input.charAt(i);
                if (i % 2 == 0) {
                    builder.append(Character.toUpperCase(currentChar));
                } else {
                    builder.append(Character.toLowerCase(currentChar));
                }
            }
        }
        else {
            builder.append("No input given");
        }
        return builder.toString();
    }
}
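Independent of the OSGi runtime, the alternating-case logic itself can be tried out as plain Java. The following sketch replicates the body of modify() (the class name is just for illustration):

```java
// Plain-Java sketch of the same alternating-case logic as CamelCaseModifier,
// so the transformation can be verified without starting the OSGi runtime.
public class CamelCaseDemo {

    static String modify(String input) {
        if (input == null) {
            return "No input given";
        }
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < input.length(); i++) {
            char currentChar = input.charAt(i);
            // upper-case every character at an even index, lower-case the rest
            builder.append(i % 2 == 0
                ? Character.toUpperCase(currentChar)
                : Character.toLowerCase(currentChar));
        }
        return builder.toString();
    }

    public static void main(String[] args) {
        // same value as the path parameter in .../camelcase/modify/remoteservice
        System.out.println(modify("remoteservice")); // ReMoTeSeRvIcE
    }
}
```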
  • Create a Product Project
    • Main Menu → File → New → Project → General → Project
    • Set name to org.fipro.modifier.camelcase.product
    • Click Finish
  • Create a new Product Configuration
    • Right click on project → New → Other… → Plug-in Development → Product Configuration
    • Set the filename to org.fipro.modifier.camelcase.product
  • Configure the product
    • Select the Overview tab
      • Set the General Information
        ID = org.fipro.modifier.camelcase.product
        Version = 1.0.0.qualifier
        Check The product includes native launcher artifacts
      • In the Product Definition section leave the Product and Application empty and select The product configuration is based on: plug-ins
    • Select the Contents tab
      • Add the following plug-ins
        • com.fasterxml.jackson.core.jackson-annotations
        • com.fasterxml.jackson.core.jackson-core
        • com.fasterxml.jackson.core.jackson-databind
        • com.fasterxml.jackson.jaxrs.jackson-jaxrs-base
        • com.fasterxml.jackson.jaxrs.jackson-jaxrs-json-provider
        • com.fasterxml.jackson.module.jackson-module-jaxb-annotations
        • com.sun.activation.javax.activation
        • jakarta.annotation-api
        • jakarta.servlet-api
        • jakarta.ws.rs-api
        • jakarta.xml.bind-api
        • javassist
        • javax.validation.api
        • org.apache.felix.gogo.command
        • org.apache.felix.gogo.runtime
        • org.apache.felix.gogo.shell
        • org.apache.felix.scr
        • org.eclipse.core.jobs
        • org.eclipse.ecf
        • org.eclipse.ecf.console
        • org.eclipse.ecf.discovery
        • org.eclipse.ecf.identity
        • org.eclipse.ecf.osgi.services.distribution
        • org.eclipse.ecf.osgi.services.remoteserviceadmin
        • org.eclipse.ecf.osgi.services.remoteserviceadmin.console
        • org.eclipse.ecf.osgi.services.remoteserviceadmin.proxy
        • org.eclipse.ecf.provider.jaxrs
        • org.eclipse.ecf.provider.jaxrs.server
        • org.eclipse.ecf.provider.jersey.server
        • org.eclipse.ecf.remoteservice
        • org.eclipse.ecf.remoteservice.asyncproxy
        • org.eclipse.ecf.sharedobject
        • org.eclipse.equinox.common
        • org.eclipse.equinox.concurrent
        • org.eclipse.equinox.console
        • org.eclipse.equinox.event
        • org.eclipse.equinox.http.jetty
        • org.eclipse.equinox.http.servlet
        • org.eclipse.jetty.http
        • org.eclipse.jetty.io
        • org.eclipse.jetty.security
        • org.eclipse.jetty.server
        • org.eclipse.jetty.servlet
        • org.eclipse.jetty.util
        • org.eclipse.jetty.util.ajax
        • org.eclipse.osgi
        • org.eclipse.osgi.services
        • org.eclipse.osgi.services.remoteserviceadmin
        • org.eclipse.osgi.util
        • org.fipro.modifier.api
        • org.fipro.modifier.camelcase
        • org.glassfish.hk2.api
        • org.glassfish.hk2.external.aopalliance-repackaged
        • org.glassfish.hk2.external.jakarta.inject
        • org.glassfish.hk2.locator
        • org.glassfish.hk2.osgi-resource-locator
        • org.glassfish.hk2.utils
        • org.glassfish.jersey.containers.jersey-container-servlet
        • org.glassfish.jersey.containers.jersey-container-servlet-core
        • org.glassfish.jersey.core.jersey-client
        • org.glassfish.jersey.core.jersey-common
        • org.glassfish.jersey.core.jersey-server
        • org.glassfish.jersey.ext.jersey-entity-filtering
        • org.glassfish.jersey.inject.jersey-hk2
        • org.glassfish.jersey.media.jersey-media-jaxb
        • org.glassfish.jersey.media.jersey-media-json-jackson
        • org.slf4j.api
    • Select Configuration tab
      • Add the following bundles to the Start Levels section by clicking the Add… button:
        • org.eclipse.ecf.osgi.services.distribution
        • org.eclipse.equinox.event
        • org.eclipse.equinox.http.jetty
      • Set Auto-Start for every bundle in the Start Levels section to true
    • Select Launching tab
      • Add
        -console 
        to the Program Arguments
      • Add
        -Declipse.ignoreApp=true -Dosgi.noShutdown=true
        to the VM Arguments
      • Add
        -Dorg.osgi.service.http.port=8282
        to the VM Arguments to configure the Http Service
      • Add
        -Decf.jaxrs.server.pathPrefix=/services
        to the VM Arguments to configure the URL path prefix similar to the uppercase service

Once the runtime is started the service should be available via http://localhost:8282/services/camelcase/modify/remoteservice

You probably noticed a console output on startup that shows the Endpoint Description XML. This is actually what we need for the EDEF file. You can also get the endpoint description at runtime via the ECF Gogo Command listexports <endpoint.id>:

osgi> listexports
endpoint.id                          |Exporting Container ID                       |Exported Service Id
5918da3a-a971-429f-9ff6-87abc70d4742 |http://localhost:8282/services/camelcase     |38

osgi> listexports 5918da3a-a971-429f-9ff6-87abc70d4742
<endpoint-descriptions xmlns="http://www.osgi.org/xmlns/rsa/v1.0.0">
  <endpoint-description>
    <property name="ecf.endpoint.id" value-type="String" value="http://localhost:8282/services/camelcase"/>
    <property name="ecf.endpoint.id.ns" value-type="String" value="ecf.namespace.jaxrs"/>
    <property name="ecf.endpoint.ts" value-type="Long" value="1642667915518"/>
    <property name="ecf.jaxrs.server.pathPrefix" value-type="String" value="/camelcase"/>
    <property name="ecf.rsvc.id" value-type="Long" value="1"/>
    <property name="endpoint.framework.uuid" value-type="String" value="80778aff-63c7-448d-92a5-7902eb6782ae"/>
    <property name="endpoint.id" value-type="String" value="5918da3a-a971-429f-9ff6-87abc70d4742"/>
    <property name="endpoint.package.version.org.fipro.modifier.api" value-type="String" value="1.0.0"/>
    <property name="endpoint.service.id" value-type="Long" value="38"/>
    <property name="objectClass" value-type="String">
      <array>
        <value>org.fipro.modifier.api.StringModifier</value>
      </array>
    </property>
    <property name="remote.configs.supported" value-type="String">
      <array>
        <value>ecf.jaxrs.jersey.server</value>
      </array>
    </property>
    <property name="remote.intents.supported" value-type="String">
      <array>
        <value>passByValue</value>
        <value>exactlyOnce</value>
        <value>ordered</value>
        <value>osgi.async</value>
        <value>osgi.private</value>
        <value>osgi.confidential</value>
        <value>jaxrs</value>
      </array>
    </property>
    <property name="service.imported" value-type="String" value="true"/>
    <property name="service.imported.configs" value-type="String">
      <array>
        <value>ecf.jaxrs.jersey.server</value>
      </array>
    </property>
    <property name="service.intents" value-type="String">
      <array>
        <value>jaxrs</value>
      </array>
    </property>
  </endpoint-description>
</endpoint-descriptions>

The endpoint description is needed by the Service Consumer to discover the new service. Without a Discovery that is broadcasting, the service needs to be discovered statically via an EDEF file. As the EDEF file is registered via manifest header, we create a new plug-in. You could also place it in an existing bundle like org.fipro.modifier.client, but for some more OSGi dynamics fun, let’s create a new plug-in.

  • Create the EDEF configuration plug-in project
    • File -> New -> Plug-in Project
    • Set name to org.fipro.modifier.client.edef
    • Click Next
    • Use the following settings:
      • Execution Environment: JavaSE-11
      • Uncheck Generate an activator
      • Uncheck This plug-in will make contributions to the UI
      • Create a rich client application? No
    • Click Finish
  • Create a new folder edef
  • Create a new file camelcase.xml in that folder
  • Copy the Endpoint Description XML from the previous console command execution into that file
  • Open the build.properties file and add the edef folder to the Binary Build
  • Open the META-INF/MANIFEST.MF file and add the following header
Remote-Service: edef/camelcase.xml
  • Open the Product Configuration in org.fipro.modifier.client.product
    • Select the Contents tab
      • Add the plug-in org.fipro.modifier.client.edef

If you start the Service Consumer Runtime, the service will not be available. This is because the new org.fipro.modifier.client.edef bundle is not activated as nobody requires it (the Equinox default!). But we can activate it via the console. First we need to find the bundle-id via lb and then start it via start <bundle-id>. The output should look similar to the following snippet:

osgi> lb edef
START LEVEL 6
   ID|State      |Level|Name
   63|Resolved   |    4|EDEF Discovery Configuration (1.0.0.qualifier)|1.0.0.qualifier

osgi> start 63

Now the service should be available via the modify command. If you stop the bundle, the service becomes unavailable again.

ECF Extensions to EDEF

The EDEF specification itself would not be sufficient for productive usage. For example, the values of the endpoint description properties need to match. For the endpoint.id this would be really problematic, as that value is a randomly generated UUID that changes on each runtime start. So if the Service Provider Runtime is restarted, there is a new endpoint.id value. ECF includes a mechanism to support the discovery and the distribution even if the endpoint.id of the importer and the exporter do not match. This actually makes the EDEF file support work in productive environments.

ECF also provides a mechanism to create an endpoint description using a properties file. All the necessary endpoint description properties need to be included as properties with the respective types and values. The following example shows the properties representation for the EDEF XML of the above example. Note that for endpoint.id and endpoint.framework.uuid the type is set to uuid and the value is 0. This way ECF will generate a random UUID and the matching feature will ensure that the distribution will work even without matching id values.

ecf.endpoint.id=http://localhost:8282/services/camelcase
ecf.endpoint.id.ns=ecf.namespace.jaxrs
ecf.endpoint.ts:Long=1642761763599
ecf.jaxrs.server.pathPrefix=/camelcase
ecf.rsvc.id:Long=1
endpoint.framework.uuid:uuid=0
endpoint.id:uuid=0
endpoint.package.version.org.fipro.modifier.api=1.0.0
endpoint.service.id:Long=38
objectClass:array=org.fipro.modifier.api.StringModifier
remote.configs.supported:array=ecf.jaxrs.jersey.server
remote.intents.supported:array=passByValue,exactlyOnce,ordered,osgi.async,osgi.private,osgi.confidential,jaxrs
service.imported:boolean=true
service.imported.configs:array=ecf.jaxrs.jersey.server
service.intents:array=jaxrs
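To make the typed key syntax concrete, the following hypothetical parser sketch shows how entries like ecf.rsvc.id:Long=1 map to Java values (this is an illustration only, not ECF’s actual parsing code; class and method names are made up):

```java
import java.util.Map;

/** Illustration only: maps a "key[:type]=value" line to a typed Java value. */
public class EndpointPropertiesSketch {

    static Map.Entry<String, Object> parse(String line) {
        // split on the first '=' only, so URLs in the value stay intact
        String[] kv = line.split("=", 2);
        String key = kv[0];
        String value = kv[1];
        Object typed = value;
        int colon = key.indexOf(':');
        if (colon >= 0) {
            String type = key.substring(colon + 1);
            key = key.substring(0, colon);
            switch (type) {
                case "Long":    typed = Long.valueOf(value); break;
                case "boolean": typed = Boolean.valueOf(value); break;
                case "array":   typed = value.split(","); break;
                default:        typed = value; // String; "uuid" is handled specially by ECF
            }
        }
        return Map.entry(key, typed);
    }

    public static void main(String[] args) {
        System.out.println(parse("ecf.rsvc.id:Long=1"));                 // ecf.rsvc.id=1
        System.out.println(parse("service.imported:boolean=true"));      // service.imported=true
        System.out.println(parse(
            "ecf.endpoint.id=http://localhost:8282/services/camelcase"));
    }
}
```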

Properties files can be used to override values in an underlying XML EDEF file, or even as an alternative, so the XML file is no longer needed. It is even possible to override property values for different environments, which makes this very interesting for productive use. So there can be a default properties file for the basic endpoint description, then an endpoint description per service that derives from the basic settings, and even profile-specific settings that change, for example, the ecf.endpoint.id URLs per profile (DEV/INT/PROD). More details on that topic can be found in the ECF Wiki.
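As an illustration of that layering, a hypothetical profile-specific override file could be as small as this (the file name and layout conventions are assumptions; see the ECF Wiki for the exact mechanism):

```properties
# hypothetical PROD override for the camelcase endpoint description:
# only the endpoint URL changes, all other properties are inherited
# from the base properties file shown above
ecf.endpoint.id=http://prod.example.com:8282/services/camelcase
```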

Alternatively, you can also trigger a remote service import via EDEF programmatically, using classes from the org.osgi.service.remoteserviceadmin package. This way it is possible to dynamically import and close remote service registrations at runtime, without operating via low-level OSGi bundle operations. The following snippet is an example of the programmatic registration of the service above:

import java.util.HashMap;
import java.util.Map;

import org.osgi.service.remoteserviceadmin.EndpointDescription;
import org.osgi.service.remoteserviceadmin.ImportRegistration;
import org.osgi.service.remoteserviceadmin.RemoteServiceAdmin;

// 'admin' is a RemoteServiceAdmin service, e.g. injected via DS
Map<String, Object> properties = new HashMap<>();

properties.put("ecf.endpoint.id", "http://localhost:8282/services/camelcase");
properties.put("ecf.endpoint.id.ns", "ecf.namespace.jaxrs");
properties.put("ecf.endpoint.ts", 1642489801532L);
properties.put("ecf.jaxrs.server.pathPrefix", "/camelcase");
properties.put("ecf.rsvc.id", 1L);
properties.put("endpoint.framework.uuid", "0");
properties.put("endpoint.id", "0");
properties.put("endpoint.package.version.org.fipro.modifier.api", "1.0.0");
properties.put("endpoint.service.id", 38L);
properties.put("objectClass", new String[] { "org.fipro.modifier.api.StringModifier" });
properties.put("remote.configs.supported", new String[] { "ecf.jaxrs.jersey.server" });
properties.put("remote.intents.supported", new String[] { "passByValue", "exactlyOnce", "ordered", "osgi.async", "osgi.private", "osgi.confidential", "jaxrs" });
properties.put("service.imported", "true");
properties.put("service.intents", new String[] { "jaxrs" });
properties.put("service.imported.configs", new String[] { "ecf.jaxrs.jersey.server" });

EndpointDescription desc = new EndpointDescription(properties);
ImportRegistration importRegistration = admin.importService(desc);

// calling importRegistration.close() removes the imported service again

Conclusion

The OSGi specification has several chapters and implementations to support a microservice architecture. The Remote Service and Remote Service Admin specifications are among them, and probably the most complicated ones, which was confirmed by several OSGi experts I talked to at conferences. The specification itself is also not easy to understand, but I hope this blog post helps to get a better understanding.

While Remote Services are pretty easy to implement, the complicated part is the setup of the runtime, collecting all the necessary bundles. While the ECF project provides several examples and also tries to support better bundle resolution, it is still not a trivial task. I hope this tutorial also helps a little in solving that problem.

Of course at runtime you might face networking issues, as I did in every one of my talks. The typical fallacies are even referenced in the Remote Service Specification. With the usage of JAX-RS and HTTP for the distribution of services and EDEF for a static file-based discovery, this might be less problematic. Give them a try if you are running into trouble.

At the end I again want to thank Scott Lewis for his continuous work on ECF and his support whenever I faced issues with my examples and had questions on some details. If you need an extension or if you have other requests regarding ECF or the JAX-RS Distribution Provider, please get in touch with him.

References

Posted in Dirk Fauth, Eclipse, Java, OSGi | Comments Off on Getting Started with OSGi Remote Services – PDE Edition

Eclipse Extended Contribution Pattern

Working in the Panorama project, we developed several architectures and designs to improve the collaboration of heterogeneous systems. Although focused on the automotive and aerospace domains, several topics are useful in general, for example the pipelining of model processing services via a generic REST API. Besides the combination of several self-contained services, the collaboration of heterogeneous organisations is also a big topic. In particular, how can multiple partners with different knowledge and technical skills contribute to a common platform, e.g. the Eclipse IDE or especially APP4MC, which is an Eclipse IDE based product?

The Eclipse Platform is actually designed to be extensible, and there are many products that are based on the Eclipse IDE or can be installed into the Eclipse IDE as additional plug-ins. But to create such extensions you need to know the base you want to extend. In a setup with multiple partners that use different technology stacks and have different levels of experience with Eclipse based technology, you can’t assume that everything works easily. There are partners that have experience with either Eclipse 3 or Eclipse 4, partners that are aware of neither the Eclipse 3 nor the Eclipse 4 platform, and even partners that do not want to care about the underlying platform at all. Therefore we needed to find a way to make it easy for anyone to contribute new features, without too many platform dependencies to take care of.

As a big fan of OSGi Declarative Services (as you might know if you have read some of my previous blog posts), I searched for a way to contribute a new feature to the user interface by implementing and providing an OSGi service. As an Eclipse Platform committer I know that the Eclipse 4 programming model fits very well for connecting the OSGi layer with the Eclipse layer, something that doesn’t work that easily with the Eclipse 3 programming model. I called the solution I developed the Extended Contribution Pattern, which I want to describe here in more detail. And I hope that with the techniques I show here, I can convince more people to use OSGi Declarative Services and the Eclipse 4 programming model in their daily work when creating Eclipse based products.

The main idea is that an integration layer is implemented with the Eclipse 4 programming model. That integration layer is responsible for the contribution to the Eclipse 3 based application (again, this is Eclipse 4 + Compatibility layer). Additionally it takes and processes the contributions provided via OSGi DS.

For people knowing the Eclipse 3 programming model, this sounds pretty similar to how Extension Points work. And the idea is actually the same. But in comparison, as a developer of the integration layer:

  • you don’t need to specify the extension point in an XSD/XML way
  • you can use dependency injection instead of operating on the ExtensionRegistry, which requires quite some code that is also not type safe
  • you can completely rely on the Eclipse 4 programming model and contribute simple POJOs instead of following the class hierarchy in multiple places

As a contributor to the integration layer:

  • you don’t need to specify the extension via plugin.xml
  • you don’t even need to care about the Eclipse platform
  • you simply implement an OSGi service using Declarative Service Annotations and implement a method that follows the contract of the contribution

Note:
The Integration Layer is not needed for connecting the OSGi layer with the Eclipse layer. You can directly consume OSGi services easily via injection in Eclipse 4. The Integration Layer is used to abstract out the UI integration.

Example

In this example I will show how the Extended Contribution Pattern can be used to contribute menu items to the context menu of the navigator views. Of course this could also be achieved by either contributing to the Eclipse 3 extension point or by directly contributing via Eclipse 4 model fragments. But the idea is that contributors of functionality should not have to care about the integration into the platform.

Step 1: Create the plugin for the integration layer

  • Switch to the Plug-in Perspective
  • Create a new Plug-in Project via File -> New -> Plug-in Project
  • Choose a meaningful name (e.g. org.fipro.contribution.integration)
  • Ensure that Generate an activator is unchecked
  • Ensure that This plug-in will make contributions to the UI is checked
  • Ensure that Create a rich client application? is set to No

Step 2: Define the service interface

This step is the easiest one. The service interface needs to be a simple marker interface that will be used to mark a contribution class as an OSGi service.

  • Create a package org.fipro.contribution.integration
  • Create a marker interface NavigatorMenuContribution
package org.fipro.contribution.integration;

public interface NavigatorMenuContribution { }
  • Open the META-INF/MANIFEST.MF file
  • Switch to the Runtime tab and export the org.fipro.contribution.integration package
  • Specify the version 1.0.0 on the package via Properties…

Step 3: Create the plugin for the service contribution

  • Switch to the Plug-in Perspective
  • Create a new Plug-in Project via File -> New -> Plug-in Project
  • Choose a meaningful name (e.g. org.fipro.contribution.service)
  • Ensure that Generate an activator is unchecked
  • Ensure that This plug-in will make contributions to the UI is checked
  • Ensure that Create a rich client application? is set to No

Step 4: Implement a service

Now let’s implement a service for a functionality we want to contribute. The Integration Layer is not complete yet, and typically you would not show the contribution service implementation at this point. But to get a better understanding of the next steps in the Integration Layer, it is good to see what the contribution will look like.

Note:
Don’t forget to enable the DS Annotation processing in the Preferences. Otherwise the necessary OSGi Component Descriptions are not generated. As it is not enabled by default, it is a common pitfall when implementing OSGi Declarative Services with PDE tooling.

First we need to define the dependencies:

  • Open the META-INF/MANIFEST.MF file
  • Switch to the Dependencies tab and add the following packages to the Imported Packages
    • javax.annotation
      Needed for the @PostConstruct annotation
    • org.fipro.contribution.integration (1.0.0)
      Needed for the previously created marker interface
    • org.osgi.service.component.annotations [1.3.0,2.0.0) optional
      Needed for the OSGi DS annotations
  • Add the following plug-ins to the Required Plug-ins section
    • org.eclipse.jface
      Needed for showing dialogs
    • org.eclipse.core.resources
      Needed for the Eclipse Core Resources API to access the Eclipse resources
    • org.eclipse.core.runtime
      Needed as transitive dependency for operating on the resources

Note:
Typically I recommend using Import-Package instead of Require-Bundle. For plain OSGi this is the best solution. But I learned over the years that, especially in the context of Eclipse IDE contributions, being that strict doesn’t work out, mainly because of some split package issues in the Eclipse Platform. My personal rule for PDE based projects is:

  • Bundles / plug-ins that contain services that are not related to UI and could be also part of other OSGi runtimes (e.g. executable jars or integrated in webservices) should only use Import-Package
  • Bundles / plug-ins that contribute to the UI, e.g. the Eclipse IDE, can also use Require-Bundle in some cases, to reduce the manual effort on dependency management
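
As an illustration of this rule, the MANIFEST.MF of a UI contribution plug-in from this tutorial could roughly combine both headers as follows (a sketch; the exact headers PDE generates, including the version ranges, may differ):

```
Import-Package: javax.annotation,
 org.fipro.contribution.integration;version="1.0.0",
 org.osgi.service.component.annotations;version="[1.3.0,2.0.0)";resolution:=optional
Require-Bundle: org.eclipse.jface,
 org.eclipse.core.resources,
 org.eclipse.core.runtime
```

Non-UI service packages go into Import-Package with version ranges, while the Eclipse Platform UI bundles are pulled in via Require-Bundle to reduce the manual dependency management effort.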

Now create the service:

  • Create a package org.fipro.contribution.service
  • Create a component class FileSizeContribution
@Component(property = {
    "name=File Size",
    "description=Show the size of the selected file" })
public class FileSizeContribution implements NavigatorMenuContribution {

    @PostConstruct
    public void showFileSize(IFile file, Shell shell) {
        URI uri = file.getRawLocationURI();
        Path path = Paths.get(uri);
        try {
            long size = Files.size(path);
            MessageDialog.openInformation(
                shell,
                "File size",
                String.format("The size of the selected file is %d bytes", size));
        } catch (IOException e) {
            MessageDialog.openError(
                shell, 
                "Failed to retrieve the file size", 
                "Exception occurred on retrieving the file size: "
                + e.getLocalizedMessage());
        }
    }
}

The important things to notice in the above snippet are:

  • The class needs to implement the marker interface NavigatorMenuContribution
  • The class needs to be annotated via @Component to mark it as an OSGi DS component
  • The @Component annotation has two properties to specify the name and the description. They will later be used for the user interface integration. In my opinion these two properties are component configuration and should therefore be specified as such. You could argue on the other hand that this information could also be provided via dedicated methods, but implementing methods to provide configuration for the service instance feels incorrect.
  • The class contains a single method that is annotated via @PostConstruct. The first method parameter defines for which type the service is responsible.

For a contributor the rules are pretty simple:

  • Mark the contribution with @Component as an OSGi Declarative Service
  • Implement the marker interface
  • Provide a method that is annotated with @PostConstruct
  • The first method parameter needs to be the type the contribution takes care of

A contributor does not need to care about the infrastructure in the Eclipse application and can focus on the feature that should be contributed.

Step 5: Implement a registry as service consumer

Back to the Integration Layer now. To provide as much flexibility as possible on the contributor side, there needs to be a mechanism that maps that flexibility to the real integration. For this we create a registry that consumes the contributions in the first place and stores them for further usage. For the storage we introduce a wrapper around the service that stores the type for which the service should be registered and the properties that should be used in the user interface (e.g. name and description). The issue with the service properties is that they are provided on the OSGi DS injection level and can be retrieved from the ServiceRegistry, but they are not easily accessible in the Eclipse layer. By keeping the information in a wrapper that is populated when the service becomes available, this problem can be handled.

The wrapper class looks similar to the following snippet:

public class NavigatorMenuContributionWrapper {

    private final String id;
    private final NavigatorMenuContribution instance;

    private final String name;
    private final String description;
    private final String type;

    public NavigatorMenuContributionWrapper(
        String id,
        NavigatorMenuContribution instance,
        String name,
        String description,
        String type) {

        this.id = id;
        this.instance = instance;
        this.name = name;
        this.description = description;
        this.type = type;
    }
	
    public String getId() {
        return this.id;
    }
	
    public NavigatorMenuContribution getServiceInstance() {
        return this.instance;
    }
	
    public String getName() {
        return name;
    }
	
    public String getDescription() {
        return description;
    }
	
    public String getType() {
        return type;
    }
}

Note:
If you are sure that the IDE you are contributing to is always started with Java >= 16, you can of course also implement that wrapper as a Java Record, which avoids quite some boilerplate code. In that case the accessor methods are different, as they are not prefixed with get.

public record NavigatorMenuContributionWrapper(
    String id,
    NavigatorMenuContribution serviceInstance,
    String name,
    String description,
    String type) { }

In this tutorial I will stick with the old POJO approach, so people that are not yet on the latest Java version can follow easily.
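
To illustrate the accessor difference, here is a minimal runnable sketch with a simplified stand-in record (hypothetical, with fewer components than the tutorial class):

```java
public class RecordDemo {

    // hypothetical simplified wrapper record, not the tutorial class
    record Wrapper(String id, String name) { }

    public static void main(String[] args) {
        Wrapper w = new Wrapper("file.size", "File Size");
        // record accessors are not prefixed with "get"
        System.out.println(w.name()); // File Size
        // equals/hashCode/toString are generated as well
        System.out.println(w.equals(new Wrapper("file.size", "File Size"))); // true
        System.out.println(w); // Wrapper[id=file.size, name=File Size]
    }
}
```

Besides the shorter accessors, the generated equals/hashCode/toString implementations are a big part of the boilerplate a record saves compared to the POJO above.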

The registry that consumes the NavigatorMenuContribution services and stores them locally has the following characteristics:

  • It is an OSGi service that actually does not need an interface as there will be only one implementation. That means we need to set the service parameter on the @Component annotation.
  • We use the event strategy (method injection) for consuming the NavigatorMenuContribution services. The reason is that we need to create the wrapper instances with the component properties. Field injection would not work here.
  • The Dynamic Reference Policy is used to ensure that services can be registered/unregistered at runtime.
    (more information on reference policies can be found here).
  • There are accessor methods for retrieving the services based on the type.
  • There needs to be a method that extracts the type for which the service is responsible from the @PostConstruct method via reflection. To avoid reflection you could support a component property that gets evaluated, but that would make the contribution not so intuitive, as you would need to specify the same information twice. And actually the reflection is only executed once per service binding, so it should not really have an effect at runtime.
  • Last but not least, if you want OSGi logging you need to get the Logger via method injection of the LoggerFactory. This is due to the fact that PDE does not support DS 1.4 annotation processing. With that support you could get the Logger directly via field injection. Alternatively you can of course use a logging framework like SLF4J and don’t use the OSGi logging at all.

The complete implementation looks like this:

@Component(service = NavigatorMenuContributionRegistry.class)
public class NavigatorMenuContributionRegistry {

    LoggerFactory factory;
    Logger logger;

    private ConcurrentHashMap<String, Map<String, NavigatorMenuContributionWrapper>> registry = new ConcurrentHashMap<>();
	
    @Reference(
        cardinality = ReferenceCardinality.MULTIPLE,
        policy = ReferencePolicy.DYNAMIC)
    protected void bindService(
        NavigatorMenuContribution service, Map<String, Object> properties) {
		
        String className = getClassName(service, properties);
        if (className != null) {
            Map<String, NavigatorMenuContributionWrapper> services = 
                this.registry.computeIfAbsent(
                    className, 
                    key -> new ConcurrentHashMap<String, NavigatorMenuContributionWrapper>());
			
            String id = (String) properties.getOrDefault("id", service.getClass().getName());
            if (!services.containsKey(id)) {
                services.put(id,
                    new NavigatorMenuContributionWrapper(
                        id, 
                        service, 
                        (String) properties.getOrDefault("name", service.getClass().getSimpleName()), 
                        (String) properties.getOrDefault("description", null),
                        className));
            } else {
                if (this.logger != null) {
                    this.logger.error("A NavigatorMenuContribution with the ID {} already exists!", id);
                } else {
                    System.out.println("A NavigatorMenuContribution with the ID " + id + " already exists!");
                }
            }
        } else {
            if (this.logger != null) {
                this.logger.error(
                    "Unable to extract contribution class name for NavigatorMenuContribution {}", 
                    service.getClass().getName());
            } else {
                System.out.println(
                    "Unable to extract contribution class name for NavigatorMenuContribution " 
                    + service.getClass().getName());
            }
        }
    }

    protected void unbindService(
        NavigatorMenuContribution service, Map<String, Object> properties) {

        String className = getClassName(service, properties);
        String id = (String) properties.getOrDefault("id", service.getClass().getName());
        if (className != null) {
            Map<String, NavigatorMenuContributionWrapper> services = 
                this.registry.getOrDefault(className, new HashMap<>());
            services.remove(id);
        }
    }
	
    public List<NavigatorMenuContributionWrapper> getServices(Class<?> clazz) {
        Set<String> classNames = new LinkedHashSet<>();
        if (clazz != null) {
            classNames.add(clazz.getName());
            List<Class<?>> allInterfaces = ClassUtils.getAllInterfaces(clazz);
            classNames.addAll(
                allInterfaces.stream()
                    .map(Class::getName)
                    .collect(Collectors.toList()));
        }

        return classNames.stream()
            .filter(Objects::nonNull)
            .flatMap(name -> this.registry.getOrDefault(name, new HashMap<>()).values().stream())
            .collect(Collectors.toList());
    }

    public NavigatorMenuContributionWrapper getService(String className, String id) {
        return this.registry.getOrDefault(className, new HashMap<>()).get(id);
    }

    /**
     * Extracts the class name for which the service should be
     * registered. Returns the first parameter of the method annotated with
     * {@link PostConstruct} .
     * 
     * @param service The service for which the contribution class name
     *                      should be returned.
     * @param properties    The component properties map of the
     *                      service object.
     * @return The contribution class name for which the service should be
     *         registered.
     */
    private String getClassName(NavigatorMenuContribution service, Map<String, Object> properties) {
        String className = null;

        // find method annotated with @PostConstruct
        Class<?> contributionClass = service.getClass();
        Method[] methods = contributionClass.getMethods();
        for (Method method : methods) {
            if (method.isAnnotationPresent(PostConstruct.class)) {
                Class<?>[] parameterTypes = method.getParameterTypes();
                if (parameterTypes.length > 0) {
                    if (Collection.class.isAssignableFrom(parameterTypes[0])) {
                        // extract generic information for List support
                        Type[] genericParameterTypes = method.getGenericParameterTypes();
                        if (genericParameterTypes[0] instanceof ParameterizedType) {
                            Type[] typeArguments =
                                ((ParameterizedType)genericParameterTypes[0]).getActualTypeArguments();
                            className = typeArguments.length > 0 ? typeArguments[0].getTypeName() : null;
                        }
                    } else {
                        className = parameterTypes[0].getName();
                    }
                    break;
                }
            }
        }

        return className;
    }

    @Reference(
        cardinality = ReferenceCardinality.OPTIONAL,
        policy = ReferencePolicy.DYNAMIC)
    void setLogger(LoggerFactory factory) {
        this.factory = factory;
        this.logger = factory.getLogger(getClass());
    }

    void unsetLogger(LoggerFactory loggerFactory) {
        if (this.factory == loggerFactory) {
            this.factory = null;
            this.logger = null;
        }
    }
}
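
The reflective type extraction in getClassName can be tried outside of an OSGi runtime. The following self-contained sketch shows the same approach, using a stand-in @Marker annotation instead of @PostConstruct and hypothetical contribution classes:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.Collection;
import java.util.List;

public class TypeExtractionDemo {

    // stand-in for javax.annotation.PostConstruct to keep the sketch self-contained
    @Retention(RetentionPolicy.RUNTIME)
    @interface Marker { }

    // hypothetical contributions: one with a plain parameter, one with a Collection
    static class SimpleContribution {
        @Marker
        public void handle(java.io.File file) { }
    }

    static class ListContribution {
        @Marker
        public void handle(List<String> files) { }
    }

    // same extraction idea as getClassName in the registry above
    static String extractType(Class<?> contributionClass) {
        for (Method method : contributionClass.getMethods()) {
            if (method.isAnnotationPresent(Marker.class)) {
                Class<?>[] params = method.getParameterTypes();
                if (params.length > 0) {
                    if (Collection.class.isAssignableFrom(params[0])) {
                        // extract the generic type argument for Collection parameters
                        Type generic = method.getGenericParameterTypes()[0];
                        if (generic instanceof ParameterizedType) {
                            Type[] args = ((ParameterizedType) generic).getActualTypeArguments();
                            return args.length > 0 ? args[0].getTypeName() : null;
                        }
                    }
                    return params[0].getName();
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(extractType(SimpleContribution.class)); // java.io.File
        System.out.println(extractType(ListContribution.class));   // java.lang.String
    }
}
```

As mentioned above, this reflection only runs once per service binding, so the lookup cost is negligible at runtime.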

Remember to update the Dependencies in the MANIFEST.MF to include the necessary packages.

Plug-in Dependencies

With the above implementation we need to add additional dependencies. To avoid complications at implementation time in the next step, we update the plug-in dependencies in advance. As we know that we want to consume OSGi services and operate on Eclipse resources, we know what dependencies we need. In a real-world project the dependencies typically grow while implementing.

  • Open the META-INF/MANIFEST.MF file
  • Switch to the Dependencies tab and add the following packages to the Imported Packages if they are not included yet
    • javax.annotation
      Needed for the @PostConstruct annotation
    • javax.inject (1.0.0)
      Needed for the common injection annotations
    • org.apache.commons.lang (2.6.0)
      Needed to be able to use ClassUtils in the inspection.
    • org.osgi.service.component.annotations [1.3.0,2.0.0) optional
      Needed for the OSGi DS annotations
    • org.osgi.service.log (1.5.0)
      Needed to be able to consume the OSGi Logger
  • Add the following plug-ins to the Required Plug-ins section if they are not included yet
    • org.eclipse.e4.core.contexts
      Needed for the IEclipseContext
    • org.eclipse.e4.core.di
      Needed for the Eclipse specific injection annotations (e.g. @Evaluate)
    • org.eclipse.e4.core.di.extensions
      Needed for the Eclipse specific injection annotations (e.g. @Service)
    • org.eclipse.e4.ui.di
      Needed for the Eclipse UI specific injection annotations (e.g. @AboutToShow)
    • org.eclipse.e4.ui.model.workbench
      Needed for dynamically creating model elements (e.g. MMenuElement)
    • org.eclipse.e4.ui.services
      Needed for the Eclipse UI service specific classes (e.g. IServiceConstants)
    • org.eclipse.e4.ui.workbench
      Needed for the Eclipse UI services (e.g. EModelService)
    • org.eclipse.jface
      Needed for showing dialogs
    • org.eclipse.core.resources
      Needed for the Eclipse Core Resources API to access the Eclipse resources
    • org.eclipse.core.runtime
      Needed as transitive dependency for operating on the resources

Step 6: Define the application model contribution

After the services and the Integration Layer are specified, let’s have a look at how to use it. For this we create a Model Fragment to contribute a dynamic menu contribution to the context menus.

  • Right click on the project org.fipro.contribution.integration
  • New -> Other -> Eclipse 4 -> Model -> New Model Fragment

The wizard that opens will do the following three things:

  1. Create a file named fragment.e4xmi
    This is the application model fragment needed for the contribution.
  2. Create a plugin.xml file and add it to the build.properties
    Needed to contribute the Model Fragment via Extension Point
  3. Update the MANIFEST.MF
    Include the necessary dependency for the Extension Point

Since Eclipse 2021-06 (4.20) it is also possible to register a Model Fragment via Manifest header. To make use of this follow these steps:

  • Delete the plugin.xml file, remember to also remove it from the build.properties again
  • Add the following line to the MANIFEST.MF file
    Model-Fragment: fragment.e4xmi;apply=always
  • Remove the dependency to the bundle org.eclipse.e4.ui.model.workbench from the MANIFEST.MF file if not needed. In this example we will not remove it, as we need it in another use case for dynamically creating model elements.

Note:
Adding support for the new Model-Fragment header in the PDE tooling is currently ongoing, e.g. via Bug 572946. So with the next Eclipse 2021-12 (4.22) release the manual modification will not be necessary anymore. Eclipse 2021-12 M3 already includes the support. Using that version you will see this wizard:

Model Fragment Wizard in Eclipse 2021-12 M3

The next step is to define the model contributions. This example is about contributing a Dynamic Menu Contribution to the context menu of the Navigators. Therefore it is necessary to contribute a Command, a Handler and the Menu Contribution. To do this start by opening the fragment.e4xmi file.

Command Contribution

  • Select Model Fragments in the tree on the left side of the Eclipse Model Editor
  • Click the Add button on the detail pane on the right side (can also be done via context menu in the tree)
  • In the details pane for the created Model Fragment set
    • Extended Element-ID: xpath:/
    • Feature Name: commands
    • Click the Add button to add a new Command
  • In the details pane for the created Command
    • Name: File Navigator Action
    • Click the Add button to add a Command Parameter
  • In the details pane for the created Command Parameter
    • ID: contribution.id
    • Name: ID
    • Optional: unchecked
  • Select the previously created Command in the tree and add an additional Command Parameter
  • In the details pane for the created Command Parameter
    • ID: contribution.type
    • Name: Type
    • Optional: unchecked

Handler Contribution

  • Select Model Fragments in the tree on the left side of the Eclipse Model Editor
  • Click the Add button on the detail pane on the right side (can also be done via context menu in the tree)
  • In the details pane for the created Model Fragment set
    • Extended Element-ID: xpath:/
    • Feature Name: handlers
    • Click the Add button to add a new Handler
  • In the details pane for the created Handler
    • Command: File Navigator Action
    • Click on Class URI to open the wizard for the creation of the handler implementation
      • Package: org.fipro.contribution.integration
      • Name: FileNavigatorActionHandler
      • Click Finish

Menu Contribution

  • Select Model Fragments in the tree on the left side of the Eclipse Model Editor
  • Click the Add button on the detail pane on the right side (can also be done via context menu in the tree)
  • In the details pane for the created Model Fragment set
    • Extended Element-ID: xpath:/
    • Feature Name: menuContributions
    • Click the Add button to add a new MenuContribution
  • In the details pane for the created MenuContribution
    • Parent-ID: popup
    • Position: after=additions
    • Select Menu in the dropdown and click the Add button to add a Menu
  • In the details pane for the created Menu
    • Label: Navigator Contributions
    • Visible-When Expression: ImperativeExpression
    • Select Dynamic Menu Contribution in the dropdown and click the Add button
  • In the details pane for the created Dynamic Menu Contribution
    • Click on Class URI to open the wizard for the creation of the implementation
      • Package: org.fipro.contribution.integration
      • Name: DynamicMenuContribution
      • Click Finish
  • Select the Imperative Expression of the Menu in the tree pane
    • Click on Class URI to open the wizard for the creation of the implementation
      • Package: org.fipro.contribution.integration
      • Name: ResourceExpression
      • Click Finish

After the above steps the model fragment is prepared for the contributions and the corresponding classes are generated. The next step is to implement the Imperative Expression, the Handler and the Dynamic Menu Contribution.

Imperative Expression

Imperative Expressions are the replacement for Core Expressions if you want to rely on plain Eclipse 4 without the plugin.xml. Using Imperative Expressions you have the option to implement an expression rather than describing it in an XML format. As in my opinion the definition of a Core Expression in the plugin.xml was never really intuitive, I really like the Imperative Expression in Eclipse 4. One might argue that the declarative way of the Core Expressions is more powerful, but actually I have not yet found a case where an Imperative Expression is not a suitable replacement.

The following code shows the implementation of the ResourceExpression that checks if a single element is selected, that element is an IResource, and at least one contribution service is registered for that type.

public class ResourceExpression {
	
    @Evaluate
    public boolean evaluate(
        @Optional @Named(IServiceConstants.ACTIVE_SELECTION)
        IStructuredSelection selection,
        @Service
        NavigatorMenuContributionRegistry registry) {
		
        return (selection != null && selection.size() == 1
            && (selection.getFirstElement() instanceof IResource)
            && !registry.getServices(
                   selection.getFirstElement().getClass()).isEmpty());
    }
}

Dynamic Menu Contribution

The Dynamic Menu Contribution implementation takes the selected element and tries to retrieve the registered contribution services from the registry. If services for the selected type are registered it creates the menu items that should be added to the context menu.

public class DynamicMenuContribution {
	
    @AboutToShow
    public void aboutToShow(
        List<MMenuElement> items, 
        EModelService modelService,
        MApplication app,
        @Service NavigatorMenuContributionRegistry registry,
        @Named(IServiceConstants.ACTIVE_SELECTION) IStructuredSelection selection) {
		
        List<NavigatorMenuContributionWrapper> services = 
            registry.getServices(selection.getFirstElement().getClass());

        services.forEach(s -> {
            MHandledMenuItem menuItem =
                MMenuFactory.INSTANCE.createHandledMenuItem();
            menuItem.setLabel(s.getName());
            menuItem.setTooltip(s.getDescription());
            menuItem.setElementId(s.getId());
            menuItem.setContributorURI(
                "platform:/plugin/org.fipro.contribution.integration");

            List<MCommand> command = modelService.findElements(
                app,
                "org.fipro.contribution.integration.command.filenavigatoraction",
                MCommand.class);
            menuItem.setCommand(command.get(0));

            MParameter parameter = MCommandsFactory.INSTANCE.createParameter();
            parameter.setName("contribution.id");
            parameter.setValue(s.getId());
            menuItem.getParameters().add(parameter);

            parameter = MCommandsFactory.INSTANCE.createParameter();
            parameter.setName("contribution.type");
            parameter.setValue(s.getType());
            menuItem.getParameters().add(parameter);

            items.add(menuItem);
        });
    }
}

Handler

The handler is triggered by selecting the generated menu item and therefore gets the provided command parameters. It then uses the ContextInjectionFactory to execute the method annotated with @PostConstruct on the service instance. The following code shows how this could look.

public class FileNavigatorActionHandler {
	
    @Execute
    public void execute(
        @Named("contribution.type") String type,
        @Named("contribution.id") String id,
        @Named(IServiceConstants.ACTIVE_SELECTION) IStructuredSelection selection,
        @Service NavigatorMenuContributionRegistry registry,
        IEclipseContext context) {
		
        NavigatorMenuContributionWrapper wrapper =
            registry.getService(type, id);

        if (wrapper != null) {

            IEclipseContext activeContext = 
                context.createChild(type + " NavigatorMenuContribution");
            activeContext.set(wrapper.getType(), selection.getFirstElement());

            try {
                ContextInjectionFactory.invoke(
                    wrapper.getServiceInstance(),
                    PostConstruct.class,
                    activeContext);
            } finally {
                // dispose the context after the execution to avoid memory leaks
                activeContext.dispose();
            }
        }
    }
}

Step 7: Testing

Let’s verify if everything works as intended. For this simply right click on one of the projects and select
Run As -> Eclipse Application

This will start an Eclipse IDE that has the plug-ins from the workspace installed.

In the newly opened Eclipse instance create a new project. In that project create a directory and a file. If you right click on the created directory, you should not see any additional menu entry. But on performing a right click on the created file, you should find the menu entry Navigator Contributions, which is a sub-menu that contains the File Size entry. Selecting that should open a dialog that shows the size of the selected file. Hovering the File Size menu entry should also open the tooltip with the description that is provided via service property.

Note:
For this example use a simple text file. Creating for example a Java source file will not work, as a Java source file is a CompilationUnit, which is not an IResource.

Step 8: Extending the example

Now let’s extend the example and contribute some more features to verify if the Extended Contribution Pattern works.

  • Switch to the Plug-in Perspective
  • Create a new Plug-in Project via File -> New -> Plug-in Project
  • Choose a meaningful name (e.g. org.fipro.contribution.extended)
  • Ensure that Generate an activator is unchecked
  • Ensure that This plug-in will make contributions to the UI is checked
  • Ensure that Create a rich client application? is set to No
  • Open the META-INF/MANIFEST.MF file
  • Switch to the Dependencies tab and add the following packages to the Imported Packages
    • javax.annotation
      Needed for the @PostConstruct annotation
    • org.fipro.contribution.integration (1.0.0)
      Needed for the previously created marker interface
    • org.osgi.service.component.annotations [1.3.0,2.0.0) optional
      Needed for the OSGi DS annotations
  • Add the following plug-ins to the Required Plug-ins section
    • org.eclipse.jface
      Needed for showing dialogs
    • org.eclipse.core.resources
      Needed for the Eclipse Core Resources API to access the Eclipse resources
    • org.eclipse.core.runtime
      Needed as transitive dependency for operating on the resources
  • Create another contribution for IFile handling, e.g. FileCopyContribution as shown below
@Component(property = {
    "name=File Copy",
    "description=Create a copy of the selected file" })
public class FileCopyContribution implements NavigatorMenuContribution {

    @PostConstruct
    public void copyFile(IFile file, Shell shell) {
        URI uri = file.getRawLocationURI();
        Path path = Paths.get(uri);
        Path toPath = Paths.get(
            path.getParent().toString(), 
            "CopyOf_" + file.getName());
        try {
            Files.copy(path, toPath);
			
            // refresh the navigator
            file.getParent().refreshLocal(IResource.DEPTH_INFINITE, null);
        } catch (IOException | CoreException e) {
            MessageDialog.openError(
                shell, 
                "Failed to copy the file", 
                "Exception occurred on copying the file: "
                    + e.getLocalizedMessage());
        }
    }
}
  • Create a contribution to operate on an IFolder, e.g. FolderContentContribution as shown below
@Component(property = {
    "name=Folder Content",
    "description=Show the number of files in the selected folder" })
public class FolderContentContribution implements NavigatorMenuContribution {

    @PostConstruct
    public void showFolderContent(IFolder folder, Shell shell) {
        URI uri = folder.getRawLocationURI();
        Path path = Paths.get(uri);
        // close the stream via try-with-resources, as Files.list()
        // keeps the directory handle open until the stream is closed
        try (Stream<Path> files = Files.list(path)) {
            long count = files.count();
            MessageDialog.openInformation(
                shell,
                "Folder Content",
                String.format("The folder contains %d files", count));
        } catch (IOException e) {
            MessageDialog.openError(
                shell, 
                "Failed to retrieve the folder content", 
                "Exception occurred on retrieving the folder content: "
                    + e.getLocalizedMessage());
        }
    }
}
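The java.nio.file calls used in the two contributions can be tried outside of an OSGi runtime. One detail worth noting: Files.list returns a Stream that keeps the directory handle open, so it is safer to close it via try-with-resources. A minimal standalone sketch (class and method names are mine, not part of the example bundles):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class NioDemo {

    // create "CopyOf_<name>" next to the given file, as in FileCopyContribution
    static Path copyWithPrefix(Path file) throws IOException {
        Path target = file.resolveSibling("CopyOf_" + file.getFileName());
        return Files.copy(file, target);
    }

    // count the direct children of a directory, closing the stream properly
    static long countEntries(Path dir) throws IOException {
        try (Stream<Path> entries = Files.list(dir)) {
            return entries.count();
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("demo");
        Files.createFile(dir.resolve("a.txt"));
        copyWithPrefix(dir.resolve("a.txt"));
        System.out.println(countEntries(dir)); // 2: a.txt and CopyOf_a.txt
    }
}
```

In the Eclipse contributions themselves, the error handling via MessageDialog replaces the plain console output used here.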

If you start the application again like before, you will see an additional menu entry in the context menu for a file, and there is now even a menu entry in the context menu of a folder.

OSGi Dynamics

A nice side effect is that the solution supports OSGi dynamics. That means a contribution can come and go at runtime without the need to restart the Eclipse IDE. To verify this, open the Host OSGi Console (open the Console view and switch to the Host OSGi Console via the view menu).

Enter the following command to find the id of the org.fipro.contribution.extended bundle:

lb fipro

Then stop that bundle via

stop <id>

As an example, the console output looks like this in my environment:

osgi> lb fipro
START LEVEL 6
ID|State |Level|Name
649|Active | 4|Service (1.0.0.qualifier)
685|Active | 4|Integration Layer (1.0.0.qualifier)
689|Active | 4|Extended (1.0.0.qualifier)

osgi> stop 689

Now verify that the menu contributions for the folder and the File Copy menu entry are gone. If you start the bundle again via start <id> the menu entries are available again.

Maybe I am the only one who is so excited about this, but seeing OSGi dynamics supported more and more in the Eclipse IDE itself feels good.

Conclusion

With the Extended Contribution Pattern it is possible to create a framework that eases the collaboration between heterogeneous organisations. While you only need a few people that manage the Integration Layer and therefore know about Eclipse Platform details, every developer in the collaboration is able to contribute a functionality. As you can see above, the implementation of a contribution service is simple in terms of integration. This is by the way similar to how popular web frameworks are designed.

As I said in the introduction, the APP4MC project uses the Extended Contribution Pattern in various places. We have implemented a Model Visualization that shows a visualization of a selected AMALTHEA Model element, e.g. via JavaFX, PlantUML or plain SWT. You can get some more details in the APP4MC Online Help.

APP4MC 2.0 will also include context sensitive actions on selected AMALTHEA Model elements. So it is possible to contribute processing actions for a selected model element or actions to create model elements in a selected model element container.

You can also see that the combination of OSGi Declarative Services and the Eclipse 4 programming model brings a lot of benefits. And there was quite some progress over the last years to improve this. The implementation and usage of OSGi services becomes really convenient with the Eclipse 4 programming model, as you can easily consume services via injection (note the @Service annotation). The only thing to remember is that the PROTOTYPE scope is not yet supported in the Eclipse injection. This means the services are single instances, which prevents you from keeping state in your services for the Extended Contribution Pattern.

Finally some words about Eclipse 3.x vs. Eclipse 4.x. As an Eclipse Platform committer I have been using the Eclipse 4 programming model for several years. Since 2015 I have published articles about the migration from Eclipse 3 to Eclipse 4 and talked about that topic at conferences. But people still rely on the Eclipse 3 programming model and ask questions about Eclipse 4 migrations. IMHO there are several reasons why Eclipse 3 is still active in so many places:

  1. The Eclipse IDE itself as one of the biggest Eclipse Platform based products is based on Eclipse 3. Well, actually it is based on Eclipse 4, but still most of the plugins and biggest projects in that area are Eclipse 3 based (e.g. Navigator views, JDT, CDT, PDE, and so on), so there is the Compatibility Layer in place to support the backwards compatibility. As long as the Eclipse IDE itself and most of the major projects/plugins are based on Eclipse 3, a complete shift of projects that extend the IDE to Eclipse 4 will never happen.
  2. There are still more tutorials on the web that show how to do things with Eclipse 3 than for Eclipse 4. There are quite some Eclipse 4 related tutorials out there, and people like Lars Vogel, Olivier Prouvost, Jonas Helming and myself have published several of those. But it doesn’t seem to be enough for the overall community.
  3. There is more tooling available around Eclipse 3 (e.g. the number of wizards) than for Eclipse 4. To be honest, I am probably not the target audience for wizards, as I am mostly faster creating plug-ins without an example generated by a wizard. And once you are familiar with Eclipse 4, which is much simpler than the Eclipse 3 programming model, the existing wizards and tools are really sufficient.

In this blog post you should have seen that it is possible, and not even complicated, to extend an Eclipse 3.x based application like the Eclipse IDE with plain Eclipse 4.x mechanisms. If you look at techniques like Imperative Expressions and the contribution of model fragments via a manifest header, your contributing bundle does not need to contain a single Eclipse 3.x mechanism like extension points and the corresponding plugin.xml file.

This also means, if you still ask yourself whether you should migrate from Eclipse 3.x to Eclipse 4.x, just give it a try. Start with a small part and test what you can do. A migration is not a “big bang” scenario; you can do it incrementally. And remember, you probably won’t be able to get rid of everything, e.g. file based editors linked to the navigators, but you can improve several spots in your project.

Here are some useful links to previous blog posts if you are not yet familiar with all the topics included here:

The sources of this blog post can be found here.

Posted in Dirk Fauth, Eclipse, Java, OSGi, Other | Comments Off on Eclipse Extended Contribution Pattern

Eclipse RCP, Java 11, JAXB

With Java 11 several packages have been removed from the JRE itself, like JAXB. This means if you use JAXB in your Java application, you need to add the necessary bundles to your runtime. In an OSGi application this gets quite complicated, as you typically only declare a dependency on the API. The JAXB API and the JAXB implementation are separated, which is typically a good design. But the JAXBContext in the API bundle loads the implementation, which means the API has to know the implementation. This causes class loading issues that are hard to solve.

This topic is of course not new and there are already some explanations like this blog post or this topic on the equinox-dev mailing list. But as it still took me a while to get it working, I write this blog post to share my findings with others. And of course to persist my findings in my “external memory” if I need it in the future again. 🙂

The first step is to add the necessary bundles to your target platform. You can either consume it from an Eclipse p2 Update Site or directly from a Maven repository using the m2e PDE Integration feature.

Note:
If you open the .target file with the Generic Text Editor, you can simply paste one of the below blocks and then resolve the target definition, instead of using the Target Editor.

Using an Eclipse p2 Update Site you can add the necessary dependencies by adding the following block to your target definition.

<location includeAllPlatforms="true" includeConfigurePhase="false" includeMode="slicer" includeSource="true" type="InstallableUnit">
  <repository location="https://download.eclipse.org/releases/2020-12/"/>
    <unit id="jakarta.xml.bind" version="2.3.3.v20201118-1818"/>
    <unit id="com.sun.xml.bind" version="2.3.3.v20201118-1818"/>
    <unit id="javax.activation" version="1.2.2.v20201119-1642"/>
    <unit id="javax.xml" version="1.3.4.v201005080400"/>
</location>

Note:
The jakarta.xml.bind bundle from Orbit is a re-bundled version of the original bundle in Maven Central and unfortunately specifies a version constraint on some javax.xml packages. As the Java runtime does not specify a version on the javax.xml packages, the configuration will fail to resolve. To solve this you need to add the javax.xml bundle to your target definition and the product configuration.

For consuming the libraries directly from a Maven repository you can add the following block if you have the m2e PDE Integration feature installed. This way you could even use newer versions that are not yet available via p2 update site.

<location includeDependencyScope="compile" includeSource="true" missingManifest="generate" type="Maven">
  <groupId>com.sun.xml.bind</groupId>
  <artifactId>jaxb-impl</artifactId>
  <version>2.3.3</version>
  <type>jar</type>
</location>
<location includeDependencyScope="compile" includeSource="true" missingManifest="generate" type="Maven">
  <groupId>jakarta.xml.bind</groupId>
  <artifactId>jakarta.xml.bind-api</artifactId>
  <version>2.3.3</version>
  <type>jar</type>
</location>

Note:
If you don’t have a JavaSE-1.8 mapped in your Eclipse IDE, or your bundle has JavaSE-11 or higher set as Execution Environment, you need to specify the version constraint on the Import-Package statements to make PDE happy. Otherwise you will see some strange errors.

Note:
The Bundle-SymbolicName of the required bundles in Maven Central differs from the re-bundled versions in the Eclipse p2 Update Site. This needs to be kept in mind when including the bundles in the product. I will use the symbolic names of the bundles from Maven Central in the following sections.

Once the bundles are available in the target platform there are different ways to make JAXB work with Java 11 in your OSGi / Eclipse application.

Variant 1: Modify bundle and code

This is the variant that is most often described.

  1. Add the package com.sun.xml.bind.v2 to the imported packages of the bundle that uses JAXB
  2. Create the JAXBContext by using the classloader of the model object
    JAXBContext context = JAXBContext.newInstance(
        MyClass.class.getPackageName(),
        MyClass.class.getClassLoader());
  3. Place a jaxb.index file in the package that contains the model classes. This file contains the simple class names of all JAXB mapped classes. For more information about the format of this file, have a look at the javadoc of the JAXBContext#newInstance(String, ClassLoader) method.
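For illustration, assuming the model package contains the hypothetical classes Person and Address, the jaxb.index file placed in that package would simply list the simple class names, one per line:

```
Person
Address
```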

The following bundles need to be added to the product in order to make JAXB work with Java 11 in OSGi:

  • jakarta.activation-api
  • jakarta.xml.bind-api
  • com.sun.xml.bind.jaxb-impl

The downside of this variant is obviously that you have to modify code, and you have to add a dependency on a JAXB implementation in all places where JAXB is used. In case third-party libraries that you don’t have under your control are part of your product, this solution is probably not suitable. And you also cannot easily exchange the JAXB implementation with this approach.

Variant 2: jakarta.xml.bind-api fragment

In this variant you create a fragment named jaxb.impl.binding for the jakarta.xml.bind-api bundle that adds the package com.sun.xml.bind.v2 to the imported packages.

  • Create a Fragment Project
  • Use jakarta.xml.bind-api as the Fragment-Host
  • Add com.sun.xml.bind.v2 to the Import-Package manifest header

The resulting MANIFEST.MF should look similar to the following snippet:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: JAXB Impl Binding
Bundle-SymbolicName: jaxb.impl.binding
Bundle-Version: 1.0.0.qualifier
Fragment-Host: jakarta.xml.bind-api;bundle-version="2.3.3"
Automatic-Module-Name: jaxb.impl.binding
Bundle-RequiredExecutionEnvironment: JavaSE-11
Import-Package: com.sun.xml.bind.v2

The following bundles need to be added to the product in order to make JAXB work with Java 11 in OSGi:

  • jakarta.activation-api
  • jakarta.xml.bind-api
  • com.sun.xml.bind.jaxb-impl
  • jaxb.impl.binding

This variant seems to me the most comfortable one. There are no modifications required in the existing bundles and the dependency to the JAXB implementation is encapsulated in a fragment, which makes it easy to exchange if needed.

Variant 3: system.bundle fragment

With this variant you add the necessary bundles to the classloader the framework is started with.
Using bndtools this can be done via the -runpath instruction. The Equinox launcher does not know such an instruction. For an Eclipse RCP application you need to create a system.bundle fragment. Such a fragment contains the necessary jar files and exports the packages of the wrapped jars.
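For comparison, a hedged sketch of the bndtools approach: the -runpath instruction in a .bndrun file could look roughly like this (the exact bundle symbolic names and version ranges depend on the artifacts you use):

```
-runpath: \
    jakarta.xml.bind-api;version='[2.3.3,2.4.0)',\
    com.sun.xml.bind.jaxb-impl;version='[2.3.3,2.4.0)'
```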

  • Download the required jar files, e.g. from Maven Central, and place them in a folder named lib in the fragment project
    • jakarta.activation-api-1.2.2.jar
    • jakarta.xml.bind-api-2.3.3.jar
    • jaxb-impl-2.3.3.jar
  • Specify the Bundle-ClassPath manifest header to add the jars to the bundle classpath
  • Specify the Fragment-Host manifest header so the fragment is added to the system.bundle
  • Add the packages of the included libraries to the Export-Package manifest header

The resulting MANIFEST.MF should look similar to the following snippet:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Extension
Bundle-SymbolicName: jaxb.extension
Bundle-Version: 1.0.0.qualifier
Fragment-Host: system.bundle; extension:=framework
Automatic-Module-Name: jaxb.extension
Bundle-RequiredExecutionEnvironment: JavaSE-11
Bundle-ClassPath: lib/jakarta.activation-api-1.2.2.jar,
 lib/jakarta.xml.bind-api-2.3.3.jar,
 lib/jaxb-impl-2.3.3.jar,
 .
Export-Package: com.sun.istack,
 com.sun.istack.localization,
 com.sun.istack.logging,
 com.sun.xml.bind,
 com.sun.xml.bind.annotation,
 com.sun.xml.bind.api,
 com.sun.xml.bind.api.impl,
 com.sun.xml.bind.marshaller,
 com.sun.xml.bind.unmarshaller,
 com.sun.xml.bind.util,
 com.sun.xml.bind.v2,
 com.sun.xml.bind.v2.bytecode,
 com.sun.xml.bind.v2.model.annotation,
 com.sun.xml.bind.v2.model.core,
 com.sun.xml.bind.v2.model.impl,
 com.sun.xml.bind.v2.model.nav,
 com.sun.xml.bind.v2.model.runtime,
 com.sun.xml.bind.v2.model.util,
 com.sun.xml.bind.v2.runtime,
 com.sun.xml.bind.v2.runtime.output,
 com.sun.xml.bind.v2.runtime.property,
 com.sun.xml.bind.v2.runtime.reflect,
 com.sun.xml.bind.v2.runtime.reflect.opt,
 com.sun.xml.bind.v2.runtime.unmarshaller,
 com.sun.xml.bind.v2.schemagen,
 com.sun.xml.bind.v2.schemagen.episode,
 com.sun.xml.bind.v2.schemagen.xmlschema,
 com.sun.xml.bind.v2.util,
 com.sun.xml.txw2,
 com.sun.xml.txw2.annotation,
 com.sun.xml.txw2.output,
 javax.activation,
 javax.xml.bind,
 javax.xml.bind.annotation,
 javax.xml.bind.annotation.adapters,
 javax.xml.bind.attachment,
 javax.xml.bind.helpers,
 javax.xml.bind.util

If you add this system.bundle fragment to the product, JAXB works the same way it did with Java 8.

This variant has the downside that you have to manage the JAXB libraries that are wrapped by the system.bundle fragment yourself, instead of simply consuming it from a repository.

Conclusion

For me the creation of a jakarta.xml.bind-api fragment as shown in Variant 2 seems to be the most comfortable variant. At least it worked in my scenarios, and also the build using Tycho 2.2 and the resulting Eclipse RCP product worked.

If you need to support Java 8 and Java 11 with your product at the same time, you should consider specifying the binding fragment as multi-release jar as explained in this blog post. Further information about multi-release jars can be found here:
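As a quick reminder, a multi-release jar is marked by a dedicated manifest entry and ships Java-version-specific classes in versioned folders, roughly like this:

```
# in META-INF/MANIFEST.MF
Multi-Release: true

# jar layout: Java 11 specific classes override the default ones
META-INF/versions/11/...
```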

If you see any issues with the jakarta.xml.bind-api fragment approach that I have not identified yet, please let me know. Maybe I am missing something important that was not covered by my tests.

Posted in Dirk Fauth, Eclipse, Java, OSGi | 2 Comments

Inspecting the OSGi runtime – New ways for Eclipse projects

I often get asked how to find and solve issues in an OSGi runtime. Especially with regards to OSGi services. I then always answer that you have two options:

While the Gogo Shell is typically already part of an Eclipse application and can be activated by passing the -console parameter to the Program Arguments, the Webconsole is not available that easily. As Eclipse application projects are mostly still created using PDE, you have to use a target definition to configure the libraries used for development and deployment. In the past a target platform could only consume p2 repositories. That was especially important for Tycho builds, as the Directory locations that are also supported in a target definition were not supported by Tycho. As the Felix Webconsole is not available via a p2 update site, the only way to include it in an Eclipse application was to include the necessary jars locally somehow.

Luckily there were a lot of improvements in that area, and since Tycho 2.0 also other file-based locations are supported. And with Tycho 2.2 even Maven dependencies can be included directly. At the time of writing this blog post, 2.2 is not yet released. But the support for Maven dependencies in a Target Definition is already available in m2e. With this enhancement the inclusion of the Felix Webconsole becomes a lot easier.

Install the m2e PDE Integration

First you need to install the m2e PDE Integration into the Eclipse IDE.

  • Help – Install New Software…
  • Use the m2e Update Site: https://download.eclipse.org/technology/m2e/releases/latest/
  • Select m2e PDE Integration
  • Finish the installation

After the installation it can be used in the PDE Target Editor.

Interlude: Target Editor

IMHO the PDE Target Editor is the second worst editor in PDE, right after the Component Definition Editor. The latter luckily doesn’t need to be used anymore, as PDE added support for the OSGi DS Component annotations. As a replacement for the Target Editor I used the Target Platform DSL. Unfortunately the DSL does not seem to be actively maintained anymore, and therefore the new Maven location support is missing. But I found out that you can use the Generic Editor for the .target file and get similar features as with the DSL. For me the most important thing is to avoid the dialog for selecting artifacts from an update site, as this one really has its problems. So the nice thing about the DSL is the code completion for unit id and version, which is also working pretty well in the Generic Editor, and which could make the DSL obsolete.

So with the new Maven location support and the Generic Editor, I now suggest using the Target Editor for adding the Maven locations and switching to the Generic Editor for adding InstallableUnits from p2 repositories.

Add the Webconsole artifacts to the Target Platform

Open a Target Definition file with the Target Editor and add the following artifacts:

  • commons-fileupload (1.4)
  • commons-io (2.4)
  • org.apache.felix.http.jetty (4.1.4)
  • org.apache.felix.inventory (1.0.6)
  • org.apache.felix.http.servlet-api (1.1.2)
  • org.apache.felix.webconsole.plugins.ds (2.1.0)
  • org.apache.felix.webconsole.plugins.event (1.1.8)
  • org.apache.felix.webconsole (4.6.0)

Note:

  • If you set the Dependencies scope to compile you get the transitive dependencies added too.
  • Unfortunately the dependencies of org.apache.felix.webconsole are not configured well in the pom.xml.
    • You will transitively get commons-fileupload in version 1.3.3, which does not satisfy the Import-Package statement in org.apache.felix.webconsole.
    • You will transitively get commons-io in version 2.6, which does not satisfy the Import-Package statement in org.apache.felix.webconsole.
    • org.apache.felix.inventory is missing.
  • I am using the Felix Http Jetty bundle as it is easier to configure than adding all the necessary Jetty bundles separately. But of course you can also use the Eclipse Jetty bundles directly from a p2 Update Site.
    • This unfortunately brings another dependency issue. The Felix Jetty bundle defines the Require-Capability header osgi.contract=JavaServlet. While the javax.servlet-api bundle that is transitively included by Maven would satisfy the technical requirements (Import-Package), it is missing the capability header. To satisfy the capability you need to use org.apache.felix.http.servlet-api from Maven Central. Alternatively you can directly use the Eclipse Jetty bundles from an Eclipse Update Site and the javax.servlet bundle provided by Eclipse, as the Eclipse Jetty bundles do not specify the Require-Capability header.
  • If you do not find the transitively included Maven dependencies as bundles, for example when creating a product or feature definition, try to reload the target definition again.

To add the Maven locations you need to:

  • Click Add… in the Target Editor
  • Select Maven
  • Provide the necessary information to select the artifact from Maven Central

The m2e PDE Integration has a nice feature to insert the values. If you have the Maven dependency XML structure in the clipboard, the values in the dialog are inserted automatically. To make it easier for adopters, here are the dependencies. Note that every dependency needs to be added separately.

<dependency>
    <groupId>commons-fileupload</groupId>
    <artifactId>commons-fileupload</artifactId>
    <version>1.4</version>
</dependency>

<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.4</version>
</dependency>

<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.inventory</artifactId>
    <version>1.0.6</version>
</dependency>

<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.http.jetty</artifactId>
    <version>4.1.4</version>
</dependency>

<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.http.servlet-api</artifactId>
    <version>1.1.2</version>
</dependency>

<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.webconsole.plugins.ds</artifactId>
    <version>2.1.0</version>
</dependency>

<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.webconsole.plugins.event</artifactId>
    <version>1.1.8</version>
</dependency>

<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.webconsole</artifactId>
    <version>4.6.0</version>
    <scope>provided</scope>
</dependency>

Configure the product

If you have a feature based product you can create a new feature that includes the necessary bundles. This feature should include the following bundles:

  • org.apache.felix.http.servlet-api
  • org.apache.commons.commons-fileupload
  • org.apache.commons.io (2.4.0)
  • org.apache.felix.http.jetty
  • org.apache.felix.inventory
  • org.apache.felix.webconsole
  • org.apache.felix.webconsole.plugins.ds
  • org.apache.felix.webconsole.plugins.event

If you have a product based on bundles, ensure that these bundles are part of the Contents. Note that org.apache.commons.io needs to be included in version 2.4.0 to satisfy the dependencies of org.apache.felix.webconsole.

As Equinox has the policy to NOT activate all bundles on startup, you need to configure that the necessary bundles are started automatically:

  • Open the .product file
  • Switch to the Configuration tab
  • In the Start Levels section click Add… and add the following bundles
    • org.apache.felix.scr
    • org.apache.felix.http.jetty
    • org.apache.felix.webconsole
    • org.apache.felix.webconsole.plugins.ds
    • org.apache.felix.webconsole.plugins.event
  • Set Auto-Start for all bundles to true

Now you can launch the Eclipse application from the Overview tab via Launch an Eclipse application. The webconsole will be available via http://localhost:8080/system/console/
If you are asked for a login you can use the default admin/admin.

In the main bar of the Webconsole UI you can expand OSGi and find the sub-sections Bundles, Configuration, Events, Components, Log Service and Services. There you get detailed information on the corresponding topics inside the current OSGi runtime. This way you can inspect and fix possible issues in a much more comfortable way.

Conclusion

Inspecting an OSGi runtime is much more comfortable using the Apache Felix Webconsole. With the new m2e PDE Integration, Maven artifacts can finally be added as part of the target platform. This makes including the Apache Felix Webconsole much easier than it was before. And I am sure there are a lot more use cases where this new feature makes the life of Eclipse developers easier. Thanks to Christoph Läubrich who added that feature lately.

Further information on the m2e PDE Integration can be found here:

Posted in Dirk Fauth, Eclipse, Java, OSGi | Comments Off on Inspecting the OSGi runtime – New ways for Eclipse projects

Build REST services with OSGi JAX-RS whiteboard

Some years ago I had a requirement to access the OSGi services inside my Eclipse application via web interface. Back then I used the OSGi HTTP Whiteboard Specification and wrapped a servlet around my service. Of course I wrote a blog post about this and named it Access OSGi Services via web interface.

That blog post was published before OSGi R7 was released. And at that time there was no simple alternative available. With R7 the JAX-RS Whiteboard Specification was added, which provides a way to achieve the same goal by using JAX-RS, which is way simpler than implementing Servlets. I gave a talk at the EclipseCon Europe 2018 with the title How to connect your OSGi application. In this talk I showed how you create a connection to your OSGi application using different specifications, namely

  • HTTP Service / HTTP Whiteboard
  • Remote Services (using ECF JAX-RS Distribution Provider)
  • JAX-RS Whiteboard

Unfortunately the recording of that talk failed, so I can only link to the slides and my GitHub repository that contains the code I used to show the different approaches in action.

In the Panorama project, in which I am currently involved, one of our goals is to provide cloud services for model processing and evaluation. As a first step we want to publish APP4MC services as cloud services (more information in the Eclipse Newsletter December 2020). There are services contained in APP4MC bundles that are free from dependencies to the Eclipse Runtime and do not require any Extension Points, and there are services in bundles that have dependencies to plug-ins that use Extension Points. But all the services we want to publish as cloud services are OSGi declarative services. While there are numerous ways and frameworks to create REST based web services (e.g. Spring Boot or Microprofile to just name two of them), I was searching for a way to do this in OSGi. Especially because I want to reduce the configuration and implementation efforts with regards to the runtime infrastructure for consuming the existing OSGi services of the project.

For the services that have dependencies to Extension Points and require a running Eclipse Runtime, I was forced to use the HTTP Service / HTTP Whiteboard approach. The main reason for this is that because of this dependency I needed to stick with a PDE project layout. Unfortunately there is no JAX-RS Whiteboard implementation available in Eclipse, and therefore none available via a p2 Update Site. Maybe it would be possible somehow, but actually the solution should be to get rid of Extension Points and the requirement for a running Eclipse runtime.

But this blog post is about JAX-RS Whiteboard and not about project layouts and Extension Points vs. Declarative Services. So I will focus on the services that have a clean dependency structure. The setup should be as comfortable as possible to be able to focus on the REST service implementation, and not struggle with the infrastructure too much.

Create the project structure

To create the project structure we can follow the steps described in the enRoute Tutorial.

mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=project \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = jaxrs
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier.jaxrs
  • After setting the value for package you will get the information that for the two projects that will be created, the following defaults will be used:
    • app-artifactId: app
    • app-target-java-version: 8
    • impl-artifactId: impl

Note:
IMHO app and impl are not good values for project names. Although they are sub projects inside a Maven project, once imported into the IDE this leads to confusion if you have multiple such projects in one workspace. By entering ‘n’ the defaults are declined and you need to insert the values for all parameters again. Additionally you can then specify the artifactId of the app and the impl project, and the target Java version you want to develop with.

If you forget to specify different values for app and impl at creation time and want to change it afterwards, you will have several things to consider. Even with the refactoring capabilities of the IDE, you need to ensure that you do not forget something, like the fact that the name of the .bndrun file needs to be reflected in the pom.xml file.
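If you prefer a non-interactive setup, the same values can presumably be passed in batch mode. This is only a sketch: the -D property names for the additional parameters (app-artifactId, app-target-java-version, impl-artifactId) are an assumption derived from the interactive prompts above.

```shell
# non-interactive archetype generation (property names for the
# additional parameters are assumed from the interactive prompts)
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=project \
    -DarchetypeVersion=7.0.0 \
    -DinteractiveMode=false \
    -DgroupId=org.fipro.modifier \
    -DartifactId=jaxrs \
    -Dversion=1.0-SNAPSHOT \
    -Dpackage=org.fipro.modifier.jaxrs \
    -Dapp-artifactId=jaxrs.app \
    -Dapp-target-java-version=8 \
    -Dimpl-artifactId=jaxrs.impl
```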

  • After accepting the inserted values with ‘y’ the following project skeletons are created:
    • project parent folder named by the entered artifactId jaxrs
    • the app project
    • the impl project

Now the projects can be imported into the IDE of your choice. As the projects are plain Maven-based Java projects, you can use any IDE. But of course my choice is Eclipse with Bndtools.

  • Import the created projects via
    File – Import… – Maven – Existing Maven Projects
  • Select the created jaxrs directory

Once the import is done you should double check the dependencies of the created skeletons. Some of the dependencies and transitive dependencies in the generated pom.xml files are not up-to-date. For example Felix Jetty is included in version 4.0.6 (September 2018), while the most current version is 4.1.4 (November 2020). You can check this for example by opening the Repositories view in the Bndtools perspective and expanding the Maven Dependencies section. The libraries listed inside Maven Dependencies are added from the Maven configuration of the created project. To update the version of one of those libraries, you need to add the corresponding configuration to the dependencyManagement section of the jaxrs/pom.xml, e.g.

<dependency>
  <groupId>org.apache.felix</groupId>
  <artifactId>org.apache.felix.http.jetty</artifactId>
  <version>4.1.4</version>
</dependency>

You should also update the version of the bnd Maven plugins. The generated pom.xml files use version 4.1.0, which is pretty outdated. At the time of writing this blog post the most recent version is 5.2.0.

  • Open jaxrs/pom.xml
  • Locate bnd.version in the properties section
  • Update 4.1.0 to 5.2.0
  • Right click on the jaxrs project – Maven – Update Project…
    • Have all projects checked
    • OK

Implementing the OSGi service

As the goal is to wrap an existing OSGi Declarative Service to make it accessible as a web service, we use the M.U.S.E (Most Useless Service Ever) introduced in my Getting Started with OSGi Declarative Services blog post. Unfortunately the combination of Bndtools workspace projects with Bndtools Maven projects does not work well, mainly because Bndtools workspace projects are not automatically available as Maven modules. So we create the API and the service implementation projects also by using the OSGi enRoute archetypes.

Note:
If you have an OSGi service bundle already available via Maven, you can also use that one by adding the dependency to the pom.xml files and skip this section.

  • Go to the newly created jaxrs directory and create an API module using the api archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=api \
    -DarchetypeVersion=7.0.0
  • groupId = org.fipro.modifier
  • artifactId = api
  • version = 1.0-SNAPSHOT
  • package = org.fipro.modifier.api
  • Then create the service implementation module using the ds-component archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=ds-component \
    -DarchetypeVersion=7.0.0
  • groupId = org.fipro.modifier
  • artifactId = inverter
  • version = 1.0-SNAPSHOT
  • package = org.fipro.modifier.inverter
  • Import the created projects via
    File – Import… – Maven – Existing Maven Projects
  • Select the jaxrs directory

Service interface

  • In the Bndtools Explorer locate the api module and expand to the package org.fipro.modifier.api
  • Implement the StringModifier interface:
public interface StringModifier {
	String modify(String input);
}
  • You can delete the ConsumerInterface and the ProviderInterface which were created by the archetype.
  • Ensure that you do NOT delete the package-info.java file in the org.fipro.modifier.api package. It configures that the package is exported. If this file is missing, the package is a Private-Package and therefore not usable by other OSGi bundles.

    The package-info.java file and its content are part of the Bundle Annotations introduced with OSGi R7; have a look at the OSGi R7 specification if you are interested in more detailed information.
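For reference, a minimal package-info.java for the api package could look like the following sketch. The version value is an assumption; adjust it to your needs.

```java
// exports the package from the bundle via the R7 Bundle Annotations;
// without this file the package stays a Private-Package
@org.osgi.annotation.bundle.Export
@org.osgi.annotation.versioning.Version("1.0.0")
package org.fipro.modifier.api;
```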

Service implementation

  • In the Bndtools Explorer locate the inverter module.
  • Open the pom.xml file and add the dependency to the api module in the dependencies section.
<dependency>
  <groupId>org.fipro.modifier</groupId>
  <artifactId>api</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
  • Expand to the package org.fipro.modifier.inverter
  • Implement the StringInverter service:
@Component
public class StringInverter implements StringModifier {

	@Override
	public String modify(String input) {
		return new StringBuilder(input).reverse().toString();
	}
}
  • You can delete the ComponentImpl class that was created by the archetype.
  • Note that the package does not contain a package-info.java file, as the service implementation is typically NOT exposed.
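The service logic itself is plain Java and can be tried without any OSGi runtime. A minimal sketch (StringInverterDemo is a hypothetical helper class, not part of the tutorial projects):

```java
public class StringInverterDemo {

    // same logic as the StringInverter service, without the OSGi wiring
    static String invert(String input) {
        return new StringBuilder(input).reverse().toString();
    }
}
```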

Implementing the REST service

After the projects are imported to the IDE and the OSGi service to consume is available, we can start implementing the REST based service.

  • In the Bndtools Explorer locate the impl module.
  • Open the pom.xml file and add the dependency to the api module in the dependencies section.
<dependency>
  <groupId>org.fipro.modifier</groupId>
  <artifactId>api</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
  • Expand to the package org.fipro.modifier.jaxrs
  • Implement the InverterRestService:
    • Add the @Component annotation to the class definition and set the service parameter, so the class is registered as a service and not as an immediate component.
    • Add the @JaxrsResource annotation to the class definition to mark it as a JAX-RS whiteboard resource.
      This will add the service property osgi.jaxrs.resource=true which means this service must be processed by the JAX-RS whiteboard.
    • Get a StringModifier injected using the @Reference annotation.
    • Implement a JAX-RS resource method that uses the StringModifier.
@Component(service=InverterRestService.class)
@JaxrsResource
public class InverterRestService {
    
	@Reference
	StringModifier modifier;
	
	@GET
	@Path("modify/{input}")
	public String modify(@PathParam("input") String input) {
		return modifier.modify(input);
	}
}

Interlude: PROTOTYPE Scope

When you read the specification, you will see that the example service uses the PROTOTYPE scope, while the example services in the OSGi enRoute tutorials do not. So I was wondering when to use the PROTOTYPE scope for JAX-RS Whiteboard services. I checked the specification and asked on the OSGi mailing list; thanks to Raymond Augé who helped me understand it better. In short: if your component implementation is stateless and all necessary information is injected into the JAX-RS resource methods, you can avoid the PROTOTYPE scope. If you have a stateful implementation, for example one that gets JAX-RS context objects for a request or session injected into a field, you have to use the PROTOTYPE scope to ensure that this information is only used by that single request. The example service in the specification therefore does not need the PROTOTYPE scope, as it is a very simple example. But it is also not wrong to use the PROTOTYPE scope even for simple services. It aligns the OSGi service design (where typically every component instance is a singleton) with the JAX-RS design, as JAX-RS natively expects resources to be re-created on every request.
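To illustrate the stateful case, here is a hedged sketch of a resource that would require the PROTOTYPE scope. StatefulResource is a hypothetical class, not part of this tutorial; the point is the context object injected into a field.

```java
// hypothetical example: the request is injected into a field, so every
// request needs its own component instance -> PROTOTYPE scope required
@Component(service = StatefulResource.class, scope = ServiceScope.PROTOTYPE)
@JaxrsResource
public class StatefulResource {

    @Context
    HttpServletRequest request; // per-request state held in a field

    @GET
    @Path("client")
    public String client() {
        return request.getRemoteAddr();
    }
}
```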

Prepare the application project

In the application project we need to ensure that our service is available. In case the StringInverter from above was implemented, the inverter module needs to be added to the dependencies section of the app/pom.xml file. If you want to use another service that can be consumed via Maven, you of course need to add that dependency.

  • In the Bndtools Explorer locate the app module.
  • Open the pom.xml file and add the dependency to the inverter module in the dependencies section.
<dependency>
  <groupId>org.fipro.modifier</groupId>
  <artifactId>inverter</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
  • Open app.bndrun
  • Add org.fipro.modifier.inverter to the Run Requirements
  • Click on Resolve and double check that the modules api, impl and inverter are part of the Run Bundles
  • Click on Run OSGi
  • Open a browser and navigate to http://localhost:8080/modify/fubar to see the new REST based service in action.
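Instead of the browser you can of course also use curl from the command line. As the resource simply returns the modified string, the inverter should answer a GET request with the reversed input:

```shell
# call the REST resource; the StringInverter reverses the input,
# so "fubar" should come back as "rabuf"
curl http://localhost:8080/modify/fubar
```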

JSON support

As returning a plain String is quite uncommon for a web service, we now extend our setup to return the result as JSON. We will use Jackson for this, so we need to add it to the dependencies of the impl module. The simplest way is to use org.apache.aries.jax.rs.jackson.

  • In the Bndtools Explorer locate the impl module.
  • Open the pom.xml file and add the dependency to org.apache.aries.jax.rs.jackson in the dependencies section.
<dependency>
    <groupId>org.apache.aries.jax.rs</groupId>
    <artifactId>org.apache.aries.jax.rs.jackson</artifactId>
    <version>1.0.2</version>
</dependency>

Alternative: Custom Converter

Alternatively you can implement your own converter and register it as a JAX-RS Whiteboard Extension.

  • In the Bndtools Explorer locate the impl module.
  • Open the pom.xml file and add the dependency to Jackson in the dependencies section.
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.12.0</version>
</dependency>
  • Implement the JacksonJsonConverter:
    • Add the @Component annotation to the class definition and specify the PROTOTYPE scope parameter to ensure that multiple instances can be requested.
    • Add the @JaxrsExtension annotation to the class definition to mark the service as a JAX-RS extension type that should be processed by the JAX-RS whiteboard.
    • Add the @JaxrsMediaType(APPLICATION_JSON) annotation to the class definition to mark the component as providing a serializer capable of supporting the named media type, in this case the standard media type for JSON.
    • Internally make use of the OSGi Converter Specification for the implementation.
@Component(scope = PROTOTYPE)
@JaxrsExtension
@JaxrsMediaType(APPLICATION_JSON)
public class JacksonJsonConverter<T> implements MessageBodyReader<T>, MessageBodyWriter<T> {

    @Reference(service=LoggerFactory.class)
    private Logger logger;
	
    private final Converter converter = Converters.newConverterBuilder()
            .rule(String.class, this::toJson)
            .rule(this::toObject)
            .build();

    private ObjectMapper mapper = new ObjectMapper();
    
    private String toJson(Object value, Type targetType) {
        try {
            return mapper.writeValueAsString(value);
        } catch (JsonProcessingException e) {
            logger.error("error on JSON creation", e);
            return e.getLocalizedMessage();
        }
    }

    private Object toObject(Object o, Type t) {
        try {
	    if (List.class.getName().equals(t.getTypeName())) {
                return this.mapper.readValue((String) o, List.class);
            }
            return this.mapper.readValue((String) o, String.class);
        } catch (IOException e) {
            logger.error("error on JSON parsing", e);
        }
        return CANNOT_HANDLE;
    }

    @Override
    public boolean isWriteable(
        Class<?> c, Type t, Annotation[] a, MediaType mediaType) {

        return APPLICATION_JSON_TYPE.isCompatible(mediaType) 
            || mediaType.getSubtype().endsWith("+json");
    }

    @Override
    public boolean isReadable(
        Class<?> c, Type t, Annotation[] a, MediaType mediaType) {

        return APPLICATION_JSON_TYPE.isCompatible(mediaType) 
            || mediaType.getSubtype().endsWith("+json");
    }

    @Override
    public void writeTo(
        T o, Class<?> arg1, Type arg2, Annotation[] arg3, MediaType arg4,
        MultivaluedMap<String, java.lang.Object> arg5, OutputStream out)
        throws IOException, WebApplicationException {

        String json = converter.convert(o).to(String.class);
        out.write(json.getBytes());
    }

    @SuppressWarnings("unchecked")
    @Override
    public T readFrom(
        Class<T> arg0, Type arg1, Annotation[] arg2, MediaType arg3, 
        MultivaluedMap<String, String> arg4, InputStream in) 
        throws IOException, WebApplicationException {

    	BufferedReader reader = 
            new BufferedReader(new InputStreamReader(in));
        return (T) converter.convert(reader.readLine()).to(arg1);
    }
}

Update the InverterRestService

  • Add the JAX-RS @Produces(MediaType.APPLICATION_JSON) annotation to the class definition to specify that JSON responses are created.
  • Add the @JSONRequired annotation to the class definition to mark this class to require JSON media type support.
  • Optional:
    Get multiple StringModifier injected and return a List of Strings as a result of the REST resource.
@Component(service=InverterRestService.class)
@JaxrsResource
@Produces(MediaType.APPLICATION_JSON)
@JSONRequired
public class InverterRestService {
	
	@Reference
	private volatile List<StringModifier> modifier;
	
	@GET
	@Path("modify/{input}")
	public List<String> modify(@PathParam("input") String input) {
		return modifier.stream()
				.map(mod -> mod.modify(input))
				.collect(Collectors.toList());
	}
}
  • Optional:
    Implement an additional StringModifier in the inverter module.
@Component
public class Upper implements StringModifier {

	@Override
	public String modify(String input) {
		return input.toUpperCase();
	}
}
  • In the Bndtools Explorer locate the app module.
  • Open app.bndrun
  • If you use org.apache.aries.jax.rs.jackson, add it to the Run Requirements
  • Click on Resolve to ensure that the Jackson libraries are part of the Run Bundles
  • Click on Run OSGi
  • Open a browser and navigate to http://localhost:8080/modify/fubar to see the updated result.

Multipart file upload

In the Panorama project the REST based cloud services are designed as file processing services. So you upload a file, process it and download the result. This way you can for example migrate Amalthea Model files to a newer version, perform a static analysis of an Amalthea Model and even transform an Amalthea Model to some executable format and execute the result for simulation scenarios.

When searching for file uploads with REST and Java, you only find information on how to do this with either Jersey or Apache CXF. But even though the Aries JAX-RS Whiteboard reference implementation is based on Apache CXF, none of the tutorials worked for me. The reason is that the Aries JAX-RS Whiteboard completely hides the underlying Apache CXF implementation. Thanks to Tim Ward who helped me on the OSGi mailing list, I was able to solve this. Therefore I want to share the solution here.

Multipart file upload requires support from the underlying servlet container. With the OSGi enRoute Maven archetypes, Apache Felix HTTP Jetty is included as the implementation of the R7 OSGi HTTP Service and the R7 OSGi HTTP Whiteboard Specification. So a Jetty is part of the setup and multipart file uploads are supported.

Enable Multipart Support

According to the HTTP Whiteboard Specification, Multipart File Uploads need to be enabled via the corresponding component properties. This can be done for example by creating a custom JAX-RS Whiteboard Application and adding the @HttpWhiteboardServletMultipart Component Property Type annotation with the corresponding attributes.

Note:
In this tutorial I will not use this approach, but for completeness I want to share how the creation and usage of a JAX-RS Whiteboard application can be done.

@Component(service=Application.class)
@JaxrsApplicationBase("app4mc")
@JaxrsName("app4mcMigration")
@HttpWhiteboardServletMultipart(enabled = true)
public class MigrationApplication extends Application {}

In this case the JAX-RS Whiteboard resource needs to be registered on the created application by using the @JaxrsApplicationSelect Component Property Type annotation.

@Component(service=Migration.class)
@JaxrsResource
@JaxrsApplicationSelect("(osgi.jaxrs.name=app4mcMigration)")
public class Migration {
...
}

Creating custom JAX-RS Whiteboard Applications makes sense if you want to publish multiple applications in one installation/server. In a scenario where only one application is published in isolation, e.g. one REST based service in one container (e.g. Docker), the creation of a custom application is not necessary. Instead it is sufficient to configure the default application provided by the Aries JAX-RS Whiteboard implementation via the Configuration Admin. The PID and the available configuration properties are listed here.

Configuring an OSGi service programmatically via Configuration Admin is not very intuitive. While it is quite powerful to change configurations at runtime, it feels uncomfortable to provide a configuration to a component from the outside. Luckily with R7 the Configurator Specification was introduced to deal with this. Using the Configurator, the component configuration can be provided using a resource in JSON format.

  • First we need to specify the requirement on the Configurator. This can be done by using the @RequireConfigurator Bundle Annotation. Using the archetype this is already done in the app module.
    • In the Bndtools Explorer locate the app module.
    • Locate the package-info.java file in src/main/java/config.
    • Verify that it looks like the following snippet.
@RequireConfigurator
package config;

import org.osgi.service.configurator.annotations.RequireConfigurator;
  • Now locate the configuration.json file in src/main/resources/OSGI-INF/configurator
  • Modify the file to contain the multipart configuration:
    • org.apache.aries.jax.rs.whiteboard.default is the PID of the default application
    • osgi.http.whiteboard.servlet.multipart.enabled is the component property for enabling multipart file uploads
{
    ":configurator:resource-version" : 1,
    ":configurator:symbolic-name" : "org.fipro.modifier.app.config",
    ":configurator:version" : "1.0-SNAPSHOT",
    
    "org.apache.aries.jax.rs.whiteboard.default" : {
        "osgi.http.whiteboard.servlet.multipart.enabled" : "true"
    }
}
  • Open app.bndrun
    • Add org.fipro.modifier.app to the Run Requirements
    • Click Resolve to recalculate the Run Bundles

Note:
While writing this blog post and testing the tutorial I noticed that on Resolve the inverter module was sometimes not resolved, for whatever reason. To ensure that the application is started with all necessary bundles, add impl, app and inverter to the Run Requirements. Double check after Resolve that the following bundles are part of the Run Bundles:

  • org.fipro.modifier.api
  • org.fipro.modifier.app
  • org.fipro.modifier.impl
  • org.fipro.modifier.inverter

Process Multipart File Uploads

As the JAX-RS standard does not contain multipart support, we need to fall back to the Servlet API. Fortunately we can get JAX-RS context objects injected as method parameters or fields by using the @Context JAX-RS annotation. For the multipart support we can get the HttpServletRequest injected and extract the information from there.

  • Update the InverterRestService
  • Add the following JAX-RS resource method
@POST
@Path("modify/upload")
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Produces(MediaType.TEXT_PLAIN)
public Response upload(@Context HttpServletRequest request) 
        throws IOException, ServletException {

    // get the part with name "file" received within
    // a multipart/form-data POST request
    Part part = request.getPart("file");
    if (part != null 
            && part.getSubmittedFileName() != null 
            && part.getSubmittedFileName().length() > 0) {

        StringBuilder inputBuilder = new StringBuilder();
        try (InputStream is = part.getInputStream();
                BufferedReader br = 
                    new BufferedReader(new InputStreamReader(is))) {

            String line;
            while ((line = br.readLine()) != null) {
                inputBuilder.append(line).append("\n");
            }
        }
		
        // modify file content
        String input = inputBuilder.toString();
        List<String> modified = modifier.stream()
            .map(mod -> mod.modify(input))
            .collect(Collectors.toList());

        return Response.ok(String.join("\n\n", modified)).build();
    }

    return Response.status(Status.PRECONDITION_FAILED).build();
}
  • @Consumes(MediaType.MULTIPART_FORM_DATA)
    Specify that this REST resource consumes multipart/form-data.
  • @Produces(MediaType.TEXT_PLAIN)
    Specify that the result is plain text, which is for this use case the easiest way for returning the modified file content.
  • @Context HttpServletRequest request
    The HttpServletRequest is injected as method parameter.
  • Part part = request.getPart("file")
    Extract the Part with the name file (which is actually the form parameter name) from the HttpServletRequest.
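The read loop from the resource method can be tried in isolation as plain Java, independent of the servlet wiring. A sketch that mirrors the loop above (StreamReadDemo is a hypothetical helper class):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;

public class StreamReadDemo {

    // mirrors the read loop in the upload method: collect all lines,
    // normalizing line endings to '\n'
    static String readContent(InputStream is) {
        StringBuilder inputBuilder = new StringBuilder();
        try (BufferedReader br = new BufferedReader(new InputStreamReader(is))) {
            String line;
            while ((line = br.readLine()) != null) {
                inputBuilder.append(line).append("\n");
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return inputBuilder.toString();
    }
}
```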

If you are using a tool like Postman, you can test if the multipart upload is working by starting the app via app.bndrun and executing a POST request on http://localhost:8080/modify/upload
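If you prefer the command line, the multipart upload can also be exercised with curl. This assumes a local text file input.txt exists; the form parameter name must be file, matching request.getPart("file") in the resource method.

```shell
# POST a file as multipart/form-data; the part name "file" matches
# the name used in request.getPart("file")
curl -F "file=@input.txt" http://localhost:8080/modify/upload
```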

Interlude: Static Resources

To also be able to test the upload without additional tools, we publish a simple form as a static resource in our application. We use the HTTP Whiteboard Specification to register an HTML form as static resource with our REST service. For this add the @HttpWhiteboardResource component property type annotation to the InverterRestService.

@HttpWhiteboardResource(pattern = "/files/*", prefix = "static")

With this configuration all requests to URLs with the /files path are mapped to resources in the static folder. The next step is therefore to add the static form to the project:

  • In the Bndtools Explorer locate the impl module.
  • Right click src/main/java – New – Folder
  • Select the main folder in the tree
  • Add resources/static in the Folder name field
  • Finish
  • Right click on the created resources folder in the Bndtools Explorer
  • Build Path – Use as Source Folder
  • Create a new file upload.html in src/main/resources/static
<html>
<body>
    <h1>File Upload with JAX-RS</h1>
    <form
        action="http://localhost:8080/modify/upload"
        method="post"
        enctype="multipart/form-data">

        <p>
            Select a file : <input type="file" name="file" size="45"/>
        </p>

        <input type="submit" value="Upload It"/>
    </form>
</body>
</html>

After starting the app via app.bndrun you can open a browser and navigate to http://localhost:8080/files/upload.html
Now you can select a file (don’t use a binary file) and upload it to see the modification result of the REST service.

Debugging / Inspection

To debug your REST based service you can start the application by using Debug OSGi instead of Run OSGi in the app.bndrun. But in the OSGi context you often face issues even before you can debug code. For this the app archetype creates an additional debug run configuration. The debug.bndrun file is located next to the app.bndrun file in the app module.

  • In the Bndtools Explorer locate the app module.
  • Open debug.bndrun
  • Click on Resolve
  • Click on Run OSGi

With the debug run configuration, additional features like the Gogo Shell and the Webconsole are enabled to inspect the runtime.

This allows you to interact with the Gogo Shell in the Console View, or even more comfortably via the Webconsole. For the latter open a browser, navigate to http://localhost:8080/system/console and log in with the default username/password admin/admin. Using the Webconsole you can check which bundles are installed and in which state they are. You can also inspect the available OSGi DS Components and check the active configurations.

Build

As the project setup is a plain Java/Maven project, the build is pretty easy:

  • In the Bndtools Explorer locate the jaxrs module (the top level project).
  • Right click – Run As – Maven build…
  • Enter clean verify in the Goals field
  • Run

From the command line:

  • Switch to the jaxrs directory that was created by the archetype
  • Execute mvn clean verify

Note:
It can happen that an error occurs on building the app module if you followed the steps in this tutorial exactly. The reason is that the build detects a change in the Run Bundles of the app.bndrun file, although it is just a difference in the ordering of the bundles. To solve this, open the app.bndrun file, remove all entries from the Run Bundles and hit Resolve again. After that the order of the Run Bundles will match the one in the build.

Note:
This build process works because we used the Eclipse IDE with Bndtools. If you are using another IDE or working only on the command line, have a look at the OSGi enRoute Microservices Tutorial that explains the separate steps for building from command line.

After the build succeeds you will find the resulting app.jar in jaxrs/app/target. Execute the following line to start the self-executable jar from the command line if you are located in the jaxrs folder:

java -jar app/target/app.jar

If you also want to build the debug configuration, you need to enable this in the pom.xml file of the app module:

  • In the Bndtools Explorer locate the app module.
  • Open pom.xml
  • In the build/plugins section update the bnd-export-maven-plugin and add the debug.bndrun to the bndruns.
<plugin>
    <groupId>biz.aQute.bnd</groupId>
    <artifactId>bnd-export-maven-plugin</artifactId>
    <configuration>
        <bndruns>
            <bndrun>app.bndrun</bndrun>
            <bndrun>debug.bndrun</bndrun>
        </bndruns>
    </configuration>
</plugin>

Executing the build again, you will now also find a debug.jar in the target folder of the app module, which you can use to inspect the OSGi runtime.

Summary

While setting up this tutorial I faced several issues that mainly came from missing information or misunderstandings. Luckily the OSGi community was really helpful in solving this. So my contribution back is to write this blog post to help others that struggle with similar issues. The key takeaways are:

  • Using the OSGi enRoute Maven archetypes we have plain Java Maven projects. That means:
    • There is no Bundle Descriptor File (.bnd), so the package-info.java file is an important source for the MANIFEST.MF creation.
    • Dependencies to other modules need to be specified in the pom.xml files. This also includes modules in the same workspace.

Note:
The Maven project structure also causes quite some headaches if you want to wrap OSGi services from Eclipse projects like APP4MC. Usually Eclipse projects publish their results as p2 update sites and not via Maven, and for Maven projects it is not possible to consume p2 update sites. Luckily more and more projects publish their results on Maven Central, and the APP4MC project plans to do this as well. We are currently cleaning up the dependencies to make it possible to at least consume the model implementation easily from any Java based project. As long as dependencies are not available via Maven Central, the only way to solve the build is to install the artifacts in the local repository. This can be done by building and installing the resulting artifacts locally via mvn clean install. Alternatively you can use the maven-install-plugin, which can even be integrated into your Maven build if you add the artifact to install to the source code repository. Thanks to Neil Bartlett who gave me the necessary pointer on this topic.

  • With OSGi R7 there are quite some interesting new specifications that, in combination, make development with OSGi a lot more comfortable. The ones used in this tutorial are the JAX-RS Whiteboard, the HTTP Whiteboard, the Configurator, the Converter and the Bundle Annotations.
  • Using the Maven archetypes and the OSGi R7 specifications, implementing JAX-RS REST based services is similar to approaches with other frameworks like Spring Boot or Microprofile. And if you want to wrap existing OSGi services, it is definitely the most comfortable one. If consuming OSGi services is not needed, well then every framework has its pros and cons.

The sources of this tutorial are available on GitHub.

For an extended example have a look at the APP4MC Cloud Services.

Now I have a blog post about HTTP Service / HTTP Whiteboard and JAX-RS Whiteboard. The still missing blog post about Remote Services is not forgotten, but obviously I need more time to write about it, as it is the most complicated specification in OSGi. So stay tuned for that one. 🙂

Posted in Dirk Fauth, Java, OSGi | Comments Off on Build REST services with OSGi JAX-RS whiteboard

NatTable + Eclipse Collections = Performance & Memory improvements ?

Some time ago I got reports from NatTable users about high memory consumption when using NatTable with huge data sets, especially when using trees, the row hide/show feature and/or the row grouping feature. Typically I tended to say that this is because of the huge data set in memory, not because of the NatTable implementation. But as a good open source developer I take such reports seriously, so I verified the statement to be sure. I updated one of the NatTable examples that combines all three features to show about 2 million entries. Then I modified some row heights, collapsed tree nodes and hid some rows. After checking the memory consumption I was surprised. The diagram below shows the result: the heap usage goes up to and beyond 1.5 GB on scrolling. In between I performed a GC and scrolled again, which causes those peaks and valleys.

A more detailed inspection reveals that the high memory consumption is not because of the data in memory itself. There are a lot of primitive wrapper objects and internal objects in the map implementation that consume a big portion of the memory, as you can see in the following image.

Note:
Primitive wrapper objects have a higher memory consumption than primitive values themselves. As there are already good articles about that topic available, I will not repeat them here. If you are interested in more details on the topic of Primitives vs. Objects, you can have a look at Baeldung for example.

So I started to check the NatTable implementation in search of the memory issue. And I found some causes. In several places there are internal caches for the index-position mapping to improve the rendering performance. Also the row heights and column widths are stored internally in a collection if a user resized them. Additionally, some scaling operations were incorrectly using Double objects instead of primitive values to avoid rounding issues on scaling.

From my experience in an Android project I remembered an article that described a similar issue. In short: Java has no collections for primitive types, therefore primitive values need to be stored via wrapper objects. In Android the SparseArray was introduced to deal with this issue. So I was searching for primitive collections in Java and found Eclipse Collections. To be honest, I had heard about Eclipse Collections before, but I always thought the standard Java Collections are already good enough, so why check some third-party collections? Small spoiler: I was wrong!

Looking at the website of Eclipse Collections, they state that they have a better performance and a better memory footprint than the standard Java Collections. But a good developer and architect does not simply trust statements like “take my library and all your problems are solved”. So I started my evaluation of Eclipse Collections to see if the memory and performance issues in NatTable can be solved by using them. Additionally I was looking at the Primitive Type Streams introduced with Java 8 to see if some of the issues can already be solved with that API.

Creation of test data

Right at the beginning of my evaluation I noticed the first issue: which way should be used to create a huge collection of test data to process? I read about some discussions on using the good old for-loop vs. IntStream. So I started with some basic performance measurements to compare those two. The goal was to create test data with values from 0 to 1.000.000 where every 100.000th entry is missing.

The for-loop for creating an int[] with the described values looks like this:

int[] values = new int[999_991];
int index = 0;
for (int i = 0; i < 1_000_000; i++) {
    if (i == 0 || i % 100_000 != 0) {
        values[index] = i;
        index++;
    }
}

Using the IntStream API it looks like this:

int[] values = IntStream.range(0, 1_000_000)
        .filter(i -> i == 0 || i % 100_000 != 0)
        .toArray();

Additionally I wanted to compare the performance for creating an ArrayList<Integer> via for-loop and IntStream.

ArrayList<Integer> values = new ArrayList<>(999_991);
for (int i = 0; i < 1_000_000; i++) {
    if (i == 0 || i % 100_000 != 0) {
        values.add(i);
    }
}

and using the IntStream API:

List<Integer> values = IntStream.range(0, 1_000_000)
        .filter(i -> (i == 0 || i % 100_000 != 0))
        .boxed()
        .collect(Collectors.toList());

The result is interesting, although not surprising. Using the for-loop for creating an int[] is the clear winner. The usage of the IntStream is not bad, but definitely slower than the for-loop. So for recurring tasks and huge ranges a refactoring from for-loop to IntStream is not a good idea. The creation of collections with wrapper objects is of course even slower, as the wrapper objects need to be created via boxing.

collecting int[] via for-loop 1 ms
collecting int[] via IntStream 4 ms
collecting List<Integer> via for-loop 7 ms
collecting List<Integer> via IntStream 13 ms

I also tested the usage of HashSet and TreeSet for the wrapper objects, as in NatTable I typically need distinct values, often sorted for further processing. HashSet as well as TreeSet have a worse performance in the creation scenario, and TreeSet is the clear loser here.

collecting HashSet<Integer> via for-loop 16 ms
collecting TreeSet<Integer> via for-loop 189 ms
collecting Set<Integer> via IntStream 26 ms 

Note:
Running the tests in a single execution, the numbers are worse, which is caused by the VM ramp-up and class loading. Executing the tests 10 times, the average numbers are similar to the above, but still skewed because the first execution is so much slower. Even increasing the number of executions to 1.000, the average values stay roughly the same and sometimes even get drastically better because of the VM optimizations for code that gets executed often. The numbers presented here are therefore the average out of 100 executions.
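The measurement approach can be sketched like this (a reconstruction with names of my choosing, not the actual test code):

```java
public class BenchmarkSketch {

    // Run the task several times and return the average execution time
    // in milliseconds. The first executions are dominated by VM ramp-up
    // and class loading; averaging over many runs smooths that out.
    static double averageMillis(int executions, Runnable task) {
        long total = 0;
        for (int i = 0; i < executions; i++) {
            long start = System.nanoTime();
            task.run();
            total += System.nanoTime() - start;
        }
        return total / (executions * 1_000_000d);
    }

    public static void main(String[] args) {
        double avg = averageMillis(100, () -> {
            int[] values = new int[999_991];
            int index = 0;
            for (int i = 0; i < 1_000_000; i++) {
                if (i == 0 || i % 100_000 != 0) {
                    values[index++] = i;
                }
            }
        });
        System.out.println("collecting int[] via for-loop " + avg + " ms");
    }
}
```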

After evaluating the performance of standard Java API for creating test data, I looked at the Eclipse Collections – Primitive Collections. I compared MutableIntList with MutableIntSet and used the different factory methods for creating the test data:

  • Iteration
    directly operate on an initial empty MutableIntList

    MutableIntList values = IntLists.mutable.withInitialCapacity(999_991);
    for (int i = 0; i < 1_000_000; i++) {
        if (i == 0 || i % 100_000 != 0) {
            values.add(i);
        }
    }

    Note: The method withInitialCapacity(int) was introduced with Eclipse Collections 10.3. In previous versions it is not possible to specify an initial capacity using the primitive type factories, you can only create an empty MutableIntList or MutableIntSet using empty(). Without specifying the initial capacity, the iteration approach takes 3 ms for the MutableIntList and 32 ms for the MutableIntSet.

  • Factory method of(int...) / with(int...)
    MutableIntList values = IntLists.mutable.of(inputArray);
  • Factory method ofAll(Iterable<Integer>) / withAll(Iterable<Integer>)
    MutableIntList values = IntLists.mutable.ofAll(inputCollection);
  • Factory method ofAll(IntStream) / withAll(IntStream)
    MutableIntList values = IntLists.mutable.ofAll(
        IntStream
            .range(0, 1_000_000)
            .filter(i -> (i == 0 || i % 100_000 != 0)));

To create a MutableIntSet use the IntSets utility class:

MutableIntSet values = IntSets.mutable.xxx

Note:
For the factory methods of course the generation of the input also needs to be taken into account. So for creating data from scratch the time for creating the array or the collection needs to be added on top.

The result shows that at creation time the MutableIntList is much faster than the MutableIntSet. And the usage of the factory method with an int[] parameter is faster than using an Integer collection, an IntStream or the direct operation on the MutableIntList. The reason for this is probably that with an int[] the MutableIntList instance is actually a wrapper around the int[]. In this case you also need to be careful, as modifications done via the primitive collection are directly reflected outside of the collection.

creating MutableIntList via iteration 1 ms
creating MutableIntList of int[] 0 ms
creating MutableIntList via Integer collection 4 ms
creating MutableIntList via IntStream 6 ms

creating MutableIntSet via iteration 21 ms
creating MutableIntSet of int[] 32 ms
creating MutableIntSet of Integer collection 39 ms
creating MutableIntSet via IntStream 38 ms

In several use cases the usage of a Set would be nicer to directly avoid duplicates in the collection. In NatTable a sorted order is also needed often, but there is no TreeSet equivalent in the primitive collections. But the MutableIntList comes with some nice API to deal with this. Via distinct() we get a new MutableIntList that only contains distinct values, via sortThis() the MutableIntList is directly sorted.

The following call returns a new MutableIntList with distinct values in a sorted order, similar to a TreeSet.

MutableIntList uniqueSorted = values.distinct().sortThis();

When changing this in the test, the time for creating a MutableIntList with distinct values in a sorted order increases to about 27 ms. Still less than creating a MutableIntSet. But as our input array is already sorted and only contains distinct values, this measurement is probably not really meaningful.
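For comparison, the same distinct-and-sorted result can also be produced with the Java primitive streams; this variant was not part of the measurement above, it is just a sketch to show the stdlib equivalent:

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class DistinctSortedSketch {

    // Java primitive streams equivalent of values.distinct().sortThis()
    static int[] uniqueSorted(int[] values) {
        return IntStream.of(values)
                .distinct()
                .sorted()
                .toArray();
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(uniqueSorted(new int[] { 3, 1, 2, 3, 1 })));
        // [1, 2, 3]
    }
}
```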

The key takeaways in this part are:

  • The good old for-loop still has the best performance. It is also faster than IntStream.range().
  • The MutableIntList has a better performance at creation time compared to MutableIntSet. This is the same with default Java List and Set implementations.
  • The MutableIntList has some nice API for modifications compared to handling a primitive array, which makes it more comfortable to use.

Usage of primitive value collections

As already mentioned, Eclipse Collections comes with a nice and comfortable API similar to the Java Stream API. But here I don’t want to go into more detail on that API. Instead I want to see how the Eclipse Collections perform for operations that are also available via the standard Java Collections API, and compare them with the performance of the Java Collections. By doing this I want to ensure that by using Eclipse Collections the performance is getting better, or at least is not becoming worse than with the default Java collections.

contains()

The first use case is the check if a value is contained in a collection. This is done by the contains() method.

boolean found = valuesCollection.contains(search);

For the array we compare the old-school for-loop

boolean found = false;
for (int i : valuesArray) {
    if (i == search) {
        found = true;
        break;
    }
}

with the primitive streams approach

boolean found = Arrays.stream(valuesArray).anyMatch(x -> x == search);

Additionally I added a test for using Arrays.binarySearch(). But the result is not 100% comparable, as binarySearch() requires the array to be sorted in advance. Since our array already contains the test data in sorted order, this test works.

boolean found = Arrays.binarySearch(valuesArray, search) >= 0;

We use the collections/arrays that we created before and first check for the value 450.000 which exists in the middle of the collection. Below you find the execution times of the different approaches.

contains in List 1 ms
contains in Set 0 ms
contains in int[] stream 2 ms
contains in int[] iteration 1 ms
contains in int[] binary search 0 ms
contains in MutableIntList 0 ms
contains in MutableIntSet 0 ms

Then we execute the same setup and check for the value 2.000.000 which does not exist in the collection. This way the whole collection/array needs to be processed, while in the above case the search stops once the value is found.

contains in List 2 ms
contains in Set 0 ms
contains in int[] stream 2 ms
contains in int[] iteration 1 ms
contains in int[] binary search 0 ms

contains in MutableIntList 0 ms
contains in MutableIntSet 0 ms

What we can see here is that the Java Primitive Streams have the worst performance for the contains() case and the Eclipse Collections perform best. But actually there is not much difference in the performance.

indexOf()

For people with a good knowledge of the Java Collections API the specific measurement of indexOf() might look strange. This is because for example the ArrayList internally uses indexOf() in the contains() implementation. And we have tested that before. But the Eclipse Primitive Collections are not using indexOf() in contains(). They operate on the internal array. Also indexOf() is implemented differently without the use of the equals() method. So a dedicated verification is useful. Below are the results for testing an existing value and a not existing value.
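The int[] iteration variant used for comparison is a plain linear scan; a sketch (my reconstruction, the method name is mine):

```java
public class IndexOfSketch {

    // Linear scan returning the index of the first occurrence of search,
    // or -1 if the value is not contained. This mirrors what
    // ArrayList.indexOf() does, but without equals() calls and unboxing.
    static int indexOf(int[] values, int search) {
        for (int i = 0; i < values.length; i++) {
            if (values[i] == search) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] values = { 10, 20, 30 };
        System.out.println(indexOf(values, 20)); // 1
        System.out.println(indexOf(values, 42)); // -1
    }
}
```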

Check indexOf() 450_000
indexOf in collection 0 ms
indexOf in int[] iteration 0 ms
indexOf in MutableIntList 0 ms

Check indexOf() 2_000_000
indexOf in collection 1 ms
indexOf in int[] iteration 0 ms
indexOf in MutableIntList 0 ms

The results are actually not surprising. Also in this case there is not much difference in the performance.

Note:
There is no indexOf() for Sets and of course we can also not get an index when using Java Primitive Streams. So this test only compares ArrayList, iteration on an int[] and the MutableIntList. I also skipped testing binarySearch() here, as the results would be equal to the contains() test with the same restrictions.

removeAll()

Removing multiple items from a List is a big performance issue. Before my investigation here I was not aware of how serious this issue is. What I already knew from past optimizations is that removeAll() on an ArrayList is much worse than iterating manually over the items to remove and removing each item individually.

For the test I am creating the base collection with 1.000.000 entries and a collection with the values from 200.000 to 299.999 that should be removed. First I execute the iteration to remove each item individually

for (Integer r : toRemoveList) {
    valueCollection.remove(r);
}

then I execute the test with removeAll()

valueCollection.removeAll(toRemoveList);

The tests are executed on an ArrayList, a HashSet, a MutableIntList and a MutableIntSet.

Additionally I added a test that uses the Primitive Stream API to filter and create a new array from the result. As this is not a modification of the original collection, the result is not 100% comparable to the other executions. But it may be interesting to see anyhow (even with a dependency on binarySearch()).

int[] result = Arrays.stream(values)
    .filter(v -> (Arrays.binarySearch(toRemove, v) < 0))
    .toArray();

Note:
The code for removing items from an array is not very comfortable. Of course we could also use some library like Apache Commons with primitive type arrays. But this is about comparing plain Java Collections with Eclipse Collections. Therefore I decided to skip this.
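As a side note, and not one of the measured variants above: with the plain Java Collections the usual way to avoid the quadratic cost of repeated remove() calls on an ArrayList is to do the contains check against a HashSet and remove everything in a single pass via removeIf(). A minimal sketch (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class RemoveAllSketch {

    // Single pass over the list with O(1) lookups against a Set,
    // instead of one O(n) remove() call per item to delete.
    static List<Integer> removeAll(List<Integer> values, Set<Integer> toRemove) {
        values.removeIf(toRemove::contains);
        return values;
    }

    public static void main(String[] args) {
        List<Integer> values = IntStream.range(0, 1_000_000)
                .boxed()
                .collect(Collectors.toCollection(ArrayList::new));

        Set<Integer> toRemove = IntStream.range(200_000, 300_000)
                .boxed()
                .collect(Collectors.toSet());

        removeAll(values, toRemove);
        System.out.println(values.size()); // 900000
    }
}
```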

Below are the execution results:

remove all by primitive stream 21 ms
remove all by iteration List 29045 ms

remove all List 64068 ms
remove all by iteration Set 1 ms
remove all Set 1 ms
remove all by iteration MutableIntList 13602 ms
remove all MutableIntList 21 ms
remove all by iteration MutableIntSet 2 ms
remove all MutableIntSet 2 ms

You can see that the iteration approach on an ArrayList is almost twice as fast as using removeAll(). But still the performance is very bad. The performance for removeAll() as well as the iteration approach on a Set and a MutableIntSet are very good. Interestingly the call to removeAll() on a MutableIntList is also acceptable, while the iteration approach seems to have a performance issue.

The key takeaways in this part are:

  • The performance of the Eclipse Collections is at least as good as the standard Java Collections. In several cases even far better.
  • Performance workarounds that were introduced for the standard Java Collections can cancel out the performance improvements if the code is simply migrated to Eclipse Collections without revisiting those workarounds.

Memory consumption

From the above measurements and observations I can say that in most cases there is a performance improvement when using Eclipse Collections compared to the standard Java Collections. And even for use cases where no big improvement can be seen, there is a small improvement or at least no performance decrease. So I decided to integrate Eclipse Collections in NatTable and use the Primitive Collections in every place where primitive values were stored in Java Collections. Additionally I fixed all places where wrapper objects were created unnecessarily. Then I executed the example from the beginning again to measure the memory consumption. And I was really impressed!

As you can see in the above graph, the heap usage stays below 250 MB even on scrolling. Remember, before using the Eclipse Primitive Collections, the heap usage grew up to 1.5 GB. Going into more detail we can see that a lot of objects that were created for internal management are not created anymore. So now it is really the data model that should be visualized by NatTable that takes most of the memory, not NatTable itself anymore.

One thing I noticed in the tests is that there is still quite some memory allocated after a MutableIntList or MutableIntSet is cleared via clear(). Basically it is the same with the Java Collections. The collection allocates the space for the needed size. For the Eclipse Collections this means the internal array keeps its size, as clear() only fills the array with 0. To release this memory you need to assign a new empty collection instance.
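The same effect can be illustrated with a plain ArrayList: clear() empties the list, but the backing array keeps its grown capacity, while assigning a fresh instance makes the old backing array eligible for garbage collection. A minimal sketch (names are mine, not NatTable code):

```java
import java.util.ArrayList;
import java.util.List;

public class ClearVsNewInstance {

    private List<Integer> cache = new ArrayList<>();

    void fill(int count) {
        for (int i = 0; i < count; i++) {
            cache.add(i);
        }
    }

    // clear() empties the list, but the internal Object[] keeps its
    // grown capacity, so the allocated memory is not released
    void reset() {
        cache.clear();
    }

    // assigning a fresh instance makes the old backing array
    // eligible for garbage collection
    void release() {
        cache = new ArrayList<>();
    }

    int size() {
        return cache.size();
    }

    public static void main(String[] args) {
        ClearVsNewInstance c = new ClearVsNewInstance();
        c.fill(1_000_000);
        c.reset();   // size 0, backing array still allocated
        c.release(); // size 0, memory can be collected
        System.out.println(c.size()); // 0
    }
}
```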

Note:
The concrete IntArrayList class contains a trimToSize() method. But as you typically work against the interfaces when using the factories, that method is not accessible, and also not all implementations contain such a method.

ArrayList vs. MutableList

The data to show in a NatTable is accessed by an IDataProvider. This is an abstraction to the underlying data structure, so that users can choose the data structure they like. The most common data structure in use is a List, and NatTable provides the ListDataProvider to simplify the usage of a List as underlying data structure. With the ListDataProvider as an abstraction there is no iteration internally. Instead there is a point access per cell via a nested for loop:

for (int column = 0; column < dataProvider.getColumnCount(); column++) {
    for (int row = 0; row < dataProvider.getRowCount(); row++) {
        dataProvider.getDataValue(column, row);
    }
}

For the ListDataProvider this means that for every cell first the row object is retrieved from the List, then the property of the row object is accessed. As NatTable is a virtual table by design, it actually never happens that all values from the underlying data structure are accessed. Only the data that is currently visible is accessed at once. While an existing performance test in the NatTable performance test suite showed an impressive performance boost by switching from ArrayList to MutableList, a more detailed benchmark revealed that both List implementations have a similar performance. I can’t tell why the existing test showed such a big difference, probably some side effects in the test setup, as the numbers swap if the test execution order is swapped.

Executing the benchmark with Java 8 and Java 11 on the other hand shows a difference. Using Java 11 as runtime the tests execute about 50% faster for both ArrayList and MutableList. And it also shows that with Java 11 it makes a difference if the nested iteration iterates column or row first. While with Java 8 the execution time was similar, with Java 11 the row first approach shows a better performance.

Conclusion

I was sceptical at the beginning, but I have to admit that Eclipse Collections is really interesting and useful when it comes to performance and memory usage optimizations with collections in Java. The API is really handy and similar to the Java Streams API, which makes the usage quite comfortable.

My takeaways after the verification:

  • For short living collections it is often better to either use primitive type arrays, primitive streams or the MutableIntList, which has the better performance at creation compared to the MutableIntSet.
  • For storing primitive values use MutableIntSet or MutableIntList. This gives a similar memory consumption to using primitive type arrays, while granting a rich API for modifications at runtime.
  • Make use of the Eclipse Collections API to make implementation and processing as efficient as possible.
  • When migrating from the Java Collections API to Eclipse Collections, ensure that no collection-related workarounds remain in the current code base. Otherwise you might lose big performance improvements.
  • Even when using a library like Eclipse Collections you need to take care of your memory management to avoid leaks at runtime, e.g. create a new instance in favour of clearing huge collections.

Based on the observations above I decided that Eclipse Collections will become a major dependency for NatTable Core. With NatTable 2.0 it will be part of the NatTable Core Feature. I am sure that internally even more optimizations are possible by using Eclipse Collections. And I will investigate where and how this can be done. So you can expect even more improvements in that area in the future.

In case you think my tests are incorrect or need to be improved, or you simply want to verify my statements, here are the links to the classes I used for my verification:

In the example class I increased the number of data rows to about 2.000.000 via this code:

List<Person> personsWithAddress = PersonService.getFixedPersons();
for (int i = 1; i < 100_000; i++) {
    personsWithAddress.addAll(PersonService.getFixedPersons());
}

and I increased the row groups via these two lines of code:

rowGroupHeaderLayer.addGroup("Flanders", 0, 8 * 100_000);
rowGroupHeaderLayer.addGroup("Simpsons", 8 * 100_000, 10 * 100_000);

If some of my observations are wrong or the code can be made even better, please let me know! I am always willing to learn!

Thanks to the Eclipse Collections team for this library!

If you are interested in learning more about Eclipse Collections, you might want to check out the Eclipse Collections Kata.

Posted in Dirk Fauth, Eclipse, Java | Tagged , , | 2 Comments

NatTable – dynamic scaling enhancements

The last weeks I worked on harmonizing the scaling capabilities of NatTable. The first goal was to provide scaled versions of all internal NatTable images. This caused an update of several NatTable images like the checkbox, which you will notice in the next major release. To test the changes I implemented a basic dynamic scaling, which by accident and some additional modification became the new zoom feature in NatTable. I will give a short introduction to the new feature here, so early adopters have a chance to test it in different scenarios before the next major release is published.

To enable the UI bindings for dynamic scaling / zooming the newly introduced ScalingUiBindingConfiguration needs to be added to the NatTable.

natTable.addConfiguration(
    new ScalingUiBindingConfiguration(natTable));

This will add a MouseWheelListener and some key bindings to zoom in/out:

  • CTRL + mousewheel up = zoom in
  • CTRL + mousewheel down = zoom out
  • CTRL + ‘+’ = zoom in
  • CTRL + ‘-’ = zoom out
  • CTRL + ‘0’ = reset zoom

The dynamic scaling can be triggered programmatically by executing the ConfigureScalingCommand on the NatTable instance. This command has existed for quite a while, but it was mainly used internally to align the NatTable scaling with the display scaling. I have introduced new default IDpiConverter implementations to make it easier to trigger dynamic scaling:

  • DefaultHorizontalDpiConverter
    Provides the horizontal dots per inch of the default display.
  • DefaultVerticalDpiConverter
    Provides the vertical dots per inch of the default display.
  • FixedScalingDpiConverter
    Can be created with a DPI value to set a custom scaling.

At initialization time, NatTable internally fires a ConfigureScalingCommand with the default IDpiConverter to align the scaling with the display settings.

As long as only text is included in the table, registering the ScalingUiBindingConfiguration is all you have to do. Once ICellPainter implementations are used that render images, some additional work has to be done. The reason for this is that for performance and memory reasons the images are referenced in the painter and not requested for every rendering operation. As painters are not part of the event handling, they cannot simply be updated. Also for several reasons there are mechanisms that avoid applying the registered configurations multiple times.

There are three ways to style a NatTable, and as of now this requires three different ways to handle dynamic scaling updates for image painters.

  1. AbstractRegistryConfiguration
    This is the default way, which has existed for a long time. Most of the default configurations provide the styling configuration this way. As there is no way to identify which configuration registers an ICellPainter and how the instances are created, the ScalingUiBindingConfiguration needs to be initialized with an updater that knows which steps to perform.

    natTable.addConfiguration(
      new ScalingUiBindingConfiguration(natTable, configRegistry -> {
    
        // we need to re-create the CheckBoxPainter
        // to reflect the scaling factor on the checkboxes
        configRegistry.registerConfigAttribute(
            CellConfigAttributes.CELL_PAINTER,
            new CheckBoxPainter(),
            DisplayMode.NORMAL,
            "MARRIED");
    
      }));
  2. Theme styling
    In a ThemeConfiguration the styling options for a NatTable are collected in one place. Previously the ICellPainter instance creation was done at member initialization, which was quite static. Therefore the ICellPainter instance creation was moved to a new method named createPainterInstances(), so the painter update on scaling can be performed without any additional effort. For custom painter configurations this means that they should be added to a theme via IThemeExtension.

    natTable.addConfiguration(
        new ScalingUiBindingConfiguration(natTable));
    
    // additional configurations
    
    natTable.configure();
    
    ...
    
    IThemeExtension customThemeExtension = new IThemeExtension() {
    
        @Override
        public void registerStyles(IConfigRegistry configRegistry) {
            configRegistry.registerConfigAttribute(
                CellConfigAttributes.CELL_PAINTER,
                new CheckBoxPainter(),
                DisplayMode.NORMAL,
                "MARRIED");
        }
    
        @Override
        public void unregisterStyles(IConfigRegistry configRegistry) {
            configRegistry.unregisterConfigAttribute(
                CellConfigAttributes.CELL_PAINTER,
                DisplayMode.NORMAL,
                "MARRIED");
        }
    };
    
    ThemeConfiguration modernTheme = 
        new ModernNatTableThemeConfiguration();
    modernTheme.addThemeExtension(customThemeExtension);
    
    natTable.setTheme(modernTheme);
  3. CSS styling
    The CSS styling support in NatTable already manages the painter instance creation. The only thing to do here is to register a command handler that triggers the CSS apply operation actively. Otherwise the images will scale only on interactions with the UI.

    natTable.registerCommandHandler(
        new CSSConfigureScalingCommandHandler(natTable));

I have tested several scenarios, and the current state of development looks quite good. But of course I am not sure if I tested everything and found every possible edge case. Therefore it would be nice to get some feedback from early adopters if the new zoom feature is stable or not. The p2 update site with the current development snapshot can be found on the NatTable SNAPSHOTS page. From build number 900 on the feature is included. Any issues found can be reported on the corresponding Bugzilla ticket 560802.

Please also note that with the newly introduced zooming capability I have dropped the ZoomLayer. It only increased the cell dimensions, but not the font or the images. Therefore it was not functional (maybe never finished) IMHO, and to avoid confusion in the future I have deleted it now.

Posted in Dirk Fauth, Eclipse, Java | Tagged , | Comments Off on NatTable – dynamic scaling enhancements

Building a “headless RCP” application with Tycho

Recently I got the request to create a “headless RCP” application from an existing Eclipse project. I was reading several posts on that and saw that a lot of people use the term “headless RCP”. First of all I have to say that “headless RCP” is a contradiction in itself. RCP means Rich Client Platform. And a rich client is typically characterized by having a graphical user interface. A headless application on the other hand is an application with a command line interface, so the characteristic here is to have no graphical user interface. When people are talking about a “headless RCP” application, they mean to create a command line application based on code that was created for an RCP application, but without the GUI. And that actually means they want to create an OSGi application based on Equinox.

For such a scenario I would typically recommend to use bndtools or at least plain Java with the bnd Maven plugins. But there are scenarios where this is not possible, e.g. if your whole project is an Eclipse RCP project, which currently forces you to use the PDE tooling, and you only want to extract some parts/services to a command line tool. Well, one could also suggest to separate those parts into a separate workspace where bndtools is used and consume those parts in the RCP workspace. But that increases the complexity of the development environment, as you need to deal with two different toolings for one project.

In this blog post I will explain how to create a headless product out of an Eclipse RCP project (PDE based) and how to build it automatically with Tycho. And I will also show a nice benefit provided by the bnd Maven plugins on top of it.

Let’s start with the basics. A headless application provides functionality via the command line. In an OSGi application that means to have some services that can be triggered on the command line. If your functionality is based on Eclipse Extension Points, I suggest to convert them to OSGi Declarative Services. This has several benefits, one of them being that the creation of a headless application is much easier. That said, this tutorial is based on using OSGi Declarative Services. If you are not yet familiar with that, give my Getting Started with OSGi Declarative Services a try. I will use the basic bundles from the PDE variant for the headless product here.

Product Definition

For the automated product build with Tycho we need a product definition. Of course with some special configuration parameters as we actually do not have a product in Eclipse RCP terms.

  • Create the product project
    • Main Menu → File → New → Project → General → Project
    • Set name to org.fipro.headless.product
    • Ensure that the project is created in the same location as the other projects.
    • Click Finish
  • Create a new product configuration
    • Right click on project → New → Product Configuration
    • Set the filename to org.fipro.headless.product
    • Select Create configuration file with basic settings
    • Click Finish
  • Configure the product
    • Overview tab
      • ID = org.fipro.headless
      • Version = 1.0.0.qualifier
      • Uncheck The product includes native launcher artifacts
      • Leave Product and Application empty
        Product and Application are used in RCP products, and therefore not needed for a headless OSGi command line application.
      • This product configuration is based on: plug-ins
        Note:
        You can also create a product configuration that is based on features. For simplicity we use the simple plug-ins variant.
    • Contents tab
      • Add the following bundles/plug-ins:
      • Custom functionality
        • org.fipro.inverter.api
        • org.fipro.inverter.command
        • org.fipro.inverter.provider
      • OSGi console
        • org.apache.felix.gogo.command
        • org.apache.felix.gogo.runtime
        • org.apache.felix.gogo.shell
        • org.eclipse.equinox.console
      • Equinox OSGi Framework with Felix SCR for Declarative Services support
        • org.eclipse.osgi
        • org.eclipse.osgi.services
        • org.eclipse.osgi.util
        • org.apache.felix.scr
    • Configuration tab
      • Start Levels
        • org.apache.felix.scr, StartLevel = 0, Auto-Start = true
          This is necessary because Equinox has the policy to not automatically activate any bundle. Bundles are only activated if a class is directly requested from it. But the Service Component Runtime is never required directly, so without that setting, org.apache.felix.scr will never get activated.
      • Properties
        • eclipse.ignoreApp = true
          Tells Equinox to skip trying to start an Eclipse application.
        • osgi.noShutdown = true
          The OSGi framework will not be shut down after the Eclipse application has ended. You can find further information about these properties in the Equinox Framework QuickStart Guide and the Eclipse Platform Help.
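As a rough sketch (assumed contents; the exact file is generated by the build and depends on the Eclipse version and the bundle list), the config.ini produced from these settings will contain entries along these lines:

```properties
# sketch of the generated configuration/config.ini
eclipse.ignoreApp=true
osgi.noShutdown=true
# org.apache.felix.scr marked for auto-start so the SCR gets activated
osgi.bundles=org.apache.felix.scr@start,org.fipro.inverter.api,org.fipro.inverter.command,org.fipro.inverter.provider
```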

Note:
If you want to launch the application from within the IDE via the Overview tab → Launch an Eclipse application, you need to provide the parameters as launching arguments instead of configuration properties. Running a command line application from within the IDE doesn’t make much sense, though: you would either have to pass the command line parameters to process as launch arguments, or activate the OSGi console to be able to interact with the application. Neither should be part of the final build result. To verify the setup in advance, however, you can add the following to the Launching tab:

  • Program Arguments
    • -console
  • VM Arguments
    • -Declipse.ignoreApp=true -Dosgi.noShutdown=true

When adding the parameters in the Launching tab instead of the Configuration tab, the configurations are added to the eclipse.ini in the root folder, not to the config.ini in the configuration folder. When starting the application via jar, the eclipse.ini in the root folder is not inspected.

Tycho build

To build the product with Tycho, you don’t need any specific configuration. You simply build it by using the tycho-p2-repository-plugin and the tycho-p2-director-plugin, like you do with an Eclipse product. This is for example explained here.

Create a pom.xml in org.fipro.headless.product.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.fipro</groupId>
    <artifactId>org.fipro.parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>

  <groupId>org.fipro</groupId>
  <artifactId>org.fipro.headless</artifactId>
  <packaging>eclipse-repository</packaging>
  <version>1.0.0-SNAPSHOT</version>

  <build>
    <plugins>
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-p2-repository-plugin</artifactId>
        <version>${tycho.version}</version>
        <configuration>
          <includeAllDependencies>true</includeAllDependencies>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-p2-director-plugin</artifactId>
        <version>${tycho.version}</version>
        <executions>
          <execution>
            <id>materialize-products</id>
            <goals>
              <goal>materialize-products</goal>
            </goals>
          </execution>
          <execution>
            <id>archive-products</id>
            <goals>
              <goal>archive-products</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

For more information about building with Tycho, have a look at the vogella Tycho tutorial.

Running the build via mvn clean verify should create the resulting product in the folder org.fipro.headless.product/target/products. The archive file org.fipro.headless-1.0.0-SNAPSHOT.zip contains the product artifacts and the p2 related artifacts created by the build process. For the headless application only the folders configuration and plugins are relevant: configuration contains the config.ini with the necessary configuration properties, and in the plugins folder you find all bundles that are part of the product.
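Unpacked, the relevant part of the archive looks roughly like this (file names and versions are illustrative):

```text
org.fipro.headless-1.0.0-SNAPSHOT.zip
├── configuration/
│   └── config.ini              (eclipse.ignoreApp, osgi.noShutdown, ...)
└── plugins/
    ├── org.eclipse.osgi_<version>.jar
    ├── org.apache.felix.scr_<version>.jar
    └── (the other bundles of the product)
```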

Since we did not add a native launcher, the application can be started with the java command. Additionally we need to open the OSGi console, as we have no starter yet. From the parent folder above configuration and plugins, execute the following command to start the application with a console (adjust the filename of the org.eclipse.osgi bundle, as the version changes between Eclipse releases):

java -jar plugins/org.eclipse.osgi_3.15.100.v20191114-1701.jar -configuration ./configuration -console

The -configuration parameter tells the framework where it should look for the config.ini, the -console parameter opens the OSGi console.

You can now interact with the OSGi console and even start the “invert” command implemented in the Getting Started tutorial.

Native launcher

While the variant without a native launcher is more portable between operating systems, it is not very convenient to start from a user's perspective. Of course you could add a batch file for simplification, but Equinox also provides native launchers, so we will add native launchers to our product. This is fairly easy: you only need to check The product includes native launcher artifacts on the Overview tab of the product file and execute the build again.

The resulting product now also contains the following files:

  • eclipse.exe
    Eclipse executable.
  • eclipse.ini
    Configuration pointing to the launcher artifacts.
  • eclipsec.exe
    Console optimized executable.
  • org.eclipse.equinox.launcher artifacts in the plugins directory
    Native launcher artifacts.

You can find some more information on those files in the FAQ.

To start the application you can use the added executables.

eclipse.exe -console

or

eclipsec.exe -console

At first glance, the main difference is that eclipse.exe opens the OSGi console in a new shell, while eclipsec.exe stays in the current shell. The FAQ says: “On Windows, the eclipsec.exe console executable can be used for improved command line behavior.”

Note:
You can change the name of the eclipse.exe file in the product configuration on the Launching tab by setting a Launcher Name. But this will not affect the eclipsec.exe.

Command line parameter

Starting a command line tool with an interactive OSGi console is typically not what people want. It is nice for debugging purposes, but not for productive use. In productive use you usually pass some parameters on the command line and then process the inputs. In plain Java you take the arguments from the main() method and process them. But in an OSGi application you do not write a main() method; the framework launcher has the main() method. To start your application directly you therefore need to create some kind of starter that can inspect the launch arguments.

With OSGi Declarative Services the starter is an immediate component, that is, a component that gets activated immediately once all its references are satisfied. To be able to inspect the command line parameters in an OSGi application, you need to know how the launcher that started it provides this information. The Equinox launcher, for example, provides it via org.eclipse.osgi.service.environment.EnvironmentInfo, which is published as a service. That means you can add a @Reference for EnvironmentInfo in your declarative service, and once it is available the immediate component gets activated and the application starts.

Create new project org.fipro.headless.app

  • Create the app project
    • Main Menu → File → New → Plug-in Project
    • Set name to org.fipro.headless.app
  • Create a package via right-click on src
    • Set name to org.fipro.headless.app
  • Open the MANIFEST.MF file
    • Add the following to Imported Packages
      • org.osgi.service.component.annotations
        Remember to mark it as optional to avoid runtime dependencies to the annotations.
      • org.eclipse.osgi.service.environment
        To be able to consume the Equinox EnvironmentInfo.
      • org.fipro.inverter
        To be able to consume the functional services.
  • Add org.fipro.headless.app to the Contents of the product definition.
  • Add org.fipro.headless.app to the modules section of the pom.xml.

Create an immediate component with the name EquinoxStarter.

@Component(immediate = true)
public class EquinoxStarter {

    @Reference
    EnvironmentInfo environmentInfo;

    @Reference
    StringInverter inverter;

    @Activate
    void activate() {
        for (String arg : this.environmentInfo.getNonFrameworkArgs()) {
            System.out.println(inverter.invert(arg));
        }
    }
}

With the simple version above you will notice some issues if you do not specify the -console parameter:

  1. If you start the application via eclipse.exe with an additional parameter, the code will be executed, but you will not see any output.
  2. If you start the application via eclipsec.exe with an additional parameter, you will see an output but the application will not finish.

If you pass the -console parameter, the output will be seen in both cases and the OSGi console opens immediately afterwards.

First let’s have a look at why the application seems to hang when started via eclipsec.exe. The reason is simply that we configured osgi.noShutdown=true, which means the OSGi framework will not be shut down after the Eclipse application has ended. So the simple solution would be to specify osgi.noShutdown=false. The downside is that then using the -console parameter will not keep the OSGi console open but close the application immediately, and the same happens when using eclipse.exe with the -console parameter. So the configuration parameter osgi.noShutdown should be set depending on whether an interactive mode via the OSGi console should be supported or not.

If both variants should be supported, osgi.noShutdown should be set to true and a check for the -console parameter needs to be added in code. If that parameter is not set, close the application via System.exit(0);.

-console is an Equinox framework parameter, so the check and the handling looks like this:

boolean isInteractive = Arrays
    .stream(environmentInfo.getFrameworkArgs())
    .anyMatch(arg -> "-console".equals(arg));

if (!isInteractive) {
    System.exit(0);
}

With the additional handling above, the application will stay open with an active OSGi console if -console is set, and it will close immediately if -console is not set.

The other issue we faced was that we did not see any output when using eclipse.exe. The reason is that the outputs are not sent to the executing command shell, and without specifying an additional parameter, the command shell used for output is not even opened. One option to handle this is to open the command shell and keep it open until a user input closes it again. The framework parameter for this is -consoleLog, and the check could be as simple as the following:

boolean showConsoleLog = Arrays
    .stream(environmentInfo.getFrameworkArgs())
    .anyMatch(arg -> "-consoleLog".equals(arg));

if (showConsoleLog) {
    System.out.println();
    System.out.println("***** Press Enter to exit *****");
    // just wait for Enter
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(System.in))) {
        reader.readLine();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

With the -consoleLog handling, the following call will open a new shell that shows the result and waits for the user to press ENTER to close the shell and finish the application.

eclipse.exe test -consoleLog

bnd export

Although these results are already pretty nice, it can get even better. With bnd you are able to create a single executable jar that starts the OSGi application. This makes it easier to distribute the command line application. And launching the application is similarly easy compared to the native executable, while there is no native code inside, so it is easily exchangeable between operating systems.

Using the bnd-export-maven-plugin you can achieve the same result even with a PDE-Tycho based build. But of course you need to prepare things to make it work.

The first thing to know is that the bnd-export-maven-plugin needs a bndrun file as input. So now create a file headless.bndrun in the org.fipro.headless.product project that looks similar to this:

-runee: JavaSE-1.8
-runfw: org.eclipse.osgi
-runsystemcapabilities: ${native_capability}

-resolve.effective: active;skip:="osgi.service"

-runrequires: \
osgi.identity;filter:='(osgi.identity=org.fipro.headless.app)'

-runbundles: \
org.fipro.inverter.api,\
org.fipro.inverter.command,\
org.fipro.inverter.provider,\
org.fipro.headless.app,\
org.apache.felix.gogo.command,\
org.apache.felix.gogo.runtime,\
org.apache.felix.gogo.shell,\
org.eclipse.equinox.console,\
org.eclipse.osgi.services,\
org.eclipse.osgi.util,\
org.apache.felix.scr

-runproperties: \
osgi.console=

  • As we want our Eclipse Equinox based application to be bundled as a single executable jar, we specify Equinox as our OSGi framework via -runfw: org.eclipse.osgi.
  • Via -runbundles we specify the bundles that should be added to the runtime.
  • The property set via -runproperties is needed to handle the Equinox OSGi console correctly.

Unfortunately there is no automatic way to transform a PDE product definition into a bndrun file, at least none I am aware of. And yes, there is some duplication involved here, but compared to the result it is acceptable IMHO. Anyhow, with some scripting experience it should be easy to generate the bndrun file from the product definition at build time.
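As an illustration of such a script (a hypothetical sketch, not part of the tutorial's sources), a few lines of Java are enough to extract the plugin ids from the plugins section of a .product file and render them as a -runbundles instruction:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ProductToBndrun {

    // Extracts the plugin ids from the <plugins> section of a PDE
    // .product file and renders them as a bnd -runbundles instruction.
    static String toRunBundles(String productXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                    productXml.getBytes(StandardCharsets.UTF_8)));
            NodeList plugins = doc.getElementsByTagName("plugin");
            List<String> ids = new ArrayList<>();
            for (int i = 0; i < plugins.getLength(); i++) {
                ids.add(((Element) plugins.item(i)).getAttribute("id"));
            }
            return "-runbundles: \\\n" + String.join(",\\\n", ids);
        } catch (Exception e) {
            throw new RuntimeException("could not parse product file", e);
        }
    }

    public static void main(String[] args) {
        String sample = "<product><plugins>"
            + "<plugin id=\"org.fipro.inverter.api\"/>"
            + "<plugin id=\"org.apache.felix.scr\"/>"
            + "</plugins></product>";
        System.out.println(toRunBundles(sample));
    }
}
```

In a real build this would of course read the .product file from disk and write the headless.bndrun, but the core of the transformation is not more than this.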

Now enable the bnd-export-maven-plugin for the product build in the pom.xml of org.fipro.headless.product. Note that even with a pomless build it is possible to specify a pom.xml in a project if something in addition to the default build is needed (which is the case here).

<plugin>
  <groupId>biz.aQute.bnd</groupId>
  <artifactId>bnd-export-maven-plugin</artifactId>
  <version>4.3.1</version>
  <configuration>
    <failOnChanges>false</failOnChanges>
    <bndruns>
      <bndrun>headless.bndrun</bndrun>
    </bndruns>
    <bundles>
      <include>${project.build.directory}/repository/plugins/*</include>
    </bundles>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>export</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The bndruns configuration property points to the headless.bndrun we created before. In the bundles configuration property we point to the build result of the tycho-p2-repository-plugin to build up the implicit repository. This way we are sure that all required bundles are available without the need to specify any additional repository.

After a new build you will find the file headless.jar in org.fipro.headless.product/target. You can start the command line application via

java -jar headless.jar

You will notice that the OSGi console is started regardless of which parameters are added to the command line. Also, the command line parameters are not evaluated, because the application was not started by the Equinox launcher but by the bnd launcher. Therefore the EnvironmentInfo is not initialized correctly.

Unfortunately Equinox publishes the EnvironmentInfo as a service even if it is not initialized. Therefore the EquinoxStarter will be satisfied and activated, but we will get a NullPointerException (that is silently caught) when trying to access the framework and/or non-framework args. Following good coding standards, the EquinoxStarter needs to check whether EnvironmentInfo is correctly initialized, and otherwise do nothing. The code could look similar to this snippet:

@Component(immediate = true)
public class EquinoxStarter {

  @Reference
  EnvironmentInfo environmentInfo;

  @Reference
  StringInverter inverter;

  @Activate
  void activate() {
    if (environmentInfo.getFrameworkArgs() != null
      && environmentInfo.getNonFrameworkArgs() != null) {

      // check if -console was provided as argument
      boolean isInteractive = Arrays
        .stream(environmentInfo.getFrameworkArgs())
        .anyMatch(arg -> "-console".equals(arg));
      // check if -consoleLog was provided as argument
      boolean showConsoleLog = Arrays
        .stream(environmentInfo.getFrameworkArgs())
        .anyMatch(arg -> "-consoleLog".equals(arg));

      for (String arg : this.environmentInfo.getNonFrameworkArgs()) {
        System.out.println(inverter.invert(arg));
      }

      // If the -consoleLog parameter is used, a separate shell is opened.
      // To avoid that it closes immediately, a simple input is requested
      // before exiting, so a user can inspect the outputs.
      if (showConsoleLog) {
        System.out.println();
        System.out.println("***** Press Enter to exit *****");
        // just wait for Enter
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(System.in))) {
          reader.readLine();
        } catch (IOException e) {
          e.printStackTrace();
        }
      }

      if (!isInteractive) {
        // shutdown the application if no console was opened
        // only needed if osgi.noShutdown=true is configured
        System.exit(0);
      }
    }
  }
}

This way we avoid that the EquinoxStarter executes any code when started via the bnd launcher. So apart from component instance creation and destruction, nothing happens.

To handle launching via bnd launcher, we need another starter. We create a new immediate component named BndStarter.

@Component(immediate = true)
public class BndStarter {
    ...
}

The bnd launcher provides the command line parameters in a different way. Instead of EnvironmentInfo you need to get the aQute.launcher.Launcher injected together with its service properties. Inside the service properties map there is an entry launcher.arguments whose value is a String[]. To avoid a dependency on aQute classes in our code, we reference Object and use a target filter on launcher.arguments, which works fine as the Launcher is also published as Object to the ServiceRegistry.

String[] launcherArgs;

@Reference(target = "(launcher.arguments=*)")
void setLauncherArguments(Object object, Map<String, Object> map) {
    this.launcherArgs = (String[]) map.get("launcher.arguments");
}

Although not strictly necessary, we add some code to align the behavior when started via the bnd launcher with the behavior when started via the Equinox launcher. That means we check for the -console parameter and stop the application if that parameter is missing. The check for -consoleLog would not be needed either, as the bnd launcher stays in the same command shell like eclipsec.exe, but we filter it out of the arguments anyway, just in case someone tries it.

The complete code of BndStarter would then look like this:

@Component(immediate = true)
public class BndStarter {

  String[] launcherArgs;

  @Reference(target = "(launcher.arguments=*)")
  void setLauncherArguments(Object object, Map<String, Object> map) {
    this.launcherArgs = (String[]) map.get("launcher.arguments");
  }

  @Reference
  StringInverter inverter;

  @Activate
  void activate() {
    boolean isInteractive = Arrays
      .stream(launcherArgs)
      .anyMatch(arg -> "-console".equals(arg));

    // clear launcher arguments from possible framework parameter
    String[] args = Arrays
      .stream(launcherArgs)
      .filter(arg -> !"-console".equals(arg) && !"-consoleLog".equals(arg))
      .toArray(String[]::new);

    for (String arg : args) {
      System.out.println(inverter.invert(arg));
    }

    if (!isInteractive) {
      // shutdown the application if no console was opened
      // only needed if osgi.noShutdown=true is configured
      System.exit(0);
    }
  }
}

After building again, the application will directly close without the -console parameter. And if -console is used, the OSGi console stays open.

The above handling was simply done to have something similar to the Eclipse product build. As the Equinox launcher does not automatically start all bundles, the -console parameter triggers the start of the necessary Gogo Shell bundles. The bnd launcher on the other hand always starts all installed bundles, so the OSGi console always comes up and can be seen in the command shell even before the BndStarter kills it. If that behavior does not satisfy your needs, you could also easily build two application variants: one with a console and one without. You simply need to create another bndrun file that contains neither the console bundles nor the console configuration properties.

-runee: JavaSE-1.8
-runfw: org.eclipse.osgi
-runsystemcapabilities: ${native_capability}

-resolve.effective: active;skip:="osgi.service"

-runrequires: \
    osgi.identity;filter:='(osgi.identity=org.fipro.headless.app)'

-runbundles: \
    org.fipro.inverter.api,\
    org.fipro.inverter.provider,\
    org.fipro.headless.app,\
    org.eclipse.osgi.services,\
    org.eclipse.osgi.util,\
    org.apache.felix.scr

If you add that additional bndrun file to the bndruns section of the bnd-export-maven-plugin the build will create two exports.

<plugin>
  <groupId>biz.aQute.bnd</groupId>
  <artifactId>bnd-export-maven-plugin</artifactId>
  <version>4.3.1</version>
  <configuration>
    <failOnChanges>false</failOnChanges>
    <bndruns>
      <bndrun>headless.bndrun</bndrun>
      <bndrun>headless_console.bndrun</bndrun> 
    </bndruns>
    <bundles>
      <include>target/repository/plugins/*</include>
    </bundles>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>export</goal>
      </goals>
    </execution>
  </executions>
</plugin>

To check if the application should be stopped or not, you then need to check for the system property osgi.console.

boolean hasConsole = System.getProperty("osgi.console") != null;

If a console is configured, do not stop the application; if there is no value for osgi.console, call System.exit(0).
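Put together, the adjusted shutdown check in the starter could look like this sketch (class and helper name are made up for illustration; in the real component the shutdown would be a call to System.exit(0)):

```java
public class ConsoleCheck {

    // The Equinox console is considered active when the osgi.console
    // system property is set - even an empty value counts, as it means
    // "open the console in the current shell".
    static boolean hasConsole() {
        return System.getProperty("osgi.console") != null;
    }

    public static void main(String[] args) {
        if (!hasConsole()) {
            // in the starter component: System.exit(0);
            System.out.println("no console configured - shutting down");
        }
    }
}
```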

This tutorial showed a pretty simple example to explain the basic concepts on how to build a command line application from an Eclipse project. A real-world example can be seen in the APP4MC Model Migration addon, where the above approach is used to create a standalone model migration command line tool. This tool can be used in other environments like in build servers for example, while the integration in the Eclipse IDE remains in the same project structure.

The sources of this tutorial are available on GitHub.

If you are interested in finding out more about the Maven plugins from bnd you might want to watch this talk from EclipseCon Europe 2019. As you can see they are helpful in several situations when building OSGi applications.

Update: configurable console with bnd launcher

I tried to make the executable jar behave similar to the Equinox one. That means I wanted to create an application where I am able to configure via a command line parameter whether the console should be activated or not. Achieving this took me quite a while, as I needed to find out what determines whether the Equinox console starts or not. The important thing is that the property osgi.console needs to be set to an empty String. The value is actually the port to connect to, and with an empty String the current shell is used. In the bndrun files this property is set via -runproperties. If you remove it from the bndrun file, the console never starts, even if the property is passed as a system property on the command line.

Section 19.4.6 in Launching | bnd explains why. It simply says that you are able to override a launcher property via system property. But you can not add a launcher property via system property. Knowing this I solved the issue by setting the osgi.console property to an invalid value in the -runproperties section.

-runproperties: \
    osgi.console=xxx

This way the application can be started with or without a console, depending on whether osgi.console is provided as a system property on the command line or not.

Of course the check for the -console parameter should be removed from the BndStarter to avoid that users need to provide both arguments to open a console!

I added the headless_configurable.bndrun file to the repository to show this:

Launch without console:

java -jar headless_configurable.jar Test

Launch with console:

java -jar -Dosgi.console= headless_configurable.jar

Update: bnd-indexer-maven-plugin

I got this pull request that showed an interesting extension to my approach. It uses the bnd-indexer-maven-plugin to create an index that can then be used in the bndrun files to make it editable with bndtools.

<plugin>
  <groupId>biz.aQute.bnd</groupId>
  <artifactId>bnd-indexer-maven-plugin</artifactId>
  <version>4.3.1</version>
  <configuration>
    <inputDir>${project.build.directory}/repository/plugins/</inputDir>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <id>index</id>
      <goals>
        <goal>local-index</goal>
      </goals>
    </execution>
  </executions>
</plugin>

To make use of this you first need to execute the build without the bnd-export-maven-plugin so the index is created out of the product build. After that you can create or edit a bndrun file by adding these lines on top:

index: target/index.xml;name="org.fipro.headless.product"

-standalone: ${index}

I am personally not a big fan of such dependencies in the build timeline. But it is surely helpful for creating the bndrun file.

Posted in Dirk Fauth, Eclipse, Java, OSGi | Comments Off on Building a “headless RCP” application with Tycho