Eclipse RCP, Java 11, JAXB

With Java 11 several packages have been removed from the JRE itself, JAXB among them. This means that if you use JAXB in your Java application, you need to add the necessary bundles to your runtime. In an OSGi application this gets quite complicated, as you typically only declare a dependency on the API. The JAXB API and the JAXB implementation are separated, which is generally a good design. But the JAXBContext in the API bundle loads the implementation, which means the API has to know the implementation. This causes class loading issues that are hard to solve.

This topic is of course not new and there are already some explanations, like this blog post or this topic on the equinox-dev mailing list. But as it still took me a while to get everything working, I am writing this blog post to share my findings with others. And of course to persist my findings in my “external memory” in case I need them again in the future. 🙂

The first step is to add the necessary bundles to your target platform. You can either consume it from an Eclipse p2 Update Site or directly from a Maven repository using the m2e PDE Integration feature.

Note:
If you open the .target file with the Generic Text Editor, you can simply paste one of the below blocks and then resolve the target definition, instead of using the Target Editor.

Using an Eclipse p2 Update Site you can add the necessary dependencies by adding the following block to your target definition.

<location includeAllPlatforms="true" includeConfigurePhase="false" includeMode="slicer" includeSource="true" type="InstallableUnit">
  <repository location="https://download.eclipse.org/releases/2020-12/"/>
  <unit id="jakarta.xml.bind" version="2.3.3.v20201118-1818"/>
  <unit id="com.sun.xml.bind" version="2.3.3.v20201118-1818"/>
  <unit id="javax.activation" version="1.2.2.v20201119-1642"/>
  <unit id="javax.xml" version="1.3.4.v201005080400"/>
</location>

Note:
The jakarta.xml.bind bundle from Orbit is a re-bundled version of the original bundle in Maven Central and unfortunately specifies a version constraint on some javax.xml packages. As the Java runtime does not specify a version on the javax.xml packages, the configuration will fail to resolve. To solve this you need to add the javax.xml bundle to your target definition and the product configuration.

For consuming the libraries directly from a Maven repository you can add the following block if you have the m2e PDE Integration feature installed. This way you could even use newer versions that are not yet available via p2 update site.

<location includeDependencyScope="compile" includeSource="true" missingManifest="generate" type="Maven">
  <groupId>com.sun.xml.bind</groupId>
  <artifactId>jaxb-impl</artifactId>
  <version>2.3.3</version>
  <type>jar</type>
</location>
<location includeDependencyScope="compile" includeSource="true" missingManifest="generate" type="Maven">
  <groupId>jakarta.xml.bind</groupId>
  <artifactId>jakarta.xml.bind-api</artifactId>
  <version>2.3.3</version>
  <type>jar</type>
</location>

Note:
If you don’t have a JavaSE-1.8 execution environment mapped in your Eclipse IDE, or your bundle has JavaSE-11 or higher set as Execution Environment, you need to add version constraints to the Import-Package statements to make PDE happy. Otherwise you will see some strange errors.
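As a sketch, such a version constraint in your bundle's MANIFEST.MF could look like the following (the version range is an assumption based on the 2.3.3 bundles used in this post):

```
Import-Package: javax.xml.bind;version="[2.3,3)"
```

With the range in place PDE can resolve the package against the bundles from the target platform instead of expecting it from the (unversioned) system packages.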

Note:
The Bundle-SymbolicNames of the required bundles in Maven Central differ from those of the re-bundled versions in the Eclipse p2 Update Site. This needs to be kept in mind when including the bundles in the product. I will use the symbolic names of the bundles from Maven Central in the following sections.

Once the bundles are available in the target platform there are different ways to make JAXB work with Java 11 in your OSGi / Eclipse application.

Variant 1: Modify bundle and code

This is the variant that is most often described.

  1. Add the package com.sun.xml.bind.v2 to the imported packages of the bundle that uses JAXB
  2. Create the JAXBContext by using the classloader of the model object
    JAXBContext context = JAXBContext.newInstance(
        MyClass.class.getPackageName(),
        MyClass.class.getClassLoader());
  3. Place a jaxb.index file in the package that contains the model classes. This file contains the simple class names of all JAXB mapped classes. For more information about the format of this file, have a look at the javadoc of the JAXBContext#newInstance(String, ClassLoader) method.
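As an illustration, assuming the model package contains the hypothetical classes MyClass and MyOtherClass, the jaxb.index file simply lists the simple class names, one per line:

```
MyClass
MyOtherClass
```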

The following bundles need to be added to the product in order to make JAXB work with Java 11 in OSGi:

  • jakarta.activation-api
  • jakarta.xml.bind-api
  • com.sun.xml.bind.jaxb-impl

The downside of this variant is obviously that you have to modify code and you have to add a dependency on a JAXB implementation in every place where JAXB is used. In case third-party libraries that you do not have under your control are part of your product, this solution is probably not suitable. Nor can you easily exchange the JAXB implementation with this approach.

Variant 2: jakarta.xml.bind-api fragment

In this variant you create a fragment named jaxb.impl.binding for the jakarta.xml.bind-api bundle that adds the package com.sun.xml.bind.v2 to the imported packages.

  • Create a Fragment Project
  • Use jakarta.xml.bind-api as the Fragment-Host
  • Add com.sun.xml.bind.v2 to the Import-Package manifest header

The resulting MANIFEST.MF should look similar to the following snippet:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: JAXB Impl Binding
Bundle-SymbolicName: jaxb.impl.binding
Bundle-Version: 1.0.0.qualifier
Fragment-Host: jakarta.xml.bind-api;bundle-version="2.3.3"
Automatic-Module-Name: jaxb.impl.binding
Bundle-RequiredExecutionEnvironment: JavaSE-11
Import-Package: com.sun.xml.bind.v2

The following bundles need to be added to the product in order to make JAXB work with Java 11 in OSGi:

  • jakarta.activation-api
  • jakarta.xml.bind-api
  • com.sun.xml.bind.jaxb-impl
  • jaxb.impl.binding

This variant seems to me the most comfortable one. There are no modifications required in the existing bundles and the dependency to the JAXB implementation is encapsulated in a fragment, which makes it easy to exchange if needed.

Variant 3: system.bundle fragment

With this variant you add the necessary bundles to the classloader the framework is started with.
Using bndtools this can be done via the -runpath instruction. The Equinox launcher does not know such an instruction. For an Eclipse RCP application you need to create a system.bundle fragment. Such a fragment contains the necessary jar files and exports the packages of the wrapped jars.

  • Download the required jar files, e.g. from Maven Central, and place them in a folder named lib in the fragment project
    • jakarta.activation-api-1.2.2.jar
    • jakarta.xml.bind-api-2.3.3.jar
    • jaxb-impl-2.3.3.jar
  • Specify the Bundle-ClassPath manifest header to add the jars to the bundle classpath
  • Specify the Fragment-Host manifest header so the fragment is added to the system.bundle
  • Add the packages of the included libraries to the Export-Package manifest header

The resulting MANIFEST.MF should look similar to the following snippet:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Extension
Bundle-SymbolicName: jaxb.extension
Bundle-Version: 1.0.0.qualifier
Fragment-Host: system.bundle; extension:=framework
Automatic-Module-Name: jaxb.extension
Bundle-RequiredExecutionEnvironment: JavaSE-11
Bundle-ClassPath: lib/jakarta.activation-api-1.2.2.jar,
 lib/jakarta.xml.bind-api-2.3.3.jar,
 lib/jaxb-impl-2.3.3.jar,
 .
Export-Package: com.sun.istack,
 com.sun.istack.localization,
 com.sun.istack.logging,
 com.sun.xml.bind,
 com.sun.xml.bind.annotation,
 com.sun.xml.bind.api,
 com.sun.xml.bind.api.impl,
 com.sun.xml.bind.marshaller,
 com.sun.xml.bind.unmarshaller,
 com.sun.xml.bind.util,
 com.sun.xml.bind.v2,
 com.sun.xml.bind.v2.bytecode,
 com.sun.xml.bind.v2.model.annotation,
 com.sun.xml.bind.v2.model.core,
 com.sun.xml.bind.v2.model.impl,
 com.sun.xml.bind.v2.model.nav,
 com.sun.xml.bind.v2.model.runtime,
 com.sun.xml.bind.v2.model.util,
 com.sun.xml.bind.v2.runtime,
 com.sun.xml.bind.v2.runtime.output,
 com.sun.xml.bind.v2.runtime.property,
 com.sun.xml.bind.v2.runtime.reflect,
 com.sun.xml.bind.v2.runtime.reflect.opt,
 com.sun.xml.bind.v2.runtime.unmarshaller,
 com.sun.xml.bind.v2.schemagen,
 com.sun.xml.bind.v2.schemagen.episode,
 com.sun.xml.bind.v2.schemagen.xmlschema,
 com.sun.xml.bind.v2.util,
 com.sun.xml.txw2,
 com.sun.xml.txw2.annotation,
 com.sun.xml.txw2.output,
 javax.activation,
 javax.xml.bind,
 javax.xml.bind.annotation,
 javax.xml.bind.annotation.adapters,
 javax.xml.bind.attachment,
 javax.xml.bind.helpers,
 javax.xml.bind.util

If you add this system.bundle fragment to the product, JAXB works the same way it did with Java 8.

This variant has the downside that you have to manage the JAXB libraries that are wrapped by the system.bundle fragment yourself, instead of simply consuming them from a repository.

Conclusion

For me the creation of a jakarta.xml.bind-api fragment as shown in Variant 2 seems to be the most comfortable variant. At least it worked in my scenarios, including the build using Tycho 2.2 and the resulting Eclipse RCP product.

If you need to support Java 8 and Java 11 with your product at the same time, you should consider specifying the binding fragment as multi-release jar as explained in this blog post. Further information about multi-release jars can be found here:

If you see any issues with the jakarta.xml.bind-api fragment approach that I have not identified yet, please let me know. Maybe I am missing something important that was not covered by my tests.


Inspecting the OSGi runtime – New ways for Eclipse projects

I often get asked how to find and solve issues in an OSGi runtime, especially with regard to OSGi services. I then always answer that you have two options: the Gogo Shell and the Apache Felix Webconsole.

While the Gogo Shell is typically already part of an Eclipse application and can be activated by passing the -console parameter to the Program Arguments, the Webconsole is not available that easily. As Eclipse application projects are mostly still created using PDE, you have to use a target definition to configure the libraries used for development and deployment. In the past a target platform could only consume p2 repositories. That was especially important for Tycho builds, as the Directory locations that are also supported in a target definition were not supported by Tycho. As the Felix Webconsole is not available via a p2 update site, the only way to include it in an Eclipse application was to somehow include the necessary jars locally.

Luckily there were a lot of improvements in that area, and since Tycho 2.0 other file-based locations are supported as well. And with Tycho 2.2 even Maven dependencies can be included directly. At the time of writing this blog post, 2.2 is not yet released, but the support for Maven dependencies in a target definition is already available in m2e. With this enhancement the inclusion of the Felix Webconsole becomes a lot easier.

Install the m2e PDE Integration

First you need to install the m2e PDE Integration into the Eclipse IDE.

  • Help – Install New Software…
  • Use the m2e Update Site: https://download.eclipse.org/technology/m2e/releases/latest/
  • Select m2e PDE Integration
  • Finish the installation

After the installation it can be used in the PDE Target Editor.

Interlude: Target Editor

IMHO the PDE Target Editor is the second worst editor in PDE, right after the Component Definition Editor. The latter luckily doesn’t need to be used anymore, as PDE added support for the OSGi DS Component annotations. As a replacement for the Target Editor I used the Target Platform DSL. Unfortunately the DSL seems to be no longer actively developed, and therefore the new Maven location support is missing. But I have found out that you can use the Generic Editor for the .target file and get similar features as with the DSL. For me the most important thing is to avoid the dialog for selecting artifacts from an update site, as this one really has its problems. The nice thing about the DSL is the code completion for unit id and version, which also works pretty well in the Generic Editor. This could make the DSL obsolete.

So with the new Maven location and the Generic Editor, I now suggest using the Target Editor for adding the Maven locations and switching to the Generic Editor for adding InstallableUnits from p2 repositories.

Add the Webconsole artifacts to the Target Platform

Open a Target Definition file with the Target Editor and add the following artifacts:

  • commons-fileupload (1.4)
  • commons-io (2.4)
  • org.apache.felix.http.jetty (4.1.4)
  • org.apache.felix.inventory (1.0.6)
  • org.apache.felix.http.servlet-api (1.1.2)
  • org.apache.felix.webconsole.plugins.ds (2.1.0)
  • org.apache.felix.webconsole.plugins.event (1.1.8)
  • org.apache.felix.webconsole (4.6.0)

Note:

  • If you set the Dependencies scope to compile you get the transitive dependencies added too.
  • Unfortunately the dependencies of org.apache.felix.webconsole are not configured well in the pom.xml.
    • You will transitively get commons-fileupload in version 1.3.3, which does not satisfy the Import-Package statement in org.apache.felix.webconsole.
    • You will transitively get commons-io in version 2.6, which does not satisfy the Import-Package statement in org.apache.felix.webconsole.
    • org.apache.felix.inventory is missing.
  • I am using the Felix Http Jetty bundle as it is easier to configure than adding all the necessary Jetty bundles separately. But of course you can also use the Eclipse Jetty bundles directly from a p2 Update Site.
    • This unfortunately brings another dependency issue. The Felix Jetty bundle defines the Require-Capability header osgi.contract=JavaServlet. While the javax.servlet-api bundle that is transitively included by Maven would satisfy the technical requirements (Import-Package), it is missing the capability header. To satisfy the capability you need to use org.apache.felix.http.servlet-api from Maven Central. Alternatively you can directly use the Eclipse Jetty bundles from an Eclipse Update Site together with the javax.servlet bundle provided by Eclipse, as the Eclipse Jetty bundles do not specify the Require-Capability header.
  • If you do not find the transitively included Maven dependencies as bundles, for example when creating a product or feature definition, try reloading the target definition.

To add the Maven locations you need to:

  • Click Add… in the Target Editor
  • Select Maven
  • Provide the necessary information to select the artifact from Maven Central

The m2e PDE Integration has a nice feature for inserting the values: if you have the Maven dependency XML structure in the clipboard, the values in the dialog are filled in automatically. To make it easier for adopters, here are the dependencies. Note that every dependency needs to be added separately.

<dependency>
    <groupId>commons-fileupload</groupId>
    <artifactId>commons-fileupload</artifactId>
    <version>1.4</version>
</dependency>

<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.4</version>
</dependency>

<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.inventory</artifactId>
    <version>1.0.6</version>
</dependency>

<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.http.jetty</artifactId>
    <version>4.1.4</version>
</dependency>

<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.http.servlet-api</artifactId>
    <version>1.1.2</version>
</dependency>

<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.webconsole.plugins.ds</artifactId>
    <version>2.1.0</version>
</dependency>

<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.webconsole.plugins.event</artifactId>
    <version>1.1.8</version>
</dependency>

<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.webconsole</artifactId>
    <version>4.6.0</version>
    <scope>provided</scope>
</dependency>
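Each pasted dependency ends up wrapped in its own Maven location in the .target file. As a sketch (the exact attributes depend on the m2e PDE version in use), the entry for the webconsole bundle should look similar to:

```xml
<location includeDependencyScope="compile" includeSource="true" missingManifest="generate" type="Maven">
  <groupId>org.apache.felix</groupId>
  <artifactId>org.apache.felix.webconsole</artifactId>
  <version>4.6.0</version>
  <type>jar</type>
</location>
```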

Configure the product

If you have a feature based product you can create a new feature that includes the necessary bundles. This feature should include the following bundles:

  • org.apache.felix.http.servlet-api
  • org.apache.commons.commons-fileupload
  • org.apache.commons.io (2.4.0)
  • org.apache.felix.http.jetty
  • org.apache.felix.inventory
  • org.apache.felix.webconsole
  • org.apache.felix.webconsole.plugins.ds
  • org.apache.felix.webconsole.plugins.event

If you have a product based on bundles, ensure that these bundles are part of the Contents. Note that org.apache.commons.io needs to be included in version 2.4.0 to satisfy the dependencies of org.apache.felix.webconsole.

As Equinox has the policy to NOT activate all bundles on startup, you need to configure that the necessary bundles are started automatically:

  • Open the .product file
  • Switch to the Configuration tab
  • In the Start Levels section click Add… and add the following bundles
    • org.apache.felix.scr
    • org.apache.felix.http.jetty
    • org.apache.felix.webconsole
    • org.apache.felix.webconsole.plugins.ds
    • org.apache.felix.webconsole.plugins.event
  • Set Auto-Start for all bundles to true
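The steps above result in entries in the configurations section of the .product file. As a sketch, the outcome should look similar to the following snippet (the start levels shown here are assumptions and may differ in your setup):

```xml
<configurations>
   <plugin id="org.apache.felix.scr" autoStart="true" startLevel="2" />
   <plugin id="org.apache.felix.http.jetty" autoStart="true" startLevel="0" />
   <plugin id="org.apache.felix.webconsole" autoStart="true" startLevel="0" />
   <plugin id="org.apache.felix.webconsole.plugins.ds" autoStart="true" startLevel="0" />
   <plugin id="org.apache.felix.webconsole.plugins.event" autoStart="true" startLevel="0" />
</configurations>
```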

Now you can launch the Eclipse application from the Overview tab via Launch an Eclipse application. The webconsole will be available via http://localhost:8080/system/console/
If you are asked for a login you can use the default admin/admin.

In the main menu bar of the Webconsole UI you can expand OSGi and find sub-sections for Bundles, Configuration, Events, Components, Log Service and Services, each providing detailed information on the corresponding part of the current OSGi runtime. This way you can inspect and fix possible issues in a much more comfortable way.

Conclusion

Inspecting an OSGi runtime is much more comfortable using the Apache Felix Webconsole. With the new m2e PDE Integration, Maven artifacts can finally be added as part of the target platform, which makes including the Apache Felix Webconsole much easier than it was before. And I am sure there are a lot more use cases that make the life of Eclipse developers easier with this new feature. Thanks to Christoph Läubrich who added that feature lately.

Further information on the m2e PDE Integration can be found here:


Build REST services with OSGi JAX-RS whiteboard

Some years ago I had a requirement to access the OSGi services inside my Eclipse application via web interface. Back then I used the OSGi HTTP Whiteboard Specification and wrapped a servlet around my service. Of course I wrote a blog post about this and named it Access OSGi Services via web interface.

That blog post was published before OSGi R7 was released, and at that time there was no simple alternative available. With R7 the JAX-RS Whiteboard Specification was added, which provides a way to achieve the same goal using JAX-RS, and is way simpler than implementing servlets. I gave a talk at the EclipseCon Europe 2018 with the title How to connect your OSGi application. In this talk I showed how you create a connection to your OSGi application using different specifications, namely

  • HTTP Service / HTTP Whiteboard
  • Remote Services (using ECF JAX-RS Distribution Provider)
  • JAX-RS Whiteboard

Unfortunately the recording of that talk failed, so I can only link to the slides and my GitHub repository that contains the code I used to show the different approaches in action.

In the Panorama project, in which I am currently involved, one of our goals is to provide cloud services for model processing and evaluation. As a first step we want to publish APP4MC services as cloud services (more information in the Eclipse Newsletter December 2020). There are services contained in APP4MC bundles that are free from dependencies to the Eclipse Runtime and do not require any Extension Points, and there are services in bundles that have dependencies to plug-ins that use Extension Points. But all the services we want to publish as cloud services are OSGi declarative services. While there are numerous ways and frameworks to create REST based web services (e.g. Spring Boot or Microprofile to just name two of them), I was searching for a way to do this in OSGi. Especially because I want to reduce the configuration and implementation efforts with regards to the runtime infrastructure for consuming the existing OSGi services of the project.

For the services that have dependencies on Extension Points and require a running Eclipse Runtime, I was forced to use the HTTP Service / HTTP Whiteboard approach. The main reason for this is that because of this dependency I needed to stick with a PDE project layout, and unfortunately there is no JAX-RS Whiteboard implementation available in Eclipse, and therefore none available via a p2 Update Site. Maybe it would be possible somehow, but actually the solution should be to get rid of Extension Points and the requirement for a running Eclipse runtime.

But this blog post is about JAX-RS Whiteboard and not about project layouts and Extension Points vs. Declarative Services. So I will focus on the services that have a clean dependency structure. The setup should be as comfortable as possible to be able to focus on the REST service implementation, and not struggle with the infrastructure too much.

Create the project structure

To create the project structure we can follow the steps described in the enRoute Tutorial.

mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=project \
    -DarchetypeVersion=7.0.0
  • The execution will interactively ask you for some values to be used by the project creation. Use the following values or adjust them to your personal needs:
    • groupId = org.fipro.modifier
    • artifactId = jaxrs
    • version = 1.0-SNAPSHOT
    • package = org.fipro.modifier.jaxrs
  • After setting the value for package you will get the information that for the two projects that will be created, the following defaults will be used:
    • app-artifactId: app
    • app-target-java-version: 8
    • impl-artifactId: impl

Note:
IMHO app and impl are not good values for project names. Although they are sub projects inside a Maven project, once imported into the IDE they lead to confusion if you have multiple such projects in one workspace. By entering ‘n’ the defaults are declined and you need to insert the values for all parameters again. Additionally you can then specify the artifactId of the app and the impl project, and the target Java version you want to develop with.

If you forget to specify different values for app and impl at creation time and want to change it afterwards, you will have several things to consider. Even with the refactoring capabilities of the IDE, you need to ensure that you do not forget something, like the fact that the name of the .bndrun file needs to be reflected in the pom.xml file.
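For example, the export configuration in the app project's pom.xml references the .bndrun file by name, so after a rename it has to be adjusted accordingly. A sketch of the relevant bnd-export-maven-plugin section (the file name is an assumption for this example):

```xml
<plugin>
    <groupId>biz.aQute.bnd</groupId>
    <artifactId>bnd-export-maven-plugin</artifactId>
    <configuration>
        <bndruns>
            <!-- must match the renamed .bndrun file in the app project -->
            <bndrun>app.bndrun</bndrun>
        </bndruns>
    </configuration>
</plugin>
```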

  • After accepting the inserted values with ‘y’ the following project skeletons are created:
    • project parent folder named by the entered artifactId jaxrs
    • the app project
    • the impl project

Now the projects can be imported to the IDE of your choice. As the projects are plain Maven based Java projects, you can use any IDE. But of course my choice is Eclipse with bndtools.

  • Import the created projects via
    File – Import… – Maven – Existing Maven Projects
  • Select the created jaxrs directory

Once the import is done you should double check the dependencies of the created skeletons. Some of the dependencies and transitive dependencies in the generated pom.xml files are not up-to-date. For example Felix Jetty is included in version 4.0.6 (September 2018), while the most current version is 4.1.4 (November 2020). You can check this for example by opening the Repositories view in the Bndtools perspective and expanding the Maven Dependencies section. The libraries listed inside Maven Dependencies are added from the Maven configuration of the created project. To update the version of one of those libraries, you need to add the corresponding configuration to the dependencyManagement section of the jaxrs/pom.xml, e.g.

<dependency>
  <groupId>org.apache.felix</groupId>
  <artifactId>org.apache.felix.http.jetty</artifactId>
  <version>4.1.4</version>
</dependency>

You should also update the version of the bnd Maven plugins. The generated pom.xml files use version 4.1.0, which is pretty outdated. At the time writing this blog post the most recent version is 5.2.0.

  • Open jaxrs/pom.xml
  • Locate bnd.version in the properties section
  • Update 4.1.0 to 5.2.0
  • Right click on the jaxrs project – Maven – Update Project…
    • Have all projects checked
    • OK

Implementing the OSGi service

As the goal is to wrap an existing OSGi Declarative Service to make it accessible as a web service, we use the M.U.S.E (Most Useless Service Ever) introduced in my Getting Started with OSGi Declarative Services blog post. Unfortunately the combination of Bndtools workspace projects with Bndtools Maven projects does not work well, mainly because the Bndtools workspace projects are not automatically available as Maven modules. So we create the API and the service implementation projects also by using the OSGi enRoute archetypes.

Note:
If you have an OSGi service bundle already available via Maven, you can also use that one by adding the dependency to the pom.xml files and skip this section.

  • Go to the newly created jaxrs directory and create an API module using the api archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=api \
    -DarchetypeVersion=7.0.0
  • groupId = org.fipro.modifier
  • artifactId = api
  • version = 1.0-SNAPSHOT
  • package = org.fipro.modifier.api
  • Then create the service implementation module using the ds-component archetype:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=ds-component \
    -DarchetypeVersion=7.0.0
  • groupId = org.fipro.modifier
  • artifactId = inverter
  • version = 1.0-SNAPSHOT
  • package = org.fipro.modifier.inverter
  • Import the created projects via
    File – Import… – Maven – Existing Maven Projects
  • Select the jaxrs directory

Service interface

  • In the Bndtools Explorer locate the api module and expand to the package org.fipro.modifier.api
  • Implement the StringModifier interface:
public interface StringModifier {
	String modify(String input);
}
  • You can delete the ConsumerInterface and the ProviderInterface which were created by the archetype.
  • Ensure that you do NOT delete the package-info.java file in the org.fipro.modifier.api package. It configures that the package is exported. If this file is missing, the package is a Private-Package and therefore not usable by other OSGi bundles.

    The package-info.java file and its content are part of the Bundle Annotations introduced with R7. Here are some links if you are interested in more detailed information:
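For reference, a minimal package-info.java using these Bundle Annotations could look like the following sketch (the version value is an assumption for this example):

```java
// Export the package via the generated Export-Package header
// and assign it a semantic version
@org.osgi.annotation.bundle.Export
@org.osgi.annotation.versioning.Version("1.0.0")
package org.fipro.modifier.api;
```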

Service implementation

  • In the Bndtools Explorer locate the inverter module.
  • Open the pom.xml file and add the dependency to the api module in the dependencies section.
<dependency>
  <groupId>org.fipro.modifier</groupId>
  <artifactId>api</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
  • Expand to the package org.fipro.modifier.inverter
  • Implement the StringInverter service:
@Component
public class StringInverter implements StringModifier {

	@Override
	public String modify(String input) {
		return new StringBuilder(input).reverse().toString();
	}
}
  • You can delete the ComponentImpl class that was created by the archetype.
  • Note that the package does not contain a package-info.java file, as the service implementation is typically NOT exposed.

Implementing the REST service

After the projects are imported to the IDE and the OSGi service to consume is available, we can start implementing the REST based service.

  • In the Bndtools Explorer locate the impl module.
  • Open the pom.xml file and add the dependency to the api module in the dependencies section.
<dependency>
  <groupId>org.fipro.modifier</groupId>
  <artifactId>api</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
  • Expand to the package org.fipro.modifier.jaxrs
  • Implement the InverterRestService:
    • Add the @Component annotation to the class definition and set the service parameter to register it as a service, not an immediate component.
    • Add the @JaxrsResource annotation to the class definition to mark it as a JAX-RS whiteboard resource.
      This will add the service property osgi.jaxrs.resource=true which means this service must be processed by the JAX-RS whiteboard.
    • Get a StringModifier injected using the @Reference annotation.
    • Implement a JAX-RS resource method that uses the StringModifier.
@Component(service=InverterRestService.class)
@JaxrsResource
public class InverterRestService {
    
	@Reference
	StringModifier modifier;
	
	@GET
	@Path("modify/{input}")
	public String modify(@PathParam("input") String input) {
		return modifier.modify(input);
	}
}

Interlude: PROTOTYPE Scope

When you read the specification, you will see that the example service is using the PROTOTYPE scope. The example services in the OSGi enRoute tutorials do not use the PROTOTYPE scope. So I was wondering when to use the PROTOTYPE scope for JAX-RS Whiteboard services. I checked the specification and asked on the OSGi mailing list. Thanks to Raymond Augé who helped me understand it better. In short, if your component implementation is stateless and you get all necessary information injected into the JAX-RS resource methods, you can avoid the PROTOTYPE scope. If you have a stateful implementation, that for example gets JAX-RS context objects for a request or session injected into a field, you have to use the PROTOTYPE scope to ensure that this information is only used by that single request. The example service in the specification therefore does not need to specify the PROTOTYPE scope, as it is a very simple example. But it is also not wrong to use the PROTOTYPE scope even for simpler services. This aligns the OSGi service design (where typically every component instance is a singleton) with the JAX-RS design, as JAX-RS natively expects to re-create resources on every request.
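As a sketch of the stateful case (EchoResource is a hypothetical class, assuming the standard DS and JAX-RS whiteboard annotations), a resource that gets a request-specific context object injected into a field would be declared with PROTOTYPE scope like this:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.UriInfo;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ServiceScope;
import org.osgi.service.jaxrs.whiteboard.propertytypes.JaxrsResource;

// PROTOTYPE scope: a new component instance is created per request,
// so the injected context object is never shared between requests
@Component(service = EchoResource.class, scope = ServiceScope.PROTOTYPE)
@JaxrsResource
public class EchoResource {

    @Context
    UriInfo uriInfo; // stateful: request-specific context injected into a field

    @GET
    @Path("echo")
    public String path() {
        return uriInfo.getPath();
    }
}
```

Without PROTOTYPE scope the single component instance would receive the context of one request while possibly serving another, which is exactly the situation the scope avoids.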

Prepare the application project

In the application project we need to ensure that our service is available. In case the StringInverter from above was implemented, the inverter module needs to be added to the dependencies section of the app/pom.xml file. If you want to use another service that can be consumed via Maven, you of course need to add that dependency.

  • In the Bndtools Explorer locate the app module.
  • Open the pom.xml file and add the dependency to the inverter module in the dependencies section.
<dependency>
  <groupId>org.fipro.modifier</groupId>
  <artifactId>inverter</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
  • Open app.bndrun
  • Add org.fipro.modifier.inverter to the Run Requirements
  • Click on Resolve and double check that the modules api, impl and inverter are part of the Run Bundles
  • Click on Run OSGi
  • Open a browser and navigate to http://localhost:8080/modify/fubar to see the new REST based service in action.

JSON support

As returning a plain String is quite uncommon for a web service, we now extend our setup to return the result as JSON. We will use Jackson for this, so we need to add it to the dependencies of the impl module. The simplest way is to use org.apache.aries.jax.rs.jackson.

  • In the Bndtools Explorer locate the impl module.
  • Open the pom.xml file and add the dependency to org.apache.aries.jax.rs.jackson in the dependencies section.
<dependency>
    <groupId>org.apache.aries.jax.rs</groupId>
    <artifactId>org.apache.aries.jax.rs.jackson</artifactId>
    <version>1.0.2</version>
</dependency>

Alternative: Custom Converter

Alternatively you can implement your own converter and register it as a JAX-RS Whiteboard Extension.

  • In the Bndtools Explorer locate the impl module.
  • Open the pom.xml file and add the dependency to Jackson in the dependencies section.
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.12.0</version>
</dependency>
  • Implement the JacksonJsonConverter:
    • Add the @Component annotation to the class definition and specify the PROTOTYPE scope parameter to ensure that multiple instances can be requested.
    • Add the @JaxrsExtension annotation to the class definition to mark the service as a JAX-RS extension type that should be processed by the JAX-RS whiteboard.
    • Add the @JaxrsMediaType(APPLICATION_JSON) annotation to the class definition to mark the component as providing a serializer capable of supporting the named media type, in this case the standard media type for JSON.
    • Internally make use of the OSGi Converter Specification for the implementation.
@Component(scope = PROTOTYPE)
@JaxrsExtension
@JaxrsMediaType(APPLICATION_JSON)
public class JacksonJsonConverter<T> implements MessageBodyReader<T>, MessageBodyWriter<T> {

    @Reference(service=LoggerFactory.class)
    private Logger logger;
	
    private final Converter converter = Converters.newConverterBuilder()
            .rule(String.class, this::toJson)
            .rule(this::toObject)
            .build();

    private ObjectMapper mapper = new ObjectMapper();
    
    private String toJson(Object value, Type targetType) {
        try {
            return mapper.writeValueAsString(value);
        } catch (JsonProcessingException e) {
            logger.error("error on JSON creation", e);
            return e.getLocalizedMessage();
        }
    }

    private Object toObject(Object o, Type t) {
        try {
            if (List.class.getName().equals(t.getTypeName())) {
                return this.mapper.readValue((String) o, List.class);
            }
            return this.mapper.readValue((String) o, String.class);
        } catch (IOException e) {
            logger.error("error on JSON parsing", e);
        }
        return CANNOT_HANDLE;
    }

    @Override
    public boolean isWriteable(
        Class<?> c, Type t, Annotation[] a, MediaType mediaType) {

        return APPLICATION_JSON_TYPE.isCompatible(mediaType) 
            || mediaType.getSubtype().endsWith("+json");
    }

    @Override
    public boolean isReadable(
        Class<?> c, Type t, Annotation[] a, MediaType mediaType) {

        return APPLICATION_JSON_TYPE.isCompatible(mediaType) 
            || mediaType.getSubtype().endsWith("+json");
    }

    @Override
    public void writeTo(
        T o, Class<?> arg1, Type arg2, Annotation[] arg3, MediaType arg4,
        MultivaluedMap<String, java.lang.Object> arg5, OutputStream out)
        throws IOException, WebApplicationException {

        String json = converter.convert(o).to(String.class);
        out.write(json.getBytes());
    }

    @SuppressWarnings("unchecked")
    @Override
    public T readFrom(
        Class<T> arg0, Type arg1, Annotation[] arg2, MediaType arg3, 
        MultivaluedMap<String, String> arg4, InputStream in) 
        throws IOException, WebApplicationException {

        BufferedReader reader =
            new BufferedReader(new InputStreamReader(in));
        return (T) converter.convert(reader.readLine()).to(arg1);
    }
}

Update the InverterRestService

  • Add the JAX-RS @Produces(MediaType.APPLICATION_JSON) annotation to the class definition to specify that JSON responses are created.
  • Add the @JSONRequired annotation to the class definition to mark this class to require JSON media type support.
  • Optional:
    Get multiple StringModifier injected and return a List of Strings as a result of the REST resource.
@Component(service=InverterRestService.class)
@JaxrsResource
@Produces(MediaType.APPLICATION_JSON)
@JSONRequired
public class InverterRestService {
	
	@Reference
	private volatile List<StringModifier> modifier;
	
	@GET
	@Path("modify/{input}")
	public List<String> modify(@PathParam("input") String input) {
		return modifier.stream()
				.map(mod -> mod.modify(input))
				.collect(Collectors.toList());
	}
}
  • Optional:
    Implement an additional StringModifier in the inverter module.
@Component
public class Upper implements StringModifier {

	@Override
	public String modify(String input) {
		return input.toUpperCase();
	}
}
  • In the Bndtools Explorer locate the app module.
  • Open app.bndrun
  • If you use org.apache.aries.jax.rs.jackson, add it to the Run Requirements
  • Click on Resolve to ensure that the Jackson libraries are part of the Run Bundles
  • Click on Run OSGi
  • Open a browser and navigate to http://localhost:8080/modify/fubar to see the updated result.

Multipart file upload

In the Panorama project the REST based cloud services are designed as file processing services. So you upload a file, process it and download the result. This way you can for example migrate Amalthea Model files to a newer version, perform a static analysis of an Amalthea Model and even transform an Amalthea Model to some executable format and execute the result for simulation scenarios.

When searching for file uploads with REST and Java, you only find information on how to do this with either Jersey or Apache CXF. But even though the Aries JAX-RS Whiteboard reference implementation is based on Apache CXF, none of the tutorials worked for me. The reason is that the Aries JAX-RS Whiteboard completely hides the underlying Apache CXF implementation. Thanks to Tim Ward who helped me on the OSGi mailing list, I was able to solve this. Therefore I want to share the solution here.

Multipart file upload requires support from the underlying servlet container. Using the OSGi enRoute Maven archetypes, Apache Felix HTTP Jetty is included as the implementation of the R7 OSGi HTTP Service and the R7 OSGi HTTP Whiteboard Specification. So a Jetty is included in the setup and multipart file uploads are supported.

Enable Multipart Support

According to the HTTP Whiteboard Specification, Multipart File Uploads need to be enabled via the corresponding component properties. This can be done for example by creating a custom JAX-RS Whiteboard Application and adding the @HttpWhiteboardServletMultipart Component Property Type annotation with the corresponding attributes.

Note:
In this tutorial I will not use this approach, but for completeness I want to share how a JAX-RS Whiteboard application can be created and used.

@Component(service=Application.class)
@JaxrsApplicationBase("app4mc")
@JaxrsName("app4mcMigration")
@HttpWhiteboardServletMultipart(enabled = true)
public class MigrationApplication extends Application {}

In this case the JAX-RS Whiteboard resource needs to be registered on the created application by using the @JaxrsApplicationSelect Component Property Type annotation.

@Component(service=Migration.class)
@JaxrsResource
@JaxrsApplicationSelect("(osgi.jaxrs.name=app4mcMigration)")
public class Migration {
...
}

Creating custom JAX-RS Whiteboard Applications makes sense if you want to publish multiple applications in one installation/server. In a scenario where only one application is published in isolation, e.g. one REST based service in one container (e.g. Docker), the creation of a custom application is not necessary. Instead it is sufficient to configure the default application provided by the Aries JAX-RS Whiteboard implementation using the Configuration Admin. The PID and the available configuration properties are listed here.

Configuring an OSGi service programmatically via Configuration Admin is not very intuitive. While it is quite powerful to change configurations at runtime, it feels uncomfortable to provide a configuration to a component from the outside. Luckily with R7 the Configurator Specification was introduced to deal with this. Using the Configurator, the component configuration can be provided using a resource in JSON format.

  • First we need to specify the requirement on the Configurator. This can be done by using the @RequireConfigurator Bundle Annotation. Using the archetype this is already done in the app module.
    • In the Bndtools Explorer locate the app module.
    • Locate the package-info.java file in src/main/java/config.
    • Verify that it looks like the following snippet.
@RequireConfigurator
package config;

import org.osgi.service.configurator.annotations.RequireConfigurator;
  • Now locate the configuration.json file in src/main/resources/OSGI-INF/configurator
  • Modify the file to contain the multipart configuration:
    • org.apache.aries.jax.rs.whiteboard.default is the PID of the default application
    • osgi.http.whiteboard.servlet.multipart.enabled is the component property for enabling multipart file uploads
{
    ":configurator:resource-version" : 1,
    ":configurator:symbolic-name" : "org.fipro.modifier.app.config",
    ":configurator:version" : "1.0-SNAPSHOT",
    
    "org.apache.aries.jax.rs.whiteboard.default" : {
        "osgi.http.whiteboard.servlet.multipart.enabled" : "true"
    }
}
  • Open app.bndrun
    • Add org.fipro.modifier.app to the Run Requirements
    • Click Resolve to recalculate the Run Bundles

Note:
While writing this blog post and testing the tutorial, I noticed that on Resolve the inverter module was sometimes not resolved, for whatever reason. To ensure that the application is started with all necessary bundles, add impl, app and inverter to the Run Requirements. Double check after Resolve that the following bundles are part of the Run Bundles:

  • org.fipro.modifier.api
  • org.fipro.modifier.app
  • org.fipro.modifier.impl
  • org.fipro.modifier.inverter

Process Multipart File Uploads

As the JAX-RS standard does not contain multipart support, we need to fall back to the Servlet API. Fortunately we can get JAX-RS context objects injected as method parameters or fields, for example by using the @Context JAX-RS annotation. For the multipart support we can get the HttpServletRequest injected and extract the information from there.

  • Update the InverterRestService
  • Add the following JAX-RS resource method
@POST
@Path("modify/upload")
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Produces(MediaType.TEXT_PLAIN)
public Response upload(@Context HttpServletRequest request) 
        throws IOException, ServletException {

    // get the part with name "file" received within
    // a multipart/form-data POST request
    Part part = request.getPart("file");
    if (part != null 
            && part.getSubmittedFileName() != null 
            && part.getSubmittedFileName().length() > 0) {

        StringBuilder inputBuilder = new StringBuilder();
        try (InputStream is = part.getInputStream();
                BufferedReader br = 
                    new BufferedReader(new InputStreamReader(is))) {

            String line;
            while ((line = br.readLine()) != null) {
                inputBuilder.append(line).append("\n");
            }
        }
		
        // modify file content
        String input = inputBuilder.toString();
        List<String> modified = modifier.stream()
            .map(mod -> mod.modify(input))
            .collect(Collectors.toList());

        return Response.ok(String.join("\n\n", modified)).build();
    }

    return Response.status(Status.PRECONDITION_FAILED).build();
}
  • @Consumes(MediaType.MULTIPART_FORM_DATA)
    Specify that this REST resource consumes multipart/form-data.
  • @Produces(MediaType.TEXT_PLAIN)
    Specify that the result is plain text, which is for this use case the easiest way for returning the modified file content.
  • @Context HttpServletRequest request
    The HttpServletRequest is injected as method parameter.
  • Part part = request.getPart("file")
    Extract the Part with the name file (which is actually the form parameter name) from the HttpServletRequest.

If you are using a tool like Postman, you can test if the multipart upload is working by starting the app via app.bndrun and execute a POST request on http://localhost:8080/modify/upload
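If you prefer a programmatic test over Postman, a multipart request can also be sent with the JDK HttpClient available since Java 11. The following sketch assembles a minimal multipart/form-data body by hand; the class name and the uploaded content are made up for illustration, and the actual send only succeeds while the application from app.bndrun is running.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class UploadClient {

    // build a minimal multipart/form-data body for a single
    // text part with the form field name "file"
    static String multipartBody(String boundary, String fileName, String content) {
        return "--" + boundary + "\r\n"
            + "Content-Disposition: form-data; name=\"file\"; filename=\"" + fileName + "\"\r\n"
            + "Content-Type: text/plain\r\n\r\n"
            + content + "\r\n"
            + "--" + boundary + "--\r\n";
    }

    public static void main(String[] args) throws Exception {
        String boundary = "----JavaClientBoundary";
        String body = multipartBody(boundary, "test.txt", "fubar");

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8080/modify/upload"))
            .header("Content-Type", "multipart/form-data; boundary=" + boundary)
            .POST(HttpRequest.BodyPublishers.ofString(body, StandardCharsets.UTF_8))
            .build();

        // the send only works while the application is running
        try {
            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        } catch (java.net.ConnectException e) {
            System.out.println("application not running on localhost:8080");
        }
    }
}
```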

Interlude: Static Resources

To also be able to test the upload without additional tools, we publish a simple form as a static resource in our application. We use the HTTP Whiteboard Specification to register an HTML form as static resource with our REST service. For this add the @HttpWhiteboardResource component property type annotation to the InverterRestService.

@HttpWhiteboardResource(pattern = "/files/*", prefix = "static")

With this configuration all requests to URLs with the /files path are mapped to resources in the static folder. The next step is therefore to add the static form to the project:

  • In the Bndtools Explorer locate the impl module.
  • Right click src/main/java – New – Folder
  • Select the main folder in the tree
  • Add resources/static in the Folder name field
  • Finish
  • Right click on the created resources folder in the Bndtools Explorer
  • Build Path – Use as Source Folder
  • Create a new file upload.html in src/main/resources/static
<html>
<body>
    <h1>File Upload with JAX-RS</h1>
    <form
        action="http://localhost:8080/modify/upload"
        method="post"
        enctype="multipart/form-data">

        <p>
            Select a file : <input type="file" name="file" size="45"/>
        </p>

        <input type="submit" value="Upload It"/>
    </form>
</body>
</html>

After starting the app via app.bndrun you can open a browser and navigate to http://localhost:8080/files/upload.html
Now you can select a file (don’t use a binary file) and upload it to see the modification result of the REST service.

Debugging / Inspection

To debug your REST based service you can start the application by using Debug OSGi instead of Run OSGi in the app.bndrun. But in the OSGi context you often face issues even before you can debug code. For this the app archetype creates an additional debug run configuration. The debug.bndrun file is located next to the app.bndrun file in the app module.

  • In the Bndtools Explorer locate the app module.
  • Open debug.bndrun
  • Click on Resolve
  • Click on Run OSGi

With the debug run configuration additional features are enabled to inspect the runtime, like the Gogo Shell and the Webconsole.

This allows you to interact with the Gogo Shell in the Console View, or even more comfortably via the Webconsole. For the latter open a browser, navigate to http://localhost:8080/system/console and log in with the default username/password admin/admin. Using the Webconsole you can check which bundles are installed and in which state they are. You can also inspect the available OSGi DS components and check the active configurations.

Build

As the project setup is a plain Java/Maven project, the build is pretty easy:

  • In the Bndtools Explorer locate the jaxrs module (the top level project).
  • Right click – Run As – Maven build…
  • Enter clean verify in the Goals field
  • Run

From the command line:

  • Switch to the jaxrs directory that was created by the archetype
  • Execute mvn clean verify

Note:
It can happen that an error occurs on building the app module if you followed the steps in this tutorial exactly. The reason is that the build detects a change in the Run Bundles of the app.bndrun file, but it is just a difference in the ordering of the bundles. To solve this open the app.bndrun file, remove all entries from the Run Bundles and hit Resolve again. After that the order of the Run Bundles will be the same as the one in the build.

Note:
This build process works because we used the Eclipse IDE with Bndtools. If you are using another IDE or working only on the command line, have a look at the OSGi enRoute Microservices Tutorial that explains the separate steps for building from command line.

After the build succeeds you will find the resulting app.jar in jaxrs/app/target. Execute the following line to start the self-executable jar from the command line if you are located in the jaxrs folder:

java -jar app/target/app.jar

If you also want to build the debug configuration, you need to enable this in the pom.xml file of the app module:

  • In the Bndtools Explorer locate the app module.
  • Open pom.xml
  • In the build/plugins section update the bnd-export-maven-plugin and add the debug.bndrun to the bndruns.
<plugin>
    <groupId>biz.aQute.bnd</groupId>
    <artifactId>bnd-export-maven-plugin</artifactId>
    <configuration>
        <bndruns>
            <bndrun>app.bndrun</bndrun>
            <bndrun>debug.bndrun</bndrun>
        </bndruns>
    </configuration>
</plugin>

Executing the build again, you will now also find a debug.jar in the target folder of the app module, which you can use to inspect the OSGi runtime.

Summary

While setting up this tutorial I faced several issues that mainly came from missing information or misunderstandings. Luckily the OSGi community was really helpful in solving this. So my contribution back is to write this blog post to help others that struggle with similar issues. The key takeaways are:

  • Using the OSGi enRoute Maven archetypes we have plain Java Maven projects. That means:
    • There is no Bundle Descriptor File (.bnd), so the package-info.java file is an important source for the MANIFEST.MF creation.
    • Dependencies to other modules need to be specified in the pom.xml files. This also includes modules in the same workspace.

Note:
The Maven project structure also causes quite some headache if you want to wrap OSGi services from Eclipse projects like APP4MC. Usually Eclipse projects publish their results as p2 update sites and not via Maven, and for Maven projects it is not possible to consume p2 update sites. Luckily more and more projects publish their results on Maven Central, and the APP4MC project plans to do this as well. We are currently cleaning up the dependencies to make it possible to at least consume the model implementation easily from any Java based project. As long as dependencies are not available via Maven Central, the only way to solve the build is to install the artifacts in the local repository. This can be done by building and installing the resulting artifacts locally via mvn clean install, or by using the maven-install-plugin, which can even be integrated into your Maven build if you add the artifact to install to the source code repository. Thanks to Neil Bartlett who gave me the necessary pointer on this topic.
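The maven-install-plugin approach mentioned above can be sketched in the pom.xml like this. Note that all coordinates and the file path are placeholders for the artifact you actually need to install:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-install-plugin</artifactId>
    <executions>
        <execution>
            <id>install-external-artifact</id>
            <!-- run early so the artifact is available for dependency resolution -->
            <phase>validate</phase>
            <goals>
                <goal>install-file</goal>
            </goals>
            <configuration>
                <!-- placeholder values, replace with the real artifact -->
                <file>${project.basedir}/libs/example.jar</file>
                <groupId>org.example</groupId>
                <artifactId>example</artifactId>
                <version>1.0.0</version>
                <packaging>jar</packaging>
            </configuration>
        </execution>
    </executions>
</plugin>
```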

  • With OSGi R7 there are quite some interesting new specifications that, in combination, make development with OSGi a lot more comfortable, like the HTTP Whiteboard, the JAX-RS Whiteboard and the Configurator used in this tutorial.
  • Using the Maven archetypes and the OSGi R7 specifications, implementing JAX-RS REST based services is similar to approaches with other frameworks like Spring Boot or Microprofile. And if you want to wrap existing OSGi services, it is definitely the most comfortable one. If consuming OSGi services is not needed, well then every framework has its pros and cons.

The sources of this tutorial are available on GitHub.

For an extended example have a look at the APP4MC Cloud Services.

Now I have a blog post about HTTP Service / HTTP Whiteboard and JAX-RS Whiteboard. The still missing blog post about Remote Services is not forgotten, but obviously I need more time to write about it, as it is the most complicated specification in OSGi. So stay tuned for that one. 🙂

Posted in Dirk Fauth, Java, OSGi | Comments Off on Build REST services with OSGi JAX-RS whiteboard

NatTable + Eclipse Collections = Performance & Memory improvements?

Some time ago I got reports from NatTable users about high memory consumption when using NatTable with huge data sets, especially when using trees, the row hide/show feature and/or the row grouping feature. Typically I tended to say that this is because of the huge data set in memory, not because of the NatTable implementation. But as a good open source developer I take such reports seriously and verified the statement to be sure. So I updated one of the NatTable examples that combines all three features to show about 2 million entries. Then I modified some row heights, collapsed tree nodes and hid some rows. After checking the memory consumption I was surprised. The diagram below shows the result. The heap usage goes up to and beyond 1.5 GB on scrolling. In between I performed a GC and scrolled again, which causes those peaks and valleys.

A more detailed inspection reveals that the high memory consumption is not because of the data in memory itself. There are a lot of primitive wrapper objects and internal objects in the map implementation that consume a big portion of the memory, as you can see in the following image.

Note:
Primitive wrapper objects have a higher memory consumption than the primitive values themselves. As there are already good articles about that topic available, I will not repeat that. If you are interested in more details on the topic of Primitives vs. Objects, have a look at Baeldung for example.

So I started to check the NatTable implementation in search of the memory issue. And I found some causes. In several places there are internal caches for the index-position mapping to improve the rendering performance. Also the row heights and column widths are stored internally in a collection if a user resized them. Additionally some scaling operations were incorrectly using Double objects instead of primitive values to avoid rounding issues on scaling.

From my experience in an Android project I remembered an article that described a similar issue. In short: Java has no collections for primitive types, therefore primitive values need to be stored via wrapper objects. In Android the SparseArray was introduced to deal with this issue. So I was searching for primitive collections in Java and found Eclipse Collections. To be honest, I had heard about Eclipse Collections before, but I always thought the standard Java Collections are already good enough, so why check some third-party collections? Small spoiler: I was wrong!

Looking at the website of Eclipse Collections, they state that they have a better performance and a better memory footprint than the standard Java Collections. But a good developer and architect does not simply trust statements like “take my library and all your problems are solved”. So I started my evaluation of Eclipse Collections to see if the memory and performance issues in NatTable can be solved by using them. Additionally I was looking at the Primitive Type Streams introduced with Java 8 to see if some issues can even be solved using that API.

Creation of test data

Right at the beginning of my evaluation I noticed the first issue: which way should be used to create a huge collection of test data to process? I read about some discussions on the good old for-loop vs. IntStream, so I started with some basic performance measurements to compare those two. The goal was to create test data with values from 0 to 1.000.000 where every 100.000th entry is missing.

The for-loop for creating an int[] with the described values looks like this:

int[] values = new int[999_991];
int index = 0;
for (int i = 0; i < 1_000_000; i++) {
    if (i == 0 || i % 100_000 != 0) {
        values[index] = i;
        index++;
    }
}

Using the IntStream API it looks like this:

int[] values = IntStream.range(0, 1_000_000)
        .filter(i -> i == 0 || i % 100_000 != 0)
        .toArray();

Additionally I wanted to compare the performance for creating an ArrayList<Integer> via for-loop and IntStream.

ArrayList<Integer> values = new ArrayList<>(999_991);
for (int i = 0; i < 1_000_000; i++) {
    if (i == 0 || i % 100_000 != 0) {
        values.add(i);
    }
}
List<Integer> values = IntStream.range(0, 1_000_000)
        .filter(i -> (i == 0 || i % 100_000 != 0))
        .boxed()
        .collect(Collectors.toList());

The result is interesting, although not surprising. Using the for-loop for creating an int[] is the clear winner. The usage of the IntStream is not bad, but definitely worse than the for-loop. So for recurring tasks and huge ranges a refactoring from for-loop to IntStream is not a good idea. The creation of collections with wrapper objects is of course even worse, as the wrapper objects need to be created via boxing.

collecting int[] via for-loop 1 ms
collecting int[] via IntStream 4 ms
collecting List<Integer> via for-loop 7 ms
collecting List<Integer> via IntStream 13 ms

I also tested the usage of HashSet and TreeSet for the wrapper objects, as in NatTable I typically need distinct values, often sorted for further processing. HashSet as well as TreeSet have a worse performance in the creation scenario, but TreeSet is the clear loser here.

collecting HashSet<Integer> via for-loop 16 ms
collecting TreeSet<Integer> via for-loop 189 ms
collecting Set<Integer> via IntStream 26 ms 

Note:
Running the tests in a single execution, the numbers are worse, which is caused by the VM ramp-up and class loading. Executing the tests 10 times, the average numbers are similar to the ones above, but still worse because the first execution is that much slower. Even increasing the number of executions to 1.000, the average values stay quite the same and sometimes even get drastically better because of the VM optimizations for code that gets executed often. The numbers presented here are therefore the average out of 100 executions.
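The measurement approach described above can be sketched like this. This is a hypothetical helper for illustration, not the exact benchmark code used for the numbers in this post, and such a naive measurement is of course less reliable than a dedicated harness like JMH:

```java
public class SimpleBenchmark {

    // run the task several times and return the average duration in ms
    static double averageMillis(int executions, Runnable task) {
        long total = 0;
        for (int i = 0; i < executions; i++) {
            long start = System.nanoTime();
            task.run();
            total += System.nanoTime() - start;
        }
        return total / (double) executions / 1_000_000d;
    }

    public static void main(String[] args) {
        // measure the for-loop based test data creation 100 times
        double avg = averageMillis(100, () -> {
            int[] values = new int[999_991];
            int index = 0;
            for (int i = 0; i < 1_000_000; i++) {
                if (i == 0 || i % 100_000 != 0) {
                    values[index++] = i;
                }
            }
        });
        System.out.println("collecting int[] via for-loop " + avg + " ms");
    }
}
```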

After evaluating the performance of standard Java API for creating test data, I looked at the Eclipse Collections – Primitive Collections. I compared MutableIntList with MutableIntSet and used the different factory methods for creating the test data:

  • Iteration
    directly operate on an initial empty MutableIntList

    MutableIntList values = IntLists.mutable.withInitialCapacity(999_991);
    for (int i = 0; i < 1_000_000; i++) {
        if (i == 0 || i % 100_000 != 0) {
            values.add(i);
        }
    }

    Note: The method withInitialCapacity(int) was introduced with Eclipse Collections 10.3. In previous versions it is not possible to specify an initial capacity using the primitive type factories, you can only create an empty MutableIntList or MutableIntSet using empty(). Without specifying the initial capacity, the iteration approach takes 3 ms for the MutableIntList and 32 ms for the MutableIntSet.

  • Factory method of(int...) / with(int...)
    MutableIntList values = IntLists.mutable.of(inputArray);
  • Factory method ofAll(Iterable<Integer>) / withAll(Iterable<Integer>)
    MutableIntList values = IntLists.mutable.ofAll(inputCollection);
  • Factory method ofAll(IntStream) / withAll(IntStream)
    MutableIntList values = IntLists.mutable.ofAll(
        IntStream
            .range(0, 1_000_000)
            .filter(i -> (i == 0 || i % 100_000 != 0)));

To create a MutableIntSet use the IntSets utility class:

MutableIntSet values = IntSets.mutable.xxx

Note:
For the factory methods of course the generation of the input also needs to be taken into account. So for creating data from scratch the time for creating the array or the collection needs to be added on top.

The result shows that at creation time the MutableIntList is much faster than the MutableIntSet, and the usage of the factory method with an int[] parameter is faster than using an Integer collection, an IntStream or the direct operation on the MutableIntList. The reason for this is probably that when using an int[], the MutableIntList instance is actually a wrapper around the int[]. In this case you also need to be careful, as modifications done via the primitive collection are directly reflected outside of the collection.

creating MutableIntList via iteration 1 ms
creating MutableIntList of int[] 0 ms
creating MutableIntList via Integer collection 4 ms
creating MutableIntList via IntStream 6 ms

creating MutableIntSet via iteration 21 ms
creating MutableIntSet of int[] 32 ms
creating MutableIntSet of Integer collection 39 ms
creating MutableIntSet via IntStream 38 ms

In several use cases the usage of a Set would be nicer to directly avoid duplicates in the collection. In NatTable a sorted order is also needed often, but there is no TreeSet equivalent in the primitive collections. But the MutableIntList comes with some nice API to deal with this. Via distinct() we get a new MutableIntList that only contains distinct values, via sortThis() the MutableIntList is directly sorted.

The following call returns a new MutableIntList with distinct values in a sorted order, similar to a TreeSet.

MutableIntList uniqueSorted = values.distinct().sortThis();

When changing this in the test, the time for creating a MutableIntList with distinct values in a sorted order increases to about 27 ms. Still less than creating a MutableIntSet. But as our input array is already sorted and only contains distinct values, this measurement is probably not really meaningful.

The key takeaways in this part are:

  • The good old for-loop still has the best performance. It is also faster than IntStream.range().
  • The MutableIntList has a better performance at creation time compared to MutableIntSet. This is the same with default Java List and Set implementations.
  • The MutableIntList has some nice API for modifications compared to handling a primitive array, which makes it more comfortable to use.

Usage of primitive value collections

As already mentioned, Eclipse Collections come with a nice and comfortable API similar to the Java Stream API. But here I don’t want to go into more detail on that API. Instead I want to see how Eclipse Collections perform when using the standard Java Collections API and compare that with the performance of the Java Collections. By doing this I want to ensure that by using Eclipse Collections the performance gets better, or at least does not become worse than with the default Java collections.

contains()

The first use case is the check if a value is contained in a collection. This is done by the contains() method.

boolean found = valuesCollection.contains(search);

For the array we compare the old-school for-loop

boolean found = false;
for (int i : valuesArray) {
    if (i == search) {
        found = true;
        break;
    }
}

with the primitive streams approach

boolean found = Arrays.stream(valuesArray).anyMatch(x -> x == search);

Additionally I added a test for using Arrays.binarySearch(). But the result is not 100% comparable, as binarySearch() requires the array to be sorted in advance. Since our array already contains the test data in sorted order, this test works.

boolean found = Arrays.binarySearch(valuesArray, search) >= 0;

We use the collections/arrays that we created before and first check for the value 450.000 which exists in the middle of the collection. Below you find the execution times of the different approaches.

contains in List 1 ms
contains in Set 0 ms
contains in int[] stream 2 ms
contains in int[] iteration 1 ms
contains in int[] binary search 0 ms
contains in MutableIntList 0 ms
contains in MutableIntSet 0 ms

Then we execute the same setup and check for the value 2.000.000 which does not exist in the collection. This way the whole collection/array needs to be processed, while in the above case the search stops once the value is found.

contains in List 2 ms
contains in Set 0 ms
contains in int[] stream 2 ms
contains in int[] iteration 1 ms
contains in int[] binary search 0 ms
contains in MutableIntList 0 ms
contains in MutableIntSet 0 ms

What we can see here is that the Java Primitive Streams have the worst performance for the contains() case and the Eclipse Collections perform best. But actually there is not much difference in the performance.
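A minimal sketch of how such timings can be taken with System.nanoTime() (class and helper names are mine, not from the original test classes; for reliable numbers a dedicated harness like JMH would be preferable):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ContainsBenchmark {

    // old-school iteration on a primitive array, as used in the article
    static boolean containsByIteration(int[] values, int search) {
        for (int i : values) {
            if (i == search) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int size = 1_000_000;
        List<Integer> list = new ArrayList<>(size);
        Set<Integer> set = new HashSet<>();
        int[] array = new int[size];
        for (int i = 0; i < size; i++) {
            list.add(i);
            set.add(i);
            array[i] = i;
        }

        // 450_000 is in the middle, 2_000_000 is not contained at all
        for (int search : new int[] { 450_000, 2_000_000 }) {
            long start = System.nanoTime();
            boolean inList = list.contains(search);
            System.out.printf("contains in List %d ms (%b)%n",
                (System.nanoTime() - start) / 1_000_000, inList);

            start = System.nanoTime();
            boolean inSet = set.contains(search);
            System.out.printf("contains in Set %d ms (%b)%n",
                (System.nanoTime() - start) / 1_000_000, inSet);

            start = System.nanoTime();
            boolean inArray = containsByIteration(array, search);
            System.out.printf("contains in int[] iteration %d ms (%b)%n",
                (System.nanoTime() - start) / 1_000_000, inArray);
        }
    }
}
```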

indexOf()

For people with a good knowledge of the Java Collections API the specific measurement of indexOf() might look strange, because for example the ArrayList internally uses indexOf() in its contains() implementation, and we have tested that before. But the Eclipse Primitive Collections do not use indexOf() in contains(); they operate directly on the internal array. Also indexOf() is implemented differently there, without the use of the equals() method. So a dedicated verification is useful. Below are the results for testing an existing value and a not existing value.

Check indexOf() 450_000
indexOf in collection 0 ms
indexOf in int[] iteration 0 ms
indexOf in MutableIntList 0 ms

Check indexOf() 2_000_000
indexOf in collection 1 ms
indexOf in int[] iteration 0 ms
indexOf in MutableIntList 0 ms

The results are actually not surprising. Also in this case there is not much difference in the performance.

Note:
There is no indexOf() for Sets, and of course we cannot get an index when using Java Primitive Streams either. So this test only compares the ArrayList, the iteration on an int[] and the MutableIntList. I also skipped testing binarySearch() here, as the results would be equal to the contains() test, with the same restrictions.
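The manual indexOf() iteration on an int[] used in this comparison can be sketched like this (my own minimal version, not the code from the referenced test class):

```java
public class ArrayIndexOf {

    // returns the index of the first occurrence of search, or -1 if not found,
    // mirroring the contract of List#indexOf for a primitive array
    static int indexOf(int[] values, int search) {
        for (int i = 0; i < values.length; i++) {
            if (values[i] == search) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] values = { 10, 20, 30 };
        System.out.println(indexOf(values, 20));
        System.out.println(indexOf(values, 40));
    }
}
```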

removeAll()

Removing multiple items from a List is a big performance issue. Before my investigation here I was not aware of how serious this issue is. What I already knew from past optimizations is that removeAll() on an ArrayList is much slower than manually iterating over the items to remove and removing each item individually.

For the test I create the base collection with 1.000.000 entries and a collection with the values from 200.000 to 299.999 that should be removed. First I execute the iteration that removes each item individually

for (Integer r : toRemoveList) {
    valueCollection.remove(r);
}

then I execute the test with removeAll()

valueCollection.removeAll(toRemoveList);

The tests are executed on an ArrayList, a HashSet, a MutableIntList and a MutableIntSet.

Additionally I added a test that uses the Primitive Stream API to filter and create a new array from the result. As this is not a modification of the original collection, the result is not 100% comparable to the other executions. But it may still be interesting to see (even with its dependency on binarySearch() and therefore on sorted input).

int[] result = Arrays.stream(values)
    .filter(v -> (Arrays.binarySearch(toRemove, v) < 0))
    .toArray();

Note:
The code for removing items from an array is not very comfortable. Of course we could also use some library like Apache Commons for dealing with primitive type arrays. But this is about comparing plain Java Collections with Eclipse Collections, therefore I decided to skip this.

Below are the execution results:

remove all by primitive stream 21 ms
remove all by iteration List 29045 ms
remove all List 64068 ms
remove all by iteration Set 1 ms
remove all Set 1 ms
remove all by iteration MutableIntList 13602 ms
remove all MutableIntList 21 ms
remove all by iteration MutableIntSet 2 ms
remove all MutableIntSet 2 ms

You can see that the iteration approach on an ArrayList is almost twice as fast as using removeAll(). But still the performance is very bad. The performance for removeAll() as well as the iteration approach on a Set and a MutableIntSet are very good. Interestingly the call to removeAll() on a MutableIntList is also acceptable, while the iteration approach seems to have a performance issue.
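The quadratic behavior of ArrayList.removeAll() comes from the repeated contains() checks against the argument collection. A common plain-Java mitigation, independent of Eclipse Collections and not part of the article's test code, is to wrap the items to remove in a HashSet and use removeIf() (helper name is mine):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RemoveAllDemo {

    // removes all elements contained in toRemove from values in linear time,
    // because HashSet#contains is a constant time operation
    static void removeAllFast(List<Integer> values, List<Integer> toRemove) {
        Set<Integer> removeSet = new HashSet<>(toRemove);
        values.removeIf(removeSet::contains);
    }

    public static void main(String[] args) {
        List<Integer> values = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            values.add(i);
        }
        List<Integer> toRemove = new ArrayList<>();
        for (int i = 200_000; i < 300_000; i++) {
            toRemove.add(i);
        }

        long start = System.nanoTime();
        removeAllFast(values, toRemove);
        System.out.println("removeIf with HashSet "
            + (System.nanoTime() - start) / 1_000_000
            + " ms, remaining: " + values.size());
    }
}
```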

The key takeaways in this part are:

  • The performance of the Eclipse Collections is at least as good as that of the standard Java Collections, in several cases even far better.
  • Performance workarounds that were introduced for the standard Java Collections can prevent these improvements if they are simply carried over to Eclipse Collections instead of being reworked.

Memory consumption

From the above measurements and observations I can say that in most cases there is a performance improvement when using Eclipse Collections compared to the standard Java Collections. And even for use cases where no big improvement can be seen, there is a small improvement or at least no performance decrease. So I decided to integrate Eclipse Collections in NatTable and use the Primitive Collections in every place where primitive values were stored in Java Collections. Additionally I fixed all places where wrapper objects were created unnecessarily. Then I executed the example from the beginning again to measure the memory consumption. And I was really impressed!

As you can see in the above graph, the heap usage stays below 250 MB even on scrolling. Remember, before using Eclipse Primitive Collections, the heap usage grew up to 1,5 GB. Going into more detail we can see that a lot of objects that were created for internal management are not created anymore. So now it is really the data model that should be visualized by NatTable that takes most of the memory, not NatTable itself anymore.

One thing I noticed in the tests is that quite some memory stays allocated if a MutableIntList or MutableIntSet is cleared via clear(). Basically it is the same with the Java Collections: the collection keeps the space allocated for its grown size. For the Eclipse Collections this means the internal array keeps its size, as clear() only fills the array with 0. To release even this memory you need to assign a new empty collection instance.

Note:
The concrete IntArrayList class contains a trimToSize() method. But as you typically work against the interfaces when using the factories, that method is not accessible, and also not all implementations contain such a method.
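The same effect can be demonstrated with a plain ArrayList, which also keeps its grown backing array after clear(). ArrayList happens to expose trimToSize() directly, so both options look like this (a sketch; class and helper names are mine):

```java
import java.util.ArrayList;

public class ClearVsNewInstance {

    // clear() only nulls the slots; trimToSize() additionally shrinks
    // the backing array, but is only available on the concrete class
    static void clearAndTrim(ArrayList<?> list) {
        list.clear();
        list.trimToSize();
    }

    public static void main(String[] args) {
        ArrayList<Integer> values = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            values.add(i);
        }

        // option 1: keep the instance but shrink its backing array
        clearAndTrim(values);

        // option 2: drop the reference and assign a new empty instance,
        // so the old backing array becomes garbage collectable
        values = new ArrayList<>();

        System.out.println(values.size());
    }
}
```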

ArrayList vs. MutableList

The data to show in a NatTable is accessed by an IDataProvider. This is an abstraction to the underlying data structure, so that users can choose the data structure they like. The most common data structure in use is a List, and NatTable provides the ListDataProvider to simplify the usage of a List as underlying data structure. With the ListDataProvider as an abstraction there is no iteration internally. Instead there is a point access per cell via a nested for loop:

for (int column = 0; column < dataProvider.getColumnCount(); column++) {
    for (int row = 0; row < dataProvider.getRowCount(); row++) {
        dataProvider.getDataValue(column, row);
    }
}

For the ListDataProvider this means that for every cell first the row object is retrieved from the List, and then the property of the row object is accessed. As NatTable is a virtual table by design, it actually never happens that all values from the underlying data structure are accessed; only the data that is currently visible is accessed at once. While an existing performance test in the NatTable performance test suite showed an impressive performance boost by switching from ArrayList to MutableList, a more detailed benchmark revealed that both List implementations have a similar performance. I can’t tell why the existing test showed such a big difference, probably some side effects in the test setup, as the numbers swap if the test execution order is swapped.

Executing the benchmark with Java 8 and Java 11 on the other hand shows a difference. Using Java 11 as runtime the tests execute about 50% faster for both ArrayList and MutableList. And it also shows that with Java 11 it makes a difference if the nested iteration iterates column or row first. While with Java 8 the execution time was similar, with Java 11 the row first approach shows a better performance.
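The row-first vs. column-first effect can be illustrated with a generic cache-locality sketch on a flat array in row-major layout (my own illustration, not the NatTable benchmark itself; whether locality fully explains the observation above is an assumption):

```java
import java.util.Arrays;

public class IterationOrder {

    // row first: walks the flat array sequentially
    static long sumRowFirst(int[] data, int rows, int cols) {
        long sum = 0;
        for (int row = 0; row < rows; row++) {
            for (int column = 0; column < cols; column++) {
                sum += data[row * cols + column];
            }
        }
        return sum;
    }

    // column first: jumps through the array with a stride of cols
    static long sumColumnFirst(int[] data, int rows, int cols) {
        long sum = 0;
        for (int column = 0; column < cols; column++) {
            for (int row = 0; row < rows; row++) {
                sum += data[row * cols + column];
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int rows = 10_000;
        int cols = 1_000;
        int[] data = new int[rows * cols];
        Arrays.fill(data, 1);

        long start = System.nanoTime();
        long a = sumRowFirst(data, rows, cols);
        System.out.println("row first    " + (System.nanoTime() - start) / 1_000_000 + " ms");

        start = System.nanoTime();
        long b = sumColumnFirst(data, rows, cols);
        System.out.println("column first " + (System.nanoTime() - start) / 1_000_000 + " ms");

        // both orders visit every element exactly once
        System.out.println(a == b);
    }
}
```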

Conclusion

I was sceptical at the beginning, but I have to admit that Eclipse Collections is really interesting and useful when it comes to performance and memory usage optimizations with collections in Java. The API is really handy and similar to the Java Streams API, which makes the usage quite comfortable.

My takeaways after the verification:

  • For short-lived collections it is often better to use primitive type arrays, primitive streams or the MutableIntList, which has a better creation performance than the MutableIntSet.
  • For storing primitive values use MutableIntSet or MutableIntList. This gives a memory consumption similar to primitive type arrays, while granting a rich API for modifications at runtime.
  • Make use of the Eclipse Collections API to make implementation and processing as efficient as possible.
  • When migrating from the Java Collections API to Eclipse Collections, ensure that no collection-related workarounds remain in the current code base. Otherwise you might lose big performance improvements.
  • Even when using a library like Eclipse Collections you need to take care of your memory management to avoid leaks at runtime, e.g. create a new instance instead of clearing huge collections.

Based on the observations above I decided that Eclipse Collections will become a major dependency for NatTable Core. With NatTable 2.0 it will be part of the NatTable Core Feature. I am sure that internally even more optimizations are possible by using Eclipse Collections. And I will investigate where and how this can be done. So you can expect even more improvements in that area in the future.

In case you think my tests are incorrect or need to be improved, or you simply want to verify my statements, here are the links to the classes I used for my verification:

In the example class I increased the number of data rows to about 2.000.000 via this code:

List<Person> personsWithAddress = PersonService.getFixedPersons();
for (int i = 1; i < 100_000; i++) {
    personsWithAddress.addAll(PersonService.getFixedPersons());
}

and I increased the row groups via these two lines of code:

rowGroupHeaderLayer.addGroup("Flanders", 0, 8 * 100_000);
rowGroupHeaderLayer.addGroup("Simpsons", 8 * 100_000, 10 * 100_000);

If some of my observations are wrong or the code can be made even better, please let me know! I am always willing to learn!

Thanks to the Eclipse Collections team for this library!

If you are interested in learning more about Eclipse Collections, you might want to check out the Eclipse Collections Kata.

Posted in Dirk Fauth, Eclipse, Java | 2 Comments

NatTable – dynamic scaling enhancements

Over the last weeks I worked on harmonizing the scaling capabilities of NatTable. The first goal was to provide scaled versions of all internal NatTable images. This caused an update of several NatTable images, like the checkbox, that you will notice in the next major release. To test the changes I implemented a basic dynamic scaling, which by accident and some additional modification became the new zoom feature in NatTable. I will give a short introduction to the new feature here, so early adopters have a chance to test it in different scenarios before the next major release is published.

To enable the UI bindings for dynamic scaling / zooming the newly introduced ScalingUiBindingConfiguration needs to be added to the NatTable.

natTable.addConfiguration(
    new ScalingUiBindingConfiguration(natTable));

This will add a MouseWheelListener and some key bindings to zoom in/out:

  • CTRL + mousewheel up = zoom in
  • CTRL + mousewheel down = zoom out
  • CTRL + ‘+’ = zoom in
  • CTRL + ‘-’ = zoom out
  • CTRL + ‘0’ = reset zoom

The dynamic scaling can be triggered programmatically by executing the ConfigureScalingCommand on the NatTable instance. This command has already existed for quite a while, but it was mainly used internally to align the NatTable scaling with the display scaling. I have introduced new default IDpiConverter implementations to make it easier to trigger dynamic scaling:

  • DefaultHorizontalDpiConverter
    Provides the horizontal dots per inch of the default display.
  • DefaultVerticalDpiConverter
    Provides the vertical dots per inch of the default display.
  • FixedScalingDpiConverter
    Can be created with a DPI value to set a custom scaling.

At initialization time, NatTable internally fires a ConfigureScalingCommand with the default IDpiConverter to align the scaling with the display settings.

As long as only text is included in the table, registering the ScalingUiBindingConfiguration is all you have to do. Once ICellPainter implementations are used that render images, some additional work has to be done. The reason for this is that for performance and memory reasons the images are referenced in the painter and not requested for every rendering operation. As painters are not part of the event handling, they cannot simply be updated. Also, for several reasons there are mechanisms that avoid applying the registered configurations multiple times.

There are three ways to style a NatTable, and as of now this requires three different ways to handle dynamic scaling updates for image painters.

  1. AbstractRegistryConfiguration
    This is the default way, which has existed for a long time. Most of the default configurations provide the styling configuration this way. As there is no way to identify which configurations register an ICellPainter and how the instances are created, the ScalingUiBindingConfiguration needs to be initialized with an updater that knows which steps to perform.

    natTable.addConfiguration(
      new ScalingUiBindingConfiguration(natTable, configRegistry -> {
    
        // we need to re-create the CheckBoxPainter
        // to reflect the scaling factor on the checkboxes
        configRegistry.registerConfigAttribute(
            CellConfigAttributes.CELL_PAINTER,
            new CheckBoxPainter(),
            DisplayMode.NORMAL,
            "MARRIED");
    
      }));
  2. Theme styling
    In a ThemeConfiguration the styling options for a NatTable are collected in one place. Previously the ICellPainter instances were created on member initialization, which was quite static. Therefore the ICellPainter instance creation was moved to a new method named createPainterInstances(), so the painter update on scaling can be performed without any additional effort. For custom painter configurations this means that they should be added to a theme via an IThemeExtension.

    natTable.addConfiguration(
        new ScalingUiBindingConfiguration(natTable));
    
    // additional configurations
    
    natTable.configure();
    
    ...
    
    IThemeExtension customThemeExtension = new IThemeExtension() {
    
        @Override
        public void registerStyles(IConfigRegistry configRegistry) {
            configRegistry.registerConfigAttribute(
                CellConfigAttributes.CELL_PAINTER,
                new CheckBoxPainter(),
                DisplayMode.NORMAL,
                "MARRIED");
        }
    
        @Override
        public void unregisterStyles(IConfigRegistry configRegistry) {
            configRegistry.unregisterConfigAttribute(
                CellConfigAttributes.CELL_PAINTER,
                DisplayMode.NORMAL,
                "MARRIED");
        }
    };
    
    ThemeConfiguration modernTheme = 
        new ModernNatTableThemeConfiguration();
    modernTheme.addThemeExtension(customThemeExtension);
    
    natTable.setTheme(modernTheme);
  3. CSS styling
    The CSS styling support in NatTable already manages the painter instance creation. The only thing to do here is to register a command handler that triggers the CSS apply operation actively. Otherwise the images will scale only on interactions with the UI.

    natTable.registerCommandHandler(
        new CSSConfigureScalingCommandHandler(natTable));

I have tested several scenarios, and the current state of development looks quite good. But of course I am not sure if I tested everything and found every possible edge case. Therefore it would be nice to get some feedback from early adopters on whether the new zoom feature is stable or not. The p2 update site with the current development snapshot can be found on the NatTable SNAPSHOTS page. The feature is included from build number 900 on. Any issues found can be reported on the corresponding Bugzilla ticket 560802.

Please also note that with the newly introduced zooming capability I have dropped the ZoomLayer. It only increased the cell dimensions, but not the font or the images. Therefore it was not functional (maybe never finished) IMHO, and to avoid confusion in the future I have deleted it now.

Posted in Dirk Fauth, Eclipse, Java | Comments Off on NatTable – dynamic scaling enhancements

Building a “headless RCP” application with Tycho

Recently I got the request to create a “headless RCP” application from an existing Eclipse project. I read several posts on that and saw that a lot of people use the term “headless RCP”. First of all I have to say that “headless RCP” is a contradiction in itself. RCP means Rich Client Platform, and a rich client is typically characterized by having a graphical user interface. A headless application means an application with a command line interface, so the characteristic here is to have no graphical user interface. When people talk about a “headless RCP” application, they mean a command line application based on code that was created for an RCP application, but without the GUI. And that actually means they want to create an OSGi application based on Equinox.

For such a scenario I would typically recommend using bndtools, or at least plain Java with the bnd Maven plugins. But there are scenarios where this is not possible, e.g. if your whole project is an Eclipse RCP project, which currently forces you to use the PDE tooling, and you only want to extract some parts/services to a command line tool. Well, one could also suggest moving those parts to a separate workspace where bndtools is used and consuming them in the RCP workspace. But that increases the complexity of the development environment, as you need to deal with two different toolings for one project.

In this blog post I will explain how to create a headless product out of an Eclipse RCP project (PDE based) and how to build it automatically with Tycho. And I will also show a nice benefit provided by the bnd Maven plugins on top of it.

Let’s start with the basics. A headless application provides functionality via the command line. In an OSGi application that means having some services that can be triggered on the command line. If your functionality is based on Eclipse Extension Points, I suggest converting them to OSGi Declarative Services. This has several benefits; one of them is that the creation of a headless application is much easier. That said, this tutorial is based on using OSGi Declarative Services. If you are not yet familiar with that, give my Getting Started with OSGi Declarative Services a try. I will use the basic bundles from the PDE variant for the headless product here.

Product Definition

For the automated product build with Tycho we need a product definition. Of course with some special configuration parameters as we actually do not have a product in Eclipse RCP terms.

  • Create the product project
    • Main Menu → File → New → Project → General → Project
    • Set name to org.fipro.headless.product
    • Ensure that the project is created in the same location as the other projects.
    • Click Finish
  • Create a new product configuration
    • Right click on project → New → Product Configuration
    • Set the filename to org.fipro.headless.product
    • Select Create configuration file with basic settings
    • Click Finish
  • Configure the product
    • Overview tab
      • ID = org.fipro.headless
      • Version = 1.0.0.qualifier
      • Uncheck The product includes native launcher artifacts
      • Leave Product and Application empty
        Product and Application are used in RCP products, and therefore not needed for a headless OSGi command line application.
      • This product configuration is based on: plug-ins
        Note:
        You can also create a product configuration that is based on features. For simplicity we use the simple plug-ins variant.
    • Contents tab
      • Add the following bundles/plug-ins:
      • Custom functionality
        • org.fipro.inverter.api
        • org.fipro.inverter.command
        • org.fipro.inverter.provider
      • OSGi console
        • org.apache.felix.gogo.command
        • org.apache.felix.gogo.runtime
        • org.apache.felix.gogo.shell
        • org.eclipse.equinox.console
      • Equinox OSGi Framework with Felix SCR for Declarative Services support
        • org.eclipse.osgi
        • org.eclipse.osgi.services
        • org.eclipse.osgi.util
        • org.apache.felix.scr
    • Configuration tab
      • Start Levels
        • org.apache.felix.scr, StartLevel = 0, Auto-Start = true
          This is necessary because Equinox has the policy of not automatically activating any bundle. Bundles are only activated if a class is directly requested from them. But the Service Component Runtime is never required directly, so without this setting org.apache.felix.scr would never get activated.
      • Properties
        • eclipse.ignoreApp = true
          Tells Equinox to skip trying to start an Eclipse application.
        • osgi.noShutdown = true
          The OSGi framework will not be shut down after the Eclipse application has ended. You can find further information about these properties in the Equinox Framework QuickStart Guide and the Eclipse Platform Help.

Note:
If you want to launch the application from within the IDE via the Overview tab → Launch an Eclipse application, you need to provide the parameters as launching arguments instead of configuration properties. But running a command line application from within the IDE doesn’t make much sense: either you need to pass the command line parameters to process, or you activate the OSGi console to be able to interact with the application. This should not be part of the final build result. But to verify the setup in advance you can add the following on the Launching tab:

  • Program Arguments
    • -console
  • VM Arguments
    • -Declipse.ignoreApp=true -Dosgi.noShutdown=true

When adding the parameters in the Launching tab instead of the Configuration tab, the configurations are added to the eclipse.ini in the root folder, not to the config.ini in the configuration folder. When starting the application via jar, the eclipse.ini in the root folder is not inspected.
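Based on the settings above, the relevant part of the generated configuration/config.ini should look similar to this (a sketch; the generated file additionally contains osgi.bundles and other framework properties):

```
# framework properties taken from the Configuration tab of the product
eclipse.ignoreApp=true
osgi.noShutdown=true
```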

Tycho build

To build the product with Tycho, you don’t need any specific configuration. You simply build it by using the tycho-p2-repository-plugin and the tycho-p2-director-plugin, like you do with an Eclipse product. This is for example explained here.

Create a pom.xml in the org.fipro.headless.product project.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.fipro</groupId>
    <artifactId>org.fipro.parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>

  <groupId>org.fipro</groupId>
  <artifactId>org.fipro.headless</artifactId>
  <packaging>eclipse-repository</packaging>
  <version>1.0.0-SNAPSHOT</version>

  <build>
    <plugins>
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-p2-repository-plugin</artifactId>
        <version>${tycho.version}</version>
        <configuration>
          <includeAllDependencies>true</includeAllDependencies>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-p2-director-plugin</artifactId>
        <version>${tycho.version}</version>
        <executions>
          <execution>
            <id>materialize-products</id>
            <goals>
              <goal>materialize-products</goal>
            </goals>
          </execution>
          <execution>
            <id>archive-products</id>
            <goals>
              <goal>archive-products</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

For more information about building with Tycho, have a look at the vogella Tycho tutorial.

Running the build via mvn clean verify should create the resulting product in the folder org.fipro.headless.product/target/products. The archive file org.fipro.headless-1.0.0-SNAPSHOT.zip contains the product artifacts and the p2 related artifacts created by the build process. For the headless application only the folders configuration and plugins are relevant: configuration contains the config.ini with the necessary configuration attributes, and in the plugins folder you find all bundles that are part of the product.

Since we did not add a native launcher, the application can be started with the java command. Additionally we need to open the OSGi console, as we have no starter yet. From the parent folder above configuration and plugins execute the following command to start the application with a console (update the filename of org.eclipse.osgi bundle as this changes between Eclipse versions):

java -jar plugins/org.eclipse.osgi_3.15.100.v20191114-1701.jar -configuration ./configuration -console

The -configuration parameter tells the framework where it should look for the config.ini, the -console parameter opens the OSGi console.

You can now interact with the OSGi console and even start the “invert” command implemented in the Getting Started tutorial.

Native launcher

While the variant without a native launcher is easier to exchange between operating systems, it is not very comfortable to start from a user’s perspective. Of course you can also add a batch file for simplification, but Equinox also provides native launchers. So we will add native launchers to our product. This is fairly easy, because you only need to check The product includes native launcher artifacts on the Overview tab of the product file and execute the build again.

The resulting product now also contains the following files:

  • eclipse.exe
    Eclipse executable.
  • eclipse.ini
    Configuration pointing to the launcher artifacts.
  • eclipsec.exe
    Console optimized executable.
  • org.eclipse.equinox.launcher artifacts in the plugins directory
    Native launcher artifacts.

You can find some more information on those files in the FAQ.

To start the application you can use the added executables.

eclipse.exe -console

or

eclipsec.exe -console

The most obvious difference is that eclipse.exe opens a new shell, while eclipsec.exe stays in the same shell when opening the OSGi console. The FAQ says “On Windows, the eclipsec.exe console executable can be used for improved command line behavior.”.

Note:
You can change the name of the eclipse.exe file in the product configuration on the Launching tab by setting a Launcher Name. But this will not affect the eclipsec.exe.

Command line parameter

Starting a command line tool with an interactive OSGi console is typically not what people want. This is nice for debugging purposes, but not for productive use. In productive use you usually want to pass some parameters on the command line and then process the inputs. In plain Java you take the arguments from the main() method and process them. But in an OSGi application you do not write a main() method; the framework launcher has the main() method. To start your application directly you therefore need to create some kind of starter that can inspect the launch arguments.
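For comparison, in a plain Java command line tool the same processing would live directly in main() (a trivial sketch; the string reversal is a stand-in for the StringInverter service from the Getting Started tutorial):

```java
public class PlainMain {

    // stand-in for the StringInverter service: simply reverse the input
    static String invert(String value) {
        return new StringBuilder(value).reverse().toString();
    }

    public static void main(String[] args) {
        // in plain Java the launcher arguments are simply the method parameters
        for (String arg : args) {
            System.out.println(invert(arg));
        }
    }
}
```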

With OSGi Declarative Services the starter is an immediate component. That is a component that gets activated directly once all references are satisfied. To be able to inspect the command line parameters in an OSGi application, you need to know how the launcher that started it provides this information. The Equinox launcher for example provides this information via org.eclipse.osgi.service.environment.EnvironmentInfo which is provided as a service. That means you can add a @Reference for EnvironmentInfo in your declarative service, and once it is available the immediate component gets activated and the application starts.

Create new project org.fipro.headless.app

  • Create the app project
    • Main Menu → File → New → Plug-in Project
    • Set name to org.fipro.headless.app
  • Create a package via right-click on src
    • Set name to org.fipro.headless.app
  • Open the MANIFEST.MF file
    • Add the following to Imported Packages
      • org.osgi.service.component.annotations
        Remember to mark it as optional to avoid runtime dependencies to the annotations.
      • org.eclipse.osgi.service.environment
        To be able to consume the Equinox EnvironmentInfo.
      • org.fipro.inverter
        To be able to consume the functional services.
  • Add org.fipro.headless.app to the Contents of the product definition.
  • Add org.fipro.headless.app to the modules section of the pom.xml.

Create an immediate component with the name EquinoxStarter.

@Component(immediate = true)
public class EquinoxStarter {

    @Reference
    EnvironmentInfo environmentInfo;

    @Reference
    StringInverter inverter;

    @Activate
    void activate() {
        for (String arg : this.environmentInfo.getNonFrameworkArgs()) {
            System.out.println(inverter.invert(arg));
        }
    }
}

With the simple version above you will notice some issues if you do not specify the -console parameter:

  1. If you start the application via eclipse.exe with an additional parameter, the code will be executed, but you will not see any output.
  2. If you start the application via eclipsec.exe with an additional parameter, you will see an output but the application will not finish.

If you pass the -console parameter, the output will be seen in both cases and the OSGi console opens immediately afterwards.

First let’s have a look at why the application seems to hang when started via eclipsec.exe. The reason is simply that we configured osgi.noShutdown=true, which means the OSGi framework will not be shut down after the Eclipse application has ended. So the simple solution would be to specify osgi.noShutdown=false. The downside is that then using the -console parameter will not keep the OSGi console open, but close the application immediately. Also using eclipse.exe with the -console parameter will not keep the OSGi console open. So the configuration parameter osgi.noShutdown should be set depending on whether an interactive mode via the OSGi console should be supported or not.

If both variants should be supported, osgi.noShutdown should be set to true and a check for the -console parameter needs to be added in code. If that parameter is not set, close the application via System.exit(0);.

-console is an Equinox framework parameter, so the check and the handling look like this:

boolean isInteractive = Arrays
    .stream(environmentInfo.getFrameworkArgs())
    .anyMatch(arg -> "-console".equals(arg));

if (!isInteractive) {
    System.exit(0);
}

With the additional handling above, the application will stay open with an active OSGi console if -console is set, and it will close immediately if -console is not set.

The other issue we faced was that we did not see any output when using eclipse.exe. The reason is that the output is not sent to the executing command shell. And without specifying an additional parameter, a new command shell is not even opened. One option to handle this is to open the command shell and keep it open until a user input closes it again. The framework parameter for this is -consoleLog. And the check could be as simple as the following:

boolean showConsoleLog = Arrays
    .stream(environmentInfo.getFrameworkArgs())
    .anyMatch(arg -> "-consoleLog".equals(arg));

if (showConsoleLog) {
    System.out.println();
    System.out.println("***** Press Enter to exit *****");
    // just wait for Enter
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(System.in))) {
        reader.readLine();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

With the -consoleLog handling, the following call will open a new shell that shows the result and waits for the user to press ENTER to close the shell and finish the application.

eclipse.exe test -consoleLog

bnd export

Although these results are already pretty nice, it can get even better. With bnd you are able to create a single executable JAR that starts the OSGi application. This makes it easier to distribute the command line application. Calling the application is similarly easy compared with the native executable, and since nothing native is included, the JAR is easily exchangeable between operating systems.

Using the bnd-export-maven-plugin you can achieve the same result even with a PDE-Tycho based build. But of course you need to prepare things to make it work.

The first thing to know is that the bnd-export-maven-plugin needs a bndrun file as input. So now create a file headless.bndrun in the org.fipro.headless.product project that looks similar to this:

-runee: JavaSE-1.8
-runfw: org.eclipse.osgi
-runsystemcapabilities: ${native_capability}

-resolve.effective: active;skip:="osgi.service"

-runrequires: \
osgi.identity;filter:='(osgi.identity=org.fipro.headless.app)'

-runbundles: \
org.fipro.inverter.api,\
org.fipro.inverter.command,\
org.fipro.inverter.provider,\
org.fipro.headless.app,\
org.apache.felix.gogo.command,\
org.apache.felix.gogo.runtime,\
org.apache.felix.gogo.shell,\
org.eclipse.equinox.console,\
org.eclipse.osgi.services,\
org.eclipse.osgi.util,\
org.apache.felix.scr

-runproperties: \
osgi.console=

  • As we want our Eclipse Equinox based application to be bundled as a single executable jar, we specify Equinox as our OSGi framework via -runfw: org.eclipse.osgi.
  • Via -runbundles we specify the bundles that should be added to the runtime.
  • The setting under -runproperties is needed to handle the Equinox OSGi console correctly.

Unfortunately there is no automatic way to transform a PDE product definition into a bndrun file, at least I am not aware of one. And yes, there is some duplication involved here, but compared to the result it is acceptable IMHO. Anyhow, with some experience in scripting it should be easy to generate the bndrun file out of the product definition at build time.

Now enable the bnd-export-maven-plugin for the product build in the pom.xml of org.fipro.headless.product. Note that even with a pomless build it is possible to specify a dedicated pom.xml in a project if something in addition to the default build is needed (which is the case here).

<plugin>
  <groupId>biz.aQute.bnd</groupId>
  <artifactId>bnd-export-maven-plugin</artifactId>
  <version>4.3.1</version>
  <configuration>
    <failOnChanges>false</failOnChanges>
    <bndruns>
      <bndrun>headless.bndrun</bndrun>
    </bndruns>
    <bundles>
      <include>${project.build.directory}/repository/plugins/*</include>
    </bundles>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>export</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The bndruns configuration property points to the headless.bndrun we created before. In the bundles configuration property we point to the build result of the tycho-p2-repository-plugin to build up the implicit repository. This way we are sure that all required bundles are available without the need to specify any additional repository.

After a new build you will find the file headless.jar in org.fipro.headless.product/target. You can start the command line application via

java -jar headless.jar

You will notice that the OSGi console is started, regardless of which parameters are added to the command line. And the command line parameters are not evaluated, because the application was not started by the Equinox launcher but by the bnd launcher. Therefore the EnvironmentInfo is not initialized correctly.

Unfortunately Equinox publishes the EnvironmentInfo as a service even if it is not initialized. Therefore the EquinoxStarter will be satisfied and activated, but we get a NullPointerException (that is silently caught) when trying to access the framework and/or non-framework args. Following good coding standards, the EquinoxStarter needs to check whether the EnvironmentInfo is correctly initialized, and otherwise do nothing. The code could look similar to this snippet:

@Component(immediate = true)
public class EquinoxStarter {

  @Reference
  EnvironmentInfo environmentInfo;

  @Reference
  StringInverter inverter;

  @Activate
  void activate() {
    if (environmentInfo.getFrameworkArgs() != null
      && environmentInfo.getNonFrameworkArgs() != null) {

      // check if -console was provided as argument
      boolean isInteractive = Arrays
        .stream(environmentInfo.getFrameworkArgs())
        .anyMatch(arg -> "-console".equals(arg));
      // check if -consoleLog was provided as argument
      boolean showConsoleLog = Arrays
        .stream(environmentInfo.getFrameworkArgs())
        .anyMatch(arg -> "-consoleLog".equals(arg));

      for (String arg : this.environmentInfo.getNonFrameworkArgs()) {
        System.out.println(inverter.invert(arg));
      }

      // If the -consoleLog parameter is used, a separate shell is opened. 
      // To avoid that it is closed immediately a simple input is requested to
      // close, so a user can inspect the outputs.
      if (showConsoleLog) {
        System.out.println();
        System.out.println("***** Press Enter to exit *****");
        // just wait for Enter
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(System.in))) {
          reader.readLine();
        } catch (IOException e) {
          e.printStackTrace();
        }
      }

      if (!isInteractive) {
        // shutdown the application if no console was opened
        // only needed if osgi.noShutdown=true is configured
        System.exit(0);
      }
    }
  }
}

This way we avoid that the EquinoxStarter executes any code. So apart from component instance creation and destruction, nothing happens.

To handle launching via bnd launcher, we need another starter. We create a new immediate component named BndStarter.

@Component(immediate = true)
public class BndStarter {
    ...
}

The bnd launcher provides the command line parameters in a different way. Instead of EnvironmentInfo you need to get the aQute.launcher.Launcher injected together with its service properties. Inside the service properties map there is an entry launcher.arguments whose value is a String[]. To avoid a dependency on aQute classes in our code, we reference Object and use a target filter for launcher.arguments, which works fine as the Launcher is also published as Object to the ServiceRegistry.

String[] launcherArgs;

@Reference(target = "(launcher.arguments=*)")
void setLauncherArguments(Object object, Map<String, Object> map) {
    this.launcherArgs = (String[]) map.get("launcher.arguments");
}

Although not strictly necessary, we add some code to align the behavior when started via the bnd launcher with the behavior when started via the Equinox launcher. That means we check for the -console parameter and stop the application if that parameter is missing. The check for -consoleLog would also not be needed, as the bnd launcher stays in the same command shell like eclipsec.exe does, but we remove it from the arguments anyway, just in case someone tries it out.

The complete code of BndStarter would then look like this:

@Component(immediate = true)
public class BndStarter {

  String[] launcherArgs;

  @Reference(target = "(launcher.arguments=*)")
  void setLauncherArguments(Object object, Map<String, Object> map) {
    this.launcherArgs = (String[]) map.get("launcher.arguments");
  }

  @Reference
  StringInverter inverter;

  @Activate
  void activate() {
    boolean isInteractive = Arrays
      .stream(launcherArgs)
      .anyMatch(arg -> "-console".equals(arg));

    // clear launcher arguments from possible framework parameter
    String[] args = Arrays
      .stream(launcherArgs)
      .filter(arg -> !"-console".equals(arg) && !"-consoleLog".equals(arg))
      .toArray(String[]::new);

    for (String arg : args) {
      System.out.println(inverter.invert(arg));
    }

    if (!isInteractive) {
      // shutdown the application if no console was opened
      // only needed if osgi.noShutdown=true is configured
      System.exit(0);
    }
  }
}

After building again, the application will directly close without the -console parameter. And if -console is used, the OSGi console stays open.

The above handling was simply done to have something similar to the Eclipse product build. As the Equinox launcher does not automatically start all bundles, the -console parameter triggers a process that starts the necessary Gogo Shell bundles. The bnd launcher on the other hand always starts all installed bundles. The OSGi console therefore always comes up and can be seen in the command shell even before the BndStarter kills it. If that behavior does not satisfy your needs, you could also easily build two application variants: one with a console and one without. You simply need to create another bndrun file that contains neither the console bundles nor the console configuration properties.

-runee: JavaSE-1.8
-runfw: org.eclipse.osgi
-runsystemcapabilities: ${native_capability}

-resolve.effective: active;skip:="osgi.service"

-runrequires: \
    osgi.identity;filter:='(osgi.identity=org.fipro.headless.app)'

-runbundles: \
    org.fipro.inverter.api,\
    org.fipro.inverter.provider,\
    org.fipro.headless.app,\
    org.eclipse.osgi.services,\
    org.eclipse.osgi.util,\
    org.apache.felix.scr

If you add that additional bndrun file to the bndruns section of the bnd-export-maven-plugin the build will create two exports.

<plugin>
  <groupId>biz.aQute.bnd</groupId>
  <artifactId>bnd-export-maven-plugin</artifactId>
  <version>4.3.1</version>
  <configuration>
    <failOnChanges>false</failOnChanges>
    <bndruns>
      <bndrun>headless.bndrun</bndrun>
      <bndrun>headless_console.bndrun</bndrun> 
    </bndruns>
    <bundles>
      <include>target/repository/plugins/*</include>
    </bundles>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>export</goal>
      </goals>
    </execution>
  </executions>
</plugin>

To check if the application should be stopped or not, you then need to check for the system property osgi.console.

boolean hasConsole = System.getProperty("osgi.console") != null;

If a console is configured, the application should not be stopped. If there is no configuration for osgi.console, call System.exit(0).
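As a minimal sketch, this exit decision could be isolated as follows (the class and method names are made up for illustration and are not part of the tutorial sources):

```java
// Sketch: decide whether the application should stay alive based on the
// osgi.console system property. An empty string means "use the current
// shell", so any non-null value indicates that a console is configured.
public class ConsoleCheck {

    /** @return true if an OSGi console is configured and the app should keep running */
    static boolean keepRunning(String osgiConsoleProperty) {
        return osgiConsoleProperty != null;
    }

    public static void main(String[] args) {
        if (!keepRunning(System.getProperty("osgi.console"))) {
            // only needed if osgi.noShutdown=true is configured
            System.exit(0);
        }
    }
}
```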

This tutorial showed a pretty simple example to explain the basic concepts on how to build a command line application from an Eclipse project. A real-world example can be seen in the APP4MC Model Migration addon, where the above approach is used to create a standalone model migration command line tool. This tool can be used in other environments like in build servers for example, while the integration in the Eclipse IDE remains in the same project structure.

The sources of this tutorial are available on GitHub.

If you are interested in finding out more about the Maven plugins from bnd you might want to watch this talk from EclipseCon Europe 2019. As you can see they are helpful in several situations when building OSGi applications.

Update: configurable console with bnd launcher

I tried to make the executable jar behavior similar to the Equinox one. That means I wanted to create an application where I am able to configure via command line parameter whether the console should be activated or not. Achieving this took me quite a while, as I needed to find out what causes the console to start with Equinox. The important thing is that the property osgi.console needs to be set to an empty String. The value is actually the port to connect to, and with that value set to an empty String, the current shell is used. In the bndrun files this property is set via -runproperties. If you remove it from the bndrun file, the console never starts, even if the property is passed as a system property on the command line.

Section 19.4.6 in Launching | bnd explains why. It simply says that you are able to override a launcher property via system property, but you cannot add a launcher property via system property. Knowing this, I solved the issue by setting the osgi.console property to an invalid value in the -runproperties section.

-runproperties: \
    osgi.console=xxx

This way the application can be started with or without a console, depending on whether osgi.console is provided as a system property via the command line or not.

Of course the check for the -console parameter should be removed from the BndStarter to avoid that users need to provide both arguments to open a console!

I added the headless_configurable.bndrun file to the repository to show this:

Launch without console:

java -jar headless_configurable.jar Test

Launch with console:

java -jar -Dosgi.console= headless_configurable.jar
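In case you want to recreate it yourself, such a configurable bndrun could look similar to the following sketch (derived from the headless.bndrun shown above with the invalid osgi.console default; the actual file in the repository may differ):

```properties
-runee: JavaSE-1.8
-runfw: org.eclipse.osgi
-runsystemcapabilities: ${native_capability}

-resolve.effective: active;skip:="osgi.service"

-runrequires: \
    osgi.identity;filter:='(osgi.identity=org.fipro.headless.app)'

-runbundles: \
    org.fipro.inverter.api,\
    org.fipro.inverter.command,\
    org.fipro.inverter.provider,\
    org.fipro.headless.app,\
    org.apache.felix.gogo.command,\
    org.apache.felix.gogo.runtime,\
    org.apache.felix.gogo.shell,\
    org.eclipse.equinox.console,\
    org.eclipse.osgi.services,\
    org.eclipse.osgi.util,\
    org.apache.felix.scr

# invalid default so the console only starts if -Dosgi.console= is
# passed on the command line (overriding a launcher property works,
# adding one does not)
-runproperties: \
    osgi.console=xxx
```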

Update: bnd-indexer-maven-plugin

I got this pull request that showed an interesting extension to my approach. It uses the bnd-indexer-maven-plugin to create an index that can then be used in the bndrun files to make them editable with Bndtools.

<plugin>
  <groupId>biz.aQute.bnd</groupId>
  <artifactId>bnd-indexer-maven-plugin</artifactId>
  <version>4.3.1</version>
  <configuration>
    <inputDir>${project.build.directory}/repository/plugins/</inputDir>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <id>index</id>
      <goals>
        <goal>local-index</goal>
      </goals>
    </execution>
  </executions>
</plugin>

To make use of this you first need to execute the build without the bnd-export-maven-plugin so the index is created out of the product build. After that you can create or edit a bndrun file by adding these lines on top:

index: target/index.xml;name="org.fipro.headless.product"

-standalone: ${index}

I am personally not a big fan of such dependencies in the build timeline. But it is surely helpful for creating the bndrun file.

Posted in Dirk Fauth, Eclipse, Java, OSGi | Comments Off on Building a “headless RCP” application with Tycho

POM-less Tycho enhanced

With Tycho 0.24 POM-less Tycho builds were introduced. This Maven extension was a big step forward with regards to build configuration, as plugin, feature and plugin test projects don’t need a dedicated pom.xml file anymore. Therefore there are fewer pom.xml files that need to be updated with every version increase. Instead these pom.xml files are generated out of the metadata provided via the MANIFEST.MF file at build time.

Although the initial implementation was already a big improvement, it had some flaws:

  • Only plugins, features and test plugins were supported
  • target definition, update site and product builds still needed a dedicated pom.xml file
  • test plugins/bundles needed the suffix .tests
  • in structured environments “POM-less parents” or “connector POMs” needed to be added manually

With Tycho 1.5 these flaws are finally fixed to further improve POM-less Tycho builds. To make use of those enhancements you need to follow these steps:

  1. Update the version of the tycho-pomless extension in .mvn/extensions.xml to 1.5.1
  2. Update the tycho version in the parent pom.xml to 1.5.1 (ideally only in the properties section to avoid changes in multiple locations)
  3. Make the parent pom.xml file resolvable by sub-modules.
    This can be done the following ways:

    1. Place the parent pom.xml file in the root folder of the project structure (default)
    2. Configure the parent POM location globally via the system property tycho.pomless.parent which defaults to “..”.
    3. Override the global default by defining tycho.pomless.parent in the build.properties of each individual project.
    4. In pom.xml files that are not generated by the tycho-pomless extension but managed manually (e.g. because of additional build plugins), configure the relativePath for the parent like shown in the following example:
      <parent>
          <groupId>my.group.id</groupId>
          <artifactId>parent</artifactId>
          <version>1.0.0-SNAPSHOT</version>
          <relativePath>../../pom.xml</relativePath>
      </parent>
  4. Delete the pom.xml in the target definition project (if nothing special is configured in there).
  5. Delete the pom.xml in the update site project (if nothing special is configured in there).
  6. Delete the pom.xml in the product project (if nothing special is configured in there).
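As a small sketch, the build.properties based configuration mentioned in step 3 could look like this (the relative path is just an example and depends on your project layout):

```properties
# build.properties (sketch): point the pomless extension to a parent POM
# two levels up instead of the default ".."
tycho.pomless.parent = ../..
```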

If you have your project structure setup similar to the structured environments, the following steps need to be performed additionally in order to make POM-less Tycho work correctly:

  1. Change the modules section of the parent pom.xml to only point to the structure folders and not every single module:
    <modules>
        <module>bundles</module>
        <module>tests</module>
        <module>features</module>
        <module>releng</module>
    </modules>

    This will automatically generate the “connector POMs” that point to the parent pom.xml in the module folders. The name of these generated files is .polyglot.pom.tycho and they are removed once the build is finished. The generated “connector POM” files can even be referenced in the relativePath.

    <parent>
        <groupId>my.group.id</groupId>
        <artifactId>bundles</artifactId>
        <version>1.0.0-SNAPSHOT</version>
        <relativePath>../pom.tycho</relativePath>
    </parent>

    The generation of the “connector POMs” has the advantage that new modules can be simply created and added to the build, without the need to update the parent pom.xml modules section. On the other hand it is not possible to skip single modules in the build by removing them from the modules section.

    Note:
    Additionally a file named pom.tycho is generated in each sub-folder that lists the modules detected by the automatic module detection. Looking into the sources, it seems the idea of this file is to separate “connector POM” and module collection, to be able to manually list the modules that should be built. That file should be deleted if it was generated, but if it already existed it should stay untouched.
    While testing in a Windows environment I noticed that sometimes the pom.tycho files remain as leftovers in the sub-folders even though they were generated. This seems to be a bug and I reported it here. In case you see such leftovers that are not intended, make sure you delete them and do not commit them to the repository if you like the generation approach. Otherwise the automatic module detection is not executed and therefore new modules are not added automatically.

  2. Ensure that all modules are placed in a folder structure with the following folder names:
    1. bundles
    2. plugins
    3. tests
    4. features
    5. sites
    6. products
    7. releng

    Note:
    If you have additional folders or folders with different names, they are not taken up for automatic “connector POM” generation. To support additional folder names you can specify the system property tycho.pomless.aggregator.names, where the value is a comma separated list of folder names.
    For example, let’s assume that instead of a releng folder the build related modules are placed in a folder named build. So instead of releng you would point to build in the modules section. Starting the build now leads to an error saying that there is no pom.xml found in the build folder. Starting the build the following way solves that issue:

    mvn -Dtycho.pomless.aggregator.names=bundles,plugins,tests,features,sites,products,build clean verify

With these enhancements it is now possible to set up a Maven Tycho build for PDE based Eclipse RCP projects with a single pom.xml file.

Note:
The Maven versions 3.6.1 and 3.6.2 are known to fail with the pomless extension. There are issues reported here and here. Both are already fixed so by using Maven 3.6.3 the issues should not be seen anymore.

I would also like to mention that these enhancements were contributed by Christoph Läubrich, who wasn’t a committer in the Tycho project at that time. Another good example for the power of open source! So thanks for the contributions to make the POM-less Tycho build more convenient for all of us.

Posted in Dirk Fauth, Eclipse, OSGi | 1 Comment

Add JavaFX controls to a SWT Eclipse 4 application – Eclipse RCP Cookbook UPDATE

I wrote about this topic already a while ago on another blog. But since then quite a few things have changed and I wanted to publish an updated version of that blog post. For various reasons I decided to publish it here ;-).


As explained in JavaFX Interoperability with SWT it is possible to embed JavaFX controls in a SWT UI. This is useful for example if you want to softly migrate big applications from SWT to JavaFX or if you need to add animations or special JavaFX controls without completely migrating your application.

The following recipe will show how to integrate JavaFX with an Eclipse 4 application. It will cover the usage of Java 8 with integrated JavaFX, and Java 11 with separate JavaFX 11. The steps for Java 11 should also apply for newer versions of Java and JavaFX.

Cookware

For Java 11 with separate JavaFX 11 the following preparations need to be done:

Ingredients

This recipe uses the Eclipse RCP Cookbook – Basic Recipe. To get started fast with this recipe, we have prepared the basic recipe for you on GitHub.

To use the prepared basic recipe to follow this tutorial, import the project by cloning the Git repository:

  • File → Import → Git → Projects from Git
  • Click Next
  • Select Clone URI
  • Enter URI¬†https://github.com/fipro78/e4-cookbook-basic-recipe.git
  • Click Next
  • Select the master branch
  • Click Next
  • Choose a directory where you want to store the checked out sources
  • Click Next
  • Select Import existing projects
  • Click Next
  • Click Finish

Preparation

Step 1: Update the Target Platform

  • Open the target definition org.fipro.eclipse.tutorial.target.target in the project org.fipro.eclipse.tutorial.target
  • Add a new Software Site by clicking Add… in the Locations section
    • Select Software Site
    • Software Site for the e(fx)clipse 3.6.0 release build
      http://download.eclipse.org/efxclipse/runtime-released/3.6.0/site
    • Expand FX Target and check Minimal JavaFX OSGi integration bundles
      (Runtime extension to add JavaFX support)
  • Optional:
    If you instead use the RCP e4 Target Platform Feature, which includes additional e(fx)clipse features, you additionally need to add p2 and EMF Edit to the target definition because of transitive dependencies

    • Select the update site http://download.eclipse.org/releases/2019-06
    • Click Edit
    • Check Equinox p2, headless functionalities
    • Check EMF Edit
  • Click Finish
  • Activate the target platform by clicking Set as Target Platform in the upper right corner of the Target Definition Editor

Java 11

If you use Java 11 or greater you need to add an additional update site as explained here.

  • Add a new Software Site by clicking Add… in the Locations section
    • Select Software Site
    • http://downloads.efxclipse.bestsolution.at/p2-repos/openjfx-11/repository/
    • Disable Group by Category as the items are not categorized and check all available items
      • openjfx.media.feature
      • openjfx.standard.feature
      • openjfx.swing.feature
      • openjfx.swt.feature
      • openjfx.web.feature

Note:
If you are using the Target Definition DSL, the TPD file should look similar to the following snippet which includes the Minimal JavaFX OSGi integration bundles and the RCP e4 Target Platform Feature:

target "E4 Cookbook Target Platform"

with source requirements

location "http://download.eclipse.org/releases/2019-06" {
    org.eclipse.equinox.executable.feature.group
    org.eclipse.sdk.feature.group
    org.eclipse.equinox.p2.core.feature.feature.group
    org.eclipse.emf.edit.feature.group
}

location "http://download.eclipse.org/efxclipse/runtime-released/3.6.0/site" {
    org.eclipse.fx.runtime.min.feature.feature.group
    org.eclipse.fx.target.rcp4.feature.feature.group
}

// only needed for Java 11 with OpenJFX 11
location "http://downloads.efxclipse.bestsolution.at/p2-repos/openjfx-11/repository/" {
    openjfx.media.feature.feature.group
    openjfx.standard.feature.feature.group
    openjfx.swing.feature.feature.group
    openjfx.swt.feature.feature.group
    openjfx.web.feature.feature.group
}

Step 2: Update the Plug-in project

  • Open the InverterPart in the project org.fipro.eclipse.tutorial.inverter
    • Add a javafx.embed.swt.FXCanvas to the parent Composite in InverterPart#postConstruct(Composite)
    • Create an instance of javafx.scene.layout.BorderPane
    • Create a javafx.scene.Scene instance that takes the created BorderPane as root node and sets the background color to be the same as the background color of the parent Shell
    • Set the created javafx.scene.Scene to the FXCanvas
// add FXCanvas for adding JavaFX controls to the UI
FXCanvas canvas = new FXCanvas(parent, SWT.NONE);
GridDataFactory
    .fillDefaults()
    .grab(true, true)
    .span(3, 1)
    .applyTo(canvas);

// create the root layout pane
BorderPane layout = new BorderPane();

// create a Scene instance
// set the layout container as root
// set the background fill to the background color of the shell
Scene scene = new Scene(layout, Color.rgb(
    parent.getShell().getBackground().getRed(),
    parent.getShell().getBackground().getGreen(),
    parent.getShell().getBackground().getBlue()));

// set the Scene to the FXCanvas
canvas.setScene(scene);

Now JavaFX controls can be added to the scene graph via the BorderPane instance.

  • Remove the output control of type org.eclipse.swt.widgets.Text
  • Create an output control of type javafx.scene.control.Label
  • Add the created javafx.scene.control.Label to the center of the BorderPane
javafx.scene.control.Label output = new javafx.scene.control.Label();
layout.setCenter(output);

Add some animations to see some more JavaFX features.

  • Create a javafx.animation.RotateTransition that rotates the output label.
  • Create a javafx.animation.ScaleTransition that scales the output label.
  • Create a javafx.animation.ParallelTransition that combines the RotateTransition and the ScaleTransition. This way both transitions are executed in parallel.
  • Add starting the animation in the SelectionAdapter and the KeyAdapter that are executed for reverting a String.
RotateTransition rotateTransition = 
    new RotateTransition(Duration.seconds(1), output);
rotateTransition.setByAngle(360);

ScaleTransition scaleTransition = 
    new ScaleTransition(Duration.seconds(1), output);
scaleTransition.setFromX(1.0);
scaleTransition.setFromY(1.0);
scaleTransition.setToX(4.0);
scaleTransition.setToY(4.0);

ParallelTransition parallelTransition = 
    new ParallelTransition(rotateTransition, scaleTransition);

button.addSelectionListener(new SelectionAdapter() {
    @Override
    public void widgetSelected(SelectionEvent e) {
        output.setText(StringInverter.invert(input.getText()));
        parallelTransition.play();
    }
});

Step 3: Update the Product Configuration

  • Open the file org.fipro.eclipse.tutorial.app.product in the project org.fipro.eclipse.tutorial.product
  • Switch to the Contents tab and add additional features
    • Option A: Use the Minimal JavaFX OSGi integration bundles
      • org.eclipse.fx.runtime.min.feature
    • Option B: Use the RCP e4 Target Platform Feature
      • org.eclipse.fx.target.rcp4.feature
      • org.eclipse.equinox.p2.core.feature
      • org.eclipse.ecf.core.feature
      • org.eclipse.ecf.filetransfer.feature
      • org.eclipse.emf.edit
  • Switch to the Launching tab
    • Add -Dosgi.framework.extensions=org.eclipse.fx.osgi to the VM Arguments
      (adapter hook to get JavaFX-SWT integration on the classpath)

Java 11:

You also need to add the openjfx features to bundle it with your application:

  • openjfx.media.feature
  • openjfx.standard.feature
  • openjfx.swing.feature
  • openjfx.swt.feature
  • openjfx.web.feature
  • Start the application from within the IDE
    • Open the Product Configuration in the org.fipro.eclipse.tutorial.product project
    • Select the Overview tab
    • Click¬†Launch an Eclipse Application in the Testing section

Note:
If you have org.eclipse.equinox.p2.reconciler.dropins in the Start Levels of the Configuration tab, you also need to add org.eclipse.equinox.p2.extras.feature in the included features of the Contents tab so the product build succeeds in later stages. I personally tend to remove it as dropins have been deprecated by the p2 team quite a while ago.

The started application should look similar to the following screenshot.

Maven Tycho build

To build a deliverable product it is recommended to use Maven Tycho. Using pomless Tycho you only need a single pom.xml file for the build configuration and not one pom.xml file per project. Since Tycho 1.5 this is even true for the target platform, update site and product projects.

To enable the Maven build with pomless Tycho for the example project you need to create two files:

  1. Create e4-cookbook-basic-recipe/.mvn/extensions.xml to enable the pomless Tycho extension
    <?xml version="1.0" encoding="UTF-8"?>
    <extensions>
        <extension>
            <groupId>org.eclipse.tycho.extras</groupId>
            <artifactId>tycho-pomless</artifactId>
            <version>1.5.1</version>
        </extension>
    </extensions>
  2. Create e4-cookbook-basic-recipe/pom.xml to configure the Maven build
    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
    
      <groupId>org.fipro.eclipse.tutorial</groupId>
      <artifactId>parent</artifactId>
      <version>1.0.0-SNAPSHOT</version>
    
      <packaging>pom</packaging>
    
      <modules>
        <module>org.fipro.eclipse.tutorial.target</module>
        <module>org.fipro.eclipse.tutorial.inverter</module>
        <module>org.fipro.eclipse.tutorial.app</module>
        <module>org.fipro.eclipse.tutorial.feature</module>
        <module>org.fipro.eclipse.tutorial.product</module>
      </modules>
    
      <properties>
        <tycho-version>1.5.1</tycho-version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
      </properties>
    
      <build>
        <plugins>
          <plugin>
            <groupId>org.eclipse.tycho</groupId>
            <artifactId>tycho-maven-plugin</artifactId>
            <version>${tycho-version}</version>
            <extensions>true</extensions>
          </plugin>
          <plugin>
            <groupId>org.eclipse.tycho</groupId>
            <artifactId>target-platform-configuration</artifactId>
            <version>${tycho-version}</version>
            <configuration>
              <target>
                <artifact>
                  <groupId>org.fipro.eclipse.tutorial</groupId>
                  <artifactId>org.fipro.eclipse.tutorial.target</artifactId>
                  <version>1.0.0-SNAPSHOT</version>
                </artifact>
              </target>
              <environments>
                <environment>
                  <os>win32</os>
                  <ws>win32</ws>
                  <arch>x86_64</arch>
                </environment>
                <environment>
                  <os>linux</os>
                  <ws>gtk</ws>
                  <arch>x86_64</arch>
                </environment>
                <environment>
                  <os>macosx</os>
                  <ws>cocoa</ws>
                  <arch>x86_64</arch>
                </environment>
              </environments>
            </configuration>
          </plugin>
        </plugins>
    
        <pluginManagement>
          <plugins>
            <plugin>
              <groupId>org.eclipse.tycho</groupId>
              <artifactId>tycho-p2-director-plugin</artifactId>
              <version>${tycho-version}</version>
            </plugin>
          </plugins>
        </pluginManagement>
      </build>
    </project>

As JavaFX is not on the default classpath, the location of the JavaFX libraries needs to be configured in the Tycho build for compile-time resolution. If the build is executed with Java 8 for Java 8, the following section needs to be added in the pluginManagement section, where the JAVA_HOME environment variable points to your JDK installation:

<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-compiler-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <encoding>UTF-8</encoding>
    <extraClasspathElements>
      <extraClasspathElement>
        <groupId>com.oracle</groupId>
        <artifactId>javafx</artifactId>
        <version>8.0.0-SNAPSHOT</version>
        <systemPath>${JAVA_HOME}/jre/lib/jfxswt.jar</systemPath>
        <scope>system</scope>
      </extraClasspathElement>
    </extraClasspathElements>
  </configuration>
</plugin>

Java 11

With Java 11 it is slightly more complicated. On the one hand the OpenJFX libraries are available via Maven Central and can be added as extra classpath elements via Maven. But the javafx-swt module is not available via Maven Central as reported here. That means for OpenJFX 11 the following section needs to be added in the pluginManagement section, where the JAVAFX_HOME environment variable points to your OpenJFX installation:

<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-compiler-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <encoding>UTF-8</encoding>
    <extraClasspathElements>
      <extraClasspathElement>
        <groupId>org.openjfx</groupId>
        <artifactId>javafx-controls</artifactId>
        <version>11.0.2</version>
      </extraClasspathElement>
      <extraClasspathElement>
        <groupId>org.openjfx</groupId>
        <artifactId>javafx-swt</artifactId>
        <version>11.0.2</version>
        <systemPath>${JAVAFX_HOME}/lib/javafx-swt.jar</systemPath>
        <scope>system</scope>
      </extraClasspathElement>
    </extraClasspathElements>
  </configuration>
</plugin>

Start the build

mvn clean verify

The resulting product variants for each platform are located under
e4-cookbook-basic-recipe/org.fipro.eclipse.tutorial.product/target/products

Note:
If you included the openjfx bundles in your product and start the product with Java 8, the JavaFX 8 classes will be used. If you use Java 11+ to start the application, the classes from the openjfx bundles will be loaded. The e(fx)clipse classloader hook will take care of this.

Currently only OpenJFX 11 is available in the re-bundled form. If you are interested in newer OpenJFX versions, have a look at the openjfx-osgi repository on GitHub or get in contact with BestSolution.at, who created and provide the bundles.

The complete source code of the example can be found on GitHub.

Posted in Dirk Fauth, Eclipse, Java | Comments Off on Add JavaFX controls to a SWT Eclipse 4 application – Eclipse RCP Cookbook UPDATE

OSGi Event Admin – Publish & Subscribe

In this blog post I want to write about the publish & subscribe mechanism in OSGi, provided via the OSGi Event Admin Service. Of course I will show this in combination with OSGi Declarative Services, because this is the technology I currently like very much, as you probably know from my previous blog posts.

I will start with some basics and then show an example as usual. At last I will give some information about how to use the event mechanism in Eclipse RCP development, especially related to the combination between OSGi services and the GUI.

If you want to read further details on the Event Admin Service Specification have a look at the OSGi Spec. In Release 6 it is covered in the Compendium Specification Chapter 113.

Let’s start with the basics. The Event Admin Service is based on the Publish-Subscribe pattern. There is an event publisher and an event consumer. Both do not know each other in any way, which provides a high decoupling. Simplified, you could say the event publisher sends an event to a channel, not knowing if anybody will receive that event. On the other side there is an event consumer ready to receive events, not knowing if there is anybody available for sending events. This simplified view is shown in the following picture:

Technically both sides are using the Event Admin Service in some way. The event publisher uses it directly to send an event to the channel. The event consumer uses it indirectly by registering an event handler to the EventAdmin to receive events. This can be done programmatically. But with OSGi DS it is very easy to register an event handler by using the whiteboard pattern.

Event

An Event object has a topic and some event properties. It is an immutable object to ensure that every handler gets the same object with the same state.

The topic defines the type of the event and is intended to serve as first-level filter for determining which handlers should receive the event. It is a String arranged in a hierarchical namespace. And the recommendation is to use a convention similar to the Java package name scheme by using reverse domain names (fully/qualified/package/ClassName/ACTION). Doing this ensures uniqueness of events. This is of course only a recommendation and you are free to use pseudo class names to make the topic better readable.

Event properties are used to provide additional information about the event. The key is a String and the value can be technically any object. But it is recommended to only use String objects and primitive type wrappers. There are two reasons for this:

  1. Other types cannot be passed to handlers that reside external from the Java VM.
  2. Other classes might be mutable, which means any handler that receives the event could change values. This breaks the immutability rule for events.
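The second point can be illustrated with a plain-Java sketch (not OSGi code; the class and property names are made up for illustration): if a mutable object is used as a property value, every handler receives the same instance, so a mutation by one handler changes the event state that all later handlers observe.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch (not OSGi code) showing why mutable objects are
// problematic as event property values: every handler receives the same
// property instance, so a mutation by one handler is visible to all others.
public class MutablePropertySketch {

    public static void main(String[] args) {
        Map<String, Object> properties = new HashMap<>();
        List<String> targets = new ArrayList<>();
        targets.add("Angelo");
        properties.put("targets", targets);

        // the first handler mutates the property value ...
        ((List<String>) properties.get("targets")).add("Sonny");

        // ... so every later handler sees a changed event state
        System.out.println(properties.get("targets")); // [Angelo, Sonny]
    }
}
```

Using only String objects and primitive type wrappers avoids this, as those are immutable by design.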

Common Bundle

It is some kind of best practice to place common stuff in a common bundle on which both the event publisher bundle and the event consumer bundle can depend. In our case this will only be the definition of the supported topics and property keys in a constants class, to ensure that both implementations share the same definitions without being dependent on each other.

  • Create a new project org.fipro.mafia.common
  • Create a new package org.fipro.mafia.common
  • Create a new class MafiaBossConstants
public final class MafiaBossConstants {

    private MafiaBossConstants() {
        // private default constructor for constants class
        // to avoid someone extends the class
    }

    public static final String TOPIC_BASE = "org/fipro/mafia/Boss/";
    public static final String TOPIC_CONVINCE = TOPIC_BASE + "CONVINCE";
    public static final String TOPIC_ENCASH = TOPIC_BASE + "ENCASH";
    public static final String TOPIC_SOLVE = TOPIC_BASE + "SOLVE";
    public static final String TOPIC_ALL = TOPIC_BASE + "*";

    public static final String PROPERTY_KEY_TARGET = "target";

}
  • PDE
    • Open the MANIFEST.MF file and on the Overview tab set the Version to 1.0.0 (remove the qualifier).
    • Switch to the Runtime tab and export the org.fipro.mafia.common package.
    • Specify the version 1.0.0 on the package via Properties…
  • Bndtools
    • Open the bnd.bnd file
    • Add the package org.fipro.mafia.common to the Export Packages

In MafiaBossConstants we specify the topic base with a pseudo class org.fipro.mafia.Boss, which results in the topic base org/fipro/mafia/Boss. We specify action topics that start with the topic base and end with the actions CONVINCE, ENCASH and SOLVE. And additionally we specify a topic that starts with the base and ends with the wildcard ‘*’.

These constants will be used by the event publisher and the event consumer soon.

Event Publisher

The Event Publisher uses the Event Admin Service to send events synchronously or asynchronously. Using DS this is pretty easy.

We will create an Event Publisher based on the idea of a mafia boss. The boss simply commands a job execution and does not care who is doing it. Also it is not of interest if there are many people doing the same job. The job has to be done!

  • Create a new project org.fipro.mafia.boss
  • PDE
    • Open the MANIFEST.MF file of the org.fipro.mafia.boss project and switch to the Dependencies tab
    • Add the following dependencies on the Imported Packages side:
      • org.fipro.mafia.common (1.0.0)
      • org.osgi.service.component.annotations (1.3.0)
      • org.osgi.service.event (1.3.0)
    • Mark org.osgi.service.component.annotations as Optional via Properties…
    • Add the upper version boundaries to the Import-Package statements.
  • Bndtools
    • Open the bnd.bnd file of the org.fipro.mafia.boss project and switch to the Build tab
    • Add the following bundles to the Build Path
      • org.apache.felix.eventadmin
      • org.fipro.mafia.common

Note:
Adding org.osgi.service.event to the Imported Packages with PDE on a current Equinox target will provide a package version 1.3.1. You need to change this to 1.3.0 if you intend to run the same bundle with a different Event Admin Service implementation. In general it is a bad practice to rely on a bugfix version. Especially when thinking about interfaces, as any change to an interface typically is a breaking change.
To clarify the statement above. As the package org.osgi.service.event contains more than just the EventAdmin interface, the bugfix version increase is surely correct in Equinox, as there was probably a bugfix in some code inside the package. The only bad thing is to restrict the package wiring on the consumer side to a bugfix version, as this would restrict your code to only run with the Equinox implementation of the Event Admin Service.

  • Create a new package org.fipro.mafia.boss
  • Create a new class BossCommand
@Component(
    property = {
        "osgi.command.scope=fipro",
        "osgi.command.function=boss" },
    service = BossCommand.class)
public class BossCommand {

    @Reference
    EventAdmin eventAdmin;

    @Descriptor("As a mafia boss you want something to be done")
    public void boss(
        @Descriptor("the command that should be executed. "
            + "possible values are: convince, encash, solve")
        String command,
        @Descriptor("who should be 'convinced', "
            + "'asked for protection money' or 'finally solved'")
        String target) {

        // create the event properties object
        Map<String, Object> properties = new HashMap<>();
        properties.put(MafiaBossConstants.PROPERTY_KEY_TARGET, target);
        Event event = null;

        switch (command) {
            case "convince":
                event = new Event(MafiaBossConstants.TOPIC_CONVINCE, properties);
                break;
            case "encash":
                event = new Event(MafiaBossConstants.TOPIC_ENCASH, properties);
                break;
            case "solve":
                event = new Event(MafiaBossConstants.TOPIC_SOLVE, properties);
                break;
            default:
                System.out.println("Such a command is not known!");
        }

        if (event != null) {
            eventAdmin.postEvent(event);
        }
    }
}

Note:
The code snippet above uses the annotation @Descriptor to specify additional information for the command. This information will be shown when executing help boss in the OSGi console. To make this work with PDE you need to import the package org.apache.felix.service.command with status=provisional. Because the PDE editor does not support adding additional information to package imports, you need to do this manually in the MANIFEST.MF tab of the Plugin Manifest Editor. The Import-Package header would look like this:

Import-Package: org.apache.felix.service.command;status=provisional;version="0.10.0",
 org.fipro.mafia.common;version="[1.0.0,2.0.0)",
 org.osgi.service.component.annotations;version="[1.3.0,2.0.0)";resolution:=optional,
 org.osgi.service.event;version="[1.3.0,2.0.0)"

With Bndtools you need to add org.apache.felix.gogo.runtime to the Build Path in the bnd.bnd file so the @Descriptor annotation can be resolved.

There are three things to notice in the BossCommand implementation:

  • There is a mandatory reference to EventAdmin which is required for sending events.
  • The Event objects are created using a specific topic and a Map<String, Object> that contains the additional event properties.
  • The event is sent asynchronously via EventAdmin#postEvent(Event)

The BossCommand will create an event using the topic that corresponds to the given command parameter. The target parameter will be added to a map that is used as event properties. This event will then be sent to a channel via the EventAdmin. In the example we use EventAdmin#postEvent(Event) which sends the event asynchronously. That means we send the event but do not wait until available handlers have finished the processing. If it is required to wait until the processing is done, you need to use EventAdmin#sendEvent(Event), which sends the event synchronously. But sending events synchronously is significantly more expensive, as the Event Admin Service implementation needs to ensure that every handler has finished processing before it returns. It is therefore recommended to prefer asynchronous event processing.
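To make the difference between the two delivery styles graspable outside of OSGi, the following plain-Java sketch mimics them with a single dispatch thread. This is an illustration only, not the Event Admin implementation: post() returns immediately, while send() blocks until the handler has finished.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Plain-Java sketch of asynchronous (postEvent) vs. synchronous (sendEvent)
// delivery. This is an illustration only, NOT the Event Admin implementation.
public class DispatchSketch {

    private static final ExecutorService DISPATCH =
            Executors.newSingleThreadExecutor();

    // postEvent-style: hand the handler over to the dispatch thread
    // and return immediately, without waiting for it to finish
    public static void post(Runnable handler) {
        DISPATCH.submit(handler);
    }

    // sendEvent-style: block until the handler has finished processing
    public static void send(Runnable handler) {
        try {
            DISPATCH.submit(handler).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The blocking get() in send() is exactly why synchronous sending is more expensive: the caller is stalled until every handler has completed.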

Note:
The code snippet uses the Field Strategy for referencing the EventAdmin. If you are using PDE this will work with Eclipse Oxygen. With Eclipse Neon you need to use the Event Strategy. In short, you need to write the bind-event-method for referencing EventAdmin because Equinox DS supports only DS 1.2 and the annotation processing in Eclipse Neon also only supports the DS 1.2 style annotations.

Event Consumer

In our example the boss does not have to tell someone explicitly to do the job. He just mentions that the job has to be done. Let’s assume we have a small organization without hierarchies. So we skip the captains etc. and simply implement some soldiers. They are specialized, so we have three soldiers, each listening to one specific topic.

  • Create a new project org.fipro.mafia.soldier
  • PDE
    • Open the MANIFEST.MF file of the org.fipro.mafia.soldier project and switch to the Dependencies tab
    • Add the following dependencies on the Imported Packages side:
      • org.fipro.mafia.common (1.0.0)
      • org.osgi.service.component.annotations (1.3.0)
      • org.osgi.service.event (1.3.0)
    • Mark org.osgi.service.component.annotations as Optional via Properties…
    • Add the upper version boundaries to the Import-Package statements.
  • Bndtools
    • Open the bnd.bnd file of the org.fipro.mafia.soldier project and switch to the Build tab
    • Add the following bundles to the Build Path
      • org.apache.felix.eventadmin
      • org.fipro.mafia.common
  • Create a new package org.fipro.mafia.soldier
  • Create the following three soldiers Luigi, Mario and Giovanni
@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_CONVINCE)
public class Luigi implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Luigi: "
        + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET)
        + " was 'convinced' to support our family");
    }

}
@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_ENCASH)
public class Mario implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Mario: "
        + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET)
        + " paid for protection");
    }

}
@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_SOLVE)
public class Giovanni implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Giovanni: We 'solved' the issue with "
        + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET));
    }

}

Technically we have created a special EventHandler for each of the different topics. You should notice the following facts:

  • We are using OSGi DS to register the event handler using the whiteboard pattern. On the consumer side we don’t need to know the EventAdmin itself.
  • We need to implement org.osgi.service.event.EventHandler
  • We need to register for a topic via service property event.topics, otherwise the handler will not listen for any event.
  • Via Event#getProperty(String) we are able to access event property values.

The following service properties are supported by event handlers:

  • event.topics – Specifies the topics of interest to an EventHandler service. This property is mandatory.
  • event.filter – Specifies a filter to further select events of interest to an EventHandler service. This property is optional.
  • event.delivery – Specifies the delivery qualities requested by an EventHandler service. This property is optional.

The property keys and some default keys for event properties are specified in org.osgi.service.event.EventConstants.

Launch the example

Before moving on and explaining further, let’s start the example and verify that each command from the boss is only handled by one soldier.

With PDE perform the following steps:

  • Select the menu entry Run -> Run Configurations…
  • In the tree view, right click on the OSGi Framework node and select New from the context menu
  • Specify a name, e.g. OSGi Event Mafia
  • Deselect All
  • Select the following bundles
    (note that we are using Eclipse Oxygen, in previous Eclipse versions org.apache.felix.scr and org.eclipse.osgi.util are not required)

    • Application bundles
      • org.fipro.mafia.boss
      • org.fipro.mafia.common
      • org.fipro.mafia.soldier
    • Console bundles
      • org.apache.felix.gogo.command
      • org.apache.felix.gogo.runtime
      • org.apache.felix.gogo.shell
      • org.eclipse.equinox.console
    • OSGi framework and DS bundles
      • org.apache.felix.scr
      • org.eclipse.equinox.ds
      • org.eclipse.osgi
      • org.eclipse.osgi.services
      • org.eclipse.osgi.util
    • Equinox Event Admin
      • org.eclipse.equinox.event
  • Ensure that Default Auto-Start is set to true
  • Click Run

With Bndtools perform the following steps:

  • Open the launch.bndrun file in the org.fipro.mafia.boss project
  • On the Run tab add the following bundles to the Run Requirements
    • org.fipro.mafia.boss
    • org.fipro.mafia.common
    • org.fipro.mafia.soldier
  • Click Resolve to ensure all required bundles are added to the Run Bundles via auto-resolve
  • Click Run OSGi

Execute the boss command to see the different results. This can look similar to the following:

osgi> boss convince Angelo
osgi> Luigi: Angelo was 'convinced' to support our family
boss encash Wong
osgi> Mario: Wong paid for protection
boss solve Tattaglia
osgi> Giovanni: We 'solved' the issue with Tattaglia

Handle multiple event topics

It is also possible to register for multiple event topics. Say Pete is a tough guy who is good at ENCASH and SOLVE issues. So he registers for those topics.

@Component(
    property = {
        EventConstants.EVENT_TOPIC + "=" + MafiaBossConstants.TOPIC_ENCASH,
        EventConstants.EVENT_TOPIC + "=" + MafiaBossConstants.TOPIC_SOLVE })
public class Pete implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Pete: I took care of "
        + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET));
    }

}

As you can see the service property event.topics is declared multiple times via the @Component annotation type element property. This way an array of Strings is configured for the service property, so the handler reacts on both topics.

If you execute the example now and call boss encash xxx or boss solve xxx you will notice that Pete is also responding.

It is also possible to use the asterisk wildcard as last token of a topic. This way the handler will receive all events for topics that start with the left side of the wildcard.
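The matching rule can be expressed in a few lines of plain Java. Note that this is a hypothetical helper illustrating the wildcard semantics, not the actual Event Admin implementation: a pattern ending with /* matches every topic that starts with the part before the asterisk, otherwise the strings have to be equal.

```java
// Hypothetical helper illustrating the topic wildcard rule of the Event
// Admin specification; NOT the actual Event Admin implementation.
public class TopicMatcher {

    public static boolean matches(String pattern, String topic) {
        if (pattern.endsWith("/*")) {
            // keep the part up to and including the last '/'
            String prefix = pattern.substring(0, pattern.length() - 1);
            return topic.startsWith(prefix);
        }
        return pattern.equals(topic);
    }
}
```

For example, the pattern org/fipro/mafia/Boss/* matches org/fipro/mafia/Boss/CONVINCE as well as org/fipro/mafia/Boss/SOLVE.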

Let’s say we have a very motivated young guy called Ray who wants to prove himself to the boss. So he takes every command from the boss. For this we set the service property event.topics=org/fipro/mafia/Boss/*

@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_ALL)
public class Ray implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        String topic = event.getTopic();
        Object target = event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET);

        switch (topic) {
            case MafiaBossConstants.TOPIC_CONVINCE:
                System.out.println("Ray: I helped in punching the shit out of " + target);
                break;
            case MafiaBossConstants.TOPIC_ENCASH:
                System.out.println("Ray: I helped getting the money from " + target);
                break;
            case MafiaBossConstants.TOPIC_SOLVE:
                System.out.println("Ray: I helped killing " + target);
                break;
            default: System.out.println("Ray: I helped with whatever was requested!");
        }
    }

}

Executing the example again will show that Ray is responding on every boss command.

It is also possible to filter events based on event properties by setting the service property event.filter. The value needs to be an LDAP filter. For example, although Ray is a motivated and loyal soldier, he refuses to handle events that target his friend Sonny.

The following snippet shows how to specify a filter that excludes event processing if the target is Sonny.

@Component(
    property = {
        EventConstants.EVENT_TOPIC + "=" + MafiaBossConstants.TOPIC_ALL,
        EventConstants.EVENT_FILTER + "=" + "(!(target=Sonny))"})
public class Ray implements EventHandler {

Execute the example and call two commands:

  • boss solve Angelo
  • boss solve Sonny

You will notice that Ray will respond on the first call, but he will not show up on the second call.

Note:
The filter expression can only be applied on event properties. It is not possible to use that filter on service properties.
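For the example filter (!(target=Sonny)), the effect can be sketched in plain Java. This is a simplified stand-in for the LDAP filter evaluation, not the real OSGi filter machinery: the event is delivered only if the target property does not equal Sonny.

```java
import java.util.Map;

// Simplified stand-in for evaluating the LDAP filter (!(target=Sonny))
// against the event properties; NOT the real OSGi filter machinery.
public class FilterSketch {

    public static boolean passes(Map<String, Object> eventProperties) {
        // the negation (!) inverts the equality test (target=Sonny)
        return !"Sonny".equals(eventProperties.get("target"));
    }
}
```

With this logic, boss solve Angelo would reach the handler while boss solve Sonny would be filtered out, matching the behavior observed above.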

At last it is possible to configure the order in which an event handler wants events to be delivered: either in the same order they were posted, or unordered. The service property event.delivery can be used to change the default behavior, which is to receive the events from a single thread in the same order as they were posted.

If an event handler does not need to receive events in the order as they were posted, you need to specify the service property event.delivery=async.unordered.

@Component(
    property = {
        EventConstants.EVENT_TOPIC + "="
            + MafiaBossConstants.TOPIC_ALL,
        EventConstants.EVENT_FILTER + "="
            + "(!(target=Sonny))",
        EventConstants.EVENT_DELIVERY + "="
            + EventConstants.DELIVERY_ASYNC_UNORDERED})

The value for ordered delivery is async.ordered which is the default. The values are also defined in the EventConstants.

Capabilities

By using the event mechanism the code is highly decoupled. In general this is a good thing, but it also makes it hard to identify issues. One common issue in Eclipse RCP for example is to forget to automatically start the bundle org.eclipse.equinox.event. Things will simply not work in such a case, without any errors or warnings shown on startup.

The reason for this is that the related interfaces like EventAdmin and EventHandler are located in the bundle org.eclipse.osgi.services. The bundle wiring therefore shows that everything is ok on startup, because all interfaces and classes are available. But we require a bundle that contains an implementation of EventAdmin. If you remember my Getting Started Tutorial, such a requirement can be specified by using capabilities.

To show the implications, let’s play with the Run Configuration:

  • Uncheck org.eclipse.equinox.event from the list of bundles
  • Launch the configuration
  • execute lb on the command line (or ss on Equinox if you are more familiar with that) and check the bundle states
    • Notice that all bundles are in ACTIVE state
  • execute scr:list (or list on Equinox < Oxygen) to check the state of the DS components
    • Notice that org.fipro.mafia.boss.BossCommand has an unsatisfied reference
    • Notice that all other EventHandler services are satisfied

That is of course the correct behavior. The BossCommand service has a mandatory reference to EventAdmin and there is no such service available, so it has an unsatisfied reference. The EventHandler implementations do not have such a dependency, so they are satisfied. And that is even fine when thinking in the publish & subscribe pattern: they can be active and wait for events to process, even if there is nobody available to send an event. But it makes it hard to find the issue. And when using Tycho and the Surefire Plugin to execute tests, it will never work, because nobody tells the test runtime that org.eclipse.equinox.event needs to be available and started in advance.

This can be solved by adding the Require-Capability header to require an osgi.service for objectClass=org.osgi.service.event.EventAdmin.

Require-Capability: osgi.service;
 filter:="(objectClass=org.osgi.service.event.EventAdmin)"

By specifying the Require-Capability header like above, the capability will be checked when the bundles are resolved. So starting the example after the Require-Capability header was added will show an error and the bundle org.fipro.mafia.boss will not be activated.

If you add the bundle org.eclipse.equinox.event again to the Run Configuration and launch it again, there are no issues.

As p2 still does not support OSGi capabilities, the p2.inf file needs to be created in the META-INF folder with the following content:

requires.1.namespace = osgi.service
requires.1.name = org.osgi.service.event.EventAdmin

Typically you would specify the Require-Capability on the EventAdmin service with the directive effective:=active. This implies that the OSGi framework will resolve the bundle without checking whether another bundle provides the capability. It can then be seen more as documentation of which services are required, visible by looking into the MANIFEST.MF.

Important Note:
Specifying the Require-Capability header and the p2 capabilities for org.osgi.service.event.EventAdmin will only work with Eclipse Oxygen. I contributed the necessary changes to Equinox for Oxygen M1 with Bug 416047. With an org.eclipse.equinox.event bundle in a version >= 1.4.0 you should be able to specify the capabilities. In previous versions the necessary Provide-Capability and p2 capability configuration in that bundle are missing.

Handling events in Eclipse RCP UI

When looking at the architecture of an Eclipse RCP application, you will notice that the UI layer is not created via OSGi DS (actually that is not a surprise!). And we cannot simply say that our view parts are created via DS, because the lifecycle of a part is controlled by other mechanics. But as an Eclipse RCP application is typically an application based on OSGi, all the OSGi mechanisms can be used. Of course not as conveniently as when using OSGi DS directly.

The direction from the UI layer to the OSGi service layer is pretty easy. You simply need to retrieve the service you want to use. With Eclipse 4 you simply get the desired service injected using @Inject, or since Eclipse Oxygen @Inject in combination with @Service (see OSGi Declarative Services news in Eclipse Oxygen). With Eclipse 3.x you needed to retrieve the service programmatically via the BundleContext.

The other way, to communicate from a service to the UI layer, is something different. From my point of view there are two ways to consider: the Observer pattern, where the service accepts listeners that are called back, and the Publish & Subscribe pattern using the event mechanism.

This blog post is about the event mechanism in OSGi, so I don’t want to go in detail with the observer pattern approach. It simply means that you extend the service interface to accept listeners to perform callbacks. Which in return means you need to retrieve the service in the view part for example, and register a callback function from there.

With the Publish & Subscribe pattern we register an EventHandler that reacts on events. It is a similar approach to the Observer pattern, with some slight differences. But this is not a design pattern blog post, we are talking about the event mechanism. And we already registered an EventHandler using OSGi DS. The difference to the scenario using DS is that we need to register the EventHandler programmatically. For OSGi experts that used the event mechanism before DS came up, this is nothing new. For all others that learn about it, it could be interesting.

The following snippet shows how to retrieve a BundleContext instance and register a service programmatically. In earlier days this was done in an Activator, as there you have access to the BundleContext. Nowadays it is recommended to use the FrameworkUtil class to retrieve the BundleContext when needed, and to avoid Activators to reduce startup time.

private ServiceRegistration<?> eventHandler;

...

// retrieve the bundle of the calling class
Bundle bundle = FrameworkUtil.getBundle(getClass());
BundleContext bc = (bundle != null) ? bundle.getBundleContext() : null;
if (bc != null) {
    // create the service properties instance
    Dictionary<String, Object> properties = new Hashtable<>();
    properties.put(EventConstants.EVENT_TOPIC, MafiaBossConstants.TOPIC_ALL);
    // register the EventHandler service
    eventHandler = bc.registerService(
        EventHandler.class.getName(),
        new EventHandler() {

            @Override
            public void handleEvent(Event event) {
                // ensure to update the UI in the UI thread
                Display.getDefault().asyncExec(() -> handlerLabel.setText(
                        "Received boss command "
                            + event.getTopic()
                            + " for target "
                            + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET)));
            }
        },
        properties);
}

This code can technically be added anywhere in the UI code, e.g. in a view, an editor or a handler. But of course you should be aware that the event handler should also be unregistered once the connected UI class is destroyed. For example, you implement a view part that registers a listener similar to the above to update the UI every time an event is received. That means the handler has a reference to a UI element that should be updated. If the part is destroyed, the UI element is destroyed too. If you don’t unregister the EventHandler when the part is destroyed, it will still be alive and react on events, probably causing exceptions without proper disposal checks. It is also a cause for memory leaks, as the EventHandler references a UI element instance that is already disposed but cannot be cleaned up by the GC as it is still referenced.

Note:
The event handling is executed in its own event thread. Updates to the UI can only be performed in the main or UI thread, otherwise you will get a SWTException for Invalid thread access. Therefore it is necessary to ensure that UI updates performed in an event handler are executed in the UI thread. For further information have a look at Eclipse Jobs and Background Processing.
For the UI synchronization you should also consider using asynchronous execution via Display#asyncExec() or UISynchronize#asyncExec(). Using synchronous execution via syncExec() will block the event handler thread until the UI update is done.

If you stored the ServiceRegistration object returned by BundleContext#registerService() as shown in the example above, the following snippet can be used to unregister the handler if the part is destroyed:

if (eventHandler != null) {
    eventHandler.unregister();
}

In Eclipse 3.x this needs to be done in the overridden dispose() method. In Eclipse 4 it can be done in the method annotated with @PreDestroy.

Note:
Ensure that the bundle that contains the code is in ACTIVE state so there is a BundleContext. This can be achieved by setting Bundle-ActivationPolicy: lazy in the MANIFEST.MF.

Handling events in Eclipse RCP UI with Eclipse 4

In Eclipse 4 the event handling mechanism is provided to the RCP development via the EventBroker. The EventBroker is a service that uses the EventAdmin and additionally provides injection support. To learn more about the EventBroker and the event mechanism provided by Eclipse 4 you should read the related tutorials, like

We are focusing on the event consumer here. In addition to registering the EventHandler programmatically, Eclipse 4 makes it possible to specify a method that is called via method injection whenever a matching event is received.

Such an event handler method looks similar to the following snippet:

@Inject
@Optional
void handleConvinceEvent(
        @UIEventTopic(MafiaBossConstants.TOPIC_CONVINCE) String target) {
    e4HandlerLabel.setText("Received boss CONVINCE command for " + target);
}

By using @UIEventTopic you ensure that the code is executed in the UI thread. If you don’t care about the UI thread, you can use @EventTopic instead. The handler that is registered behind the scenes will also be automatically unregistered when the containing instance is destroyed.

While the method gets directly invoked as event handler, the injection does not work without modifications on the event producer side. For this the data that should be used for injection needs to be added to the event properties for the key org.eclipse.e4.data. This key is specified as a constant in IEventBroker. But using the constant would also introduce a dependency to org.eclipse.e4.core.services, which is not always intended for event producer bundles. Therefore modifying the generation of the event properties map in BossCommand will make the E4 event handling injection work:

// create the event properties object
Map<String, Object> properties = new HashMap<>();
properties.put(MafiaBossConstants.PROPERTY_KEY_TARGET, target);
properties.put("org.eclipse.e4.data", target);

Note:
The EventBroker additionally adds the topic to the event properties for the key event.topics. Since Oxygen this does not seem to be necessary anymore.

The sources for this tutorial are hosted on GitHub in the already existing projects:

The PDE version also includes a sample project org.fipro.mafia.ui which is a very simple RCP application that shows the usage of the event handler in a view part.

Posted in Dirk Fauth, Eclipse, Java, OSGi | 2 Comments

Access OSGi Services via web interface

In this blog post I want to share a simple approach to make OSGi services available via web interface. I will show a simple approach that includes the following:

  • Embedding a Jetty webserver in an OSGi application
  • Registering a Servlet via OSGi DS using the HTTP Whiteboard specification

I will only cover this simple scenario here and will not cover accessing OSGi services via a REST interface. If you are interested in that you might want to look at the OSGi – JAX-RS Connector, which also looks very nice. Maybe I will look at this in another blog post. For now I will focus on embedding a Jetty server and deploying some resources.

I will skip the introduction on OSGi DS and extend the examples from my Getting Started with OSGi Declarative Services blog. It is easier to follow this post when done the other tutorial first, but it is not required if you adapt the contents here to your environment.

As a first step create a new project org.fipro.inverter.http. In this project we will add the resources created in this tutorial. If you use PDE, create a new Plug-in Project; with Bndtools, create a new Bnd OSGi Project using the Component Development template.

PDE – Target Platform

In PDE it is best practice to create a Target Definition so the work is based on a specific set of bundles and we don’t need to install bundles in our IDE. Follow these steps to create a Target Definition for this tutorial:

  • Create a new target definition
    • Right click on project org.fipro.inverter.http → New → Other… → Plug-in Development → Target Definition
    • Set the filename to org.fipro.inverter.http.target
    • Initialize the target definition with: Nothing: Start with an empty target definition
  • Add a new Software Site in the opened Target Definition Editor by clicking Add… in the Locations section
    • Select Software Site
    • Software Site http://download.eclipse.org/releases/oxygen
    • Disable Group by Category
    • Select the following entries
      • Equinox Core SDK
      • Equinox Compendium SDK
      • Jetty Http Server Feature
    • Click Finish
  • Optional: Add a new Software Site to include JUnit to the Target Definition (only needed in case you followed all previous tutorials on OSGi DS or want to integrate JUnit tests for your services)
    • Software Site http://download.eclipse.org/tools/orbit/R-builds/R20170307180635/repository
    • Select JUnit Testing Framework
    • Click Finish
  • Save your work and activate the target platform by clicking Set as Target Platform in the upper right corner of the Target Definition Editor

Bndtools – Repository

Using Bndtools is different, as you already know if you followed my previous blog posts. To make it possible to follow this blog post with Bndtools as well, I will describe the necessary steps here.

We will use Apache Felix in combination with Bndtools instead of Equinox. This way we don’t need to modify the predefined repository and can start without further actions. The needed Apache Felix bundles are already available.

PDE – Prepare project dependencies

We will prepare the project dependencies in advance so it is easier to copy and paste the code samples to the project. Within the Eclipse IDE the Quick Fixes would also support adding the dependencies afterwards of course.

  • Open the MANIFEST.MF file of the org.fipro.inverter.http project and switch to the Dependencies tab
  • Add the following dependencies on the Imported Packages side:
    • javax.servlet (3.1.0)
    • javax.servlet.http (3.1.0)
    • org.fipro.inverter (1.0.0)
    • org.osgi.service.component.annotations (1.3.0)
  • Mark org.osgi.service.component.annotations as Optional via Properties…
  • Add the upper version boundaries to the Import-Package statements.
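Assuming the versions listed above, the resulting Import-Package header in the MANIFEST.MF could look like the following sketch. The upper bounds follow the common next-major-version convention and are my choice, not mandated by the tutorial:

```
Import-Package: javax.servlet;version="[3.1.0,4.0.0)",
 javax.servlet.http;version="[3.1.0,4.0.0)",
 org.fipro.inverter;version="[1.0.0,2.0.0)",
 org.osgi.service.component.annotations;version="[1.3.0,2.0.0)";resolution:=optional
```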

Bndtools – Prepare project dependencies

  • Open the bnd.bnd file of the org.fipro.inverter.http project and switch to the Build tab
  • Add the following bundles to the Build Path
    • org.apache.felix.http.jetty
    • org.apache.felix.http.servlet-api
    • org.fipro.inverter.api

Create a Servlet implementation

  • Create a new package org.fipro.inverter.http
  • Create a new class InverterServlet
@Component(
    service=Servlet.class,
    property= "osgi.http.whiteboard.servlet.pattern=/invert",
    scope=ServiceScope.PROTOTYPE)
public class InverterServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Reference
    private StringInverter inverter;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        String input = req.getParameter("value");
        if (input == null) {
            throw new IllegalArgumentException("input can not be null");
        }
        String output = inverter.invert(input);

        resp.setContentType("text/html");
        resp.getWriter().write(
            "<html><body>Result is " + output + "</body></html>");
    }

}

Let’s look at the implementation:

  1. It is a typical Servlet implementation that extends javax.servlet.http.HttpServlet
  2. It is also an OSGi Declarative Service that is registered as service of type javax.servlet.Servlet
  3. The service has PROTOTYPE scope
  4. A special property osgi.http.whiteboard.servlet.pattern is set. This configures the context path of the Servlet.
  5. It references the StringInverter OSGi service from the previous tutorial via field reference. And yes since Eclipse Oxygen this is also supported in Equinox (I wrote about this here).

PDE – Launch the example

Before explaining the details further, launch the example to see if our servlet is available via standard web browser. For this we create a launch configuration, so we can start directly from the IDE.

  • Select the menu entry Run → Run Configurations…
  • In the tree view, right click on the OSGi Framework node and select New from the context menu
  • Specify a name, e.g. OSGi Inverter Http
  • Deselect All
  • Select the following bundles
    (note that we are using Eclipse Oxygen, in previous Eclipse versions org.apache.felix.scr and org.eclipse.osgi.util are not required)

    • Application bundles
      • org.fipro.inverter.api
      • org.fipro.inverter.http
      • org.fipro.inverter.provider
    • Console bundles
      • org.apache.felix.gogo.command
      • org.apache.felix.gogo.runtime
      • org.apache.felix.gogo.shell
      • org.eclipse.equinox.console
    • OSGi framework and DS bundles
      • org.apache.felix.scr
      • org.eclipse.equinox.ds
      • org.eclipse.osgi
      • org.eclipse.osgi.services
      • org.eclipse.osgi.util
    • Equinox Http Service and Http Whiteboard
      • org.eclipse.equinox.http.jetty
      • org.eclipse.equinox.http.servlet
    • Jetty
      • javax.servlet
      • org.eclipse.jetty.continuation
      • org.eclipse.jetty.http
      • org.eclipse.jetty.io
      • org.eclipse.jetty.security
      • org.eclipse.jetty.server
      • org.eclipse.jetty.servlet
      • org.eclipse.jetty.util
  • Ensure that Default Auto-Start is set to true
  • Switch to the Arguments tab
    • Add -Dorg.osgi.service.http.port=8080 to the VM arguments
  • Click Run

Note:
If you include the above bundles in an Eclipse RCP application, ensure that you auto-start the org.eclipse.equinox.http.jetty bundle to automatically start the Jetty server. This can be done on the Configuration tab of the Product Configuration Editor.

If you now open a browser and go to the URL http://localhost:8080/invert?value=Eclipse you should get a response with the inverted output.

Bndtools – Launch the example

  • Open the launch.bndrun file in the org.fipro.inverter.http project
  • On the Run tab add the following bundles to the Run Requirements
    • org.fipro.inverter.http
    • org.fipro.inverter.provider
    • org.apache.felix.http.jetty
  • Click Resolve to ensure all required bundles are added to the Run Bundles via auto-resolve
  • Add -Dorg.osgi.service.http.port=8080 to the JVM Arguments
  • Click Run OSGi

Http Service & Http Whiteboard

Now why does this simply work? We only implemented a servlet and provided it as an OSGi declarative service, and it is “magically” available via the web interface. The answer to this is the OSGi Http Service Specification and the Http Whiteboard Specification. The OSGi Compendium Specification R6 contains the Http Service Specification Version 1.2 (Chapter 102 – Page 45) and the Http Whiteboard Specification Version 1.0 (Chapter 140 – Page 1067).

The purpose of the Http Service is to provide access to services on the internet or other networks for example by using a standard web browser. This can be done by registering servlets or resources to the Http Service. Without going too much into detail, the implementation is similar to an embedded web server, which is the reason why the default implementations in Equinox and Felix are based on Jetty.

To register servlets and resources with the Http Service you need to know the Http Service API very well, as you have to retrieve the Http Service and directly operate on it. As this is not very convenient, the Http Whiteboard Specification was introduced. It allows registering servlets and resources via the Whiteboard Pattern, without the need to know the Http Service API in detail. I always think about the whiteboard pattern as a “don’t call us, we will call you” pattern. That means you don’t register servlets with the Http Service directly; you provide them as services to the service registry, and the Http Whiteboard implementation will pick them up and register them with the Http Service.

Via Http Whiteboard it is possible to register:

  • Servlets
  • Servlet Filters
  • Resources
  • Servlet Listeners

I will show some examples so you can play around with the Http Whiteboard service.

Register Servlets

An example on how to register a servlet via Http Whiteboard is shown above. The main points are:

  • The servlet needs to be registered as OSGi service of type javax.servlet.Servlet.
  • The component property osgi.http.whiteboard.servlet.pattern needs to be set to specify the request mappings.
  • The service scope should be PROTOTYPE.

For registering servlets the following component properties are supported (see OSGi Compendium Specification Release 6 – Table 140.4):

  • osgi.http.whiteboard.servlet.asyncSupported – Declares whether the servlet supports the asynchronous operation mode. Allowed values are true and false independent of case. Defaults to false.
  • osgi.http.whiteboard.servlet.errorPage – Register the servlet as an error page for the error code and/or exception specified; the value may be a fully qualified exception type name or a three-digit HTTP status code in the range 400-599. The special values 4xx and 5xx can be used to match value ranges. Any value not being a three-digit number is assumed to be a fully qualified exception class name.
  • osgi.http.whiteboard.servlet.name – The name of the servlet. This name is used as the value of the javax.servlet.ServletConfig.getServletName() method and defaults to the fully qualified class name of the service object.
  • osgi.http.whiteboard.servlet.pattern – Registration pattern(s) for the servlet.
  • servlet.init.* – Properties starting with this prefix are provided as init parameters to the javax.servlet.Servlet.init(ServletConfig) method. The servlet.init. prefix is removed from the parameter name.
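To illustrate the servlet.init.* convention, here is a small standalone sketch of my own (not code from any whiteboard implementation) showing how init parameters could be derived from component properties by stripping the prefix:

```java
import java.util.HashMap;
import java.util.Map;

public class InitParamDemo {

    // Collect all properties starting with the given prefix and strip it,
    // mimicking how servlet.init.* properties become init parameters.
    static Map<String, String> extractInitParams(Map<String, ?> props, String prefix) {
        Map<String, String> init = new HashMap<>();
        for (Map.Entry<String, ?> entry : props.entrySet()) {
            if (entry.getKey().startsWith(prefix)) {
                init.put(
                    entry.getKey().substring(prefix.length()),
                    String.valueOf(entry.getValue()));
            }
        }
        return init;
    }

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("servlet.init.encoding", "UTF-8");
        props.put("osgi.http.whiteboard.servlet.pattern", "/invert");
        // only the servlet.init.* property becomes an init parameter
        System.out.println(extractInitParams(props, "servlet.init."));
    }
}
```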

The Http Whiteboard service needs to call javax.servlet.Servlet.init(ServletConfig) to initialize the servlet before it starts to serve requests, and when it is not needed anymore javax.servlet.Servlet.destroy() to shut down the servlet. If more than one Http Whiteboard implementation is available in a runtime, the init() and destroy() calls would be executed multiple times, which violates the Servlet specification. It is therefore recommended to use the PROTOTYPE scope for servlets to ensure that every Http Whiteboard implementation gets its own service instance.

Note:
In a controlled runtime, like an RCP application that is delivered with one Http Whiteboard implementation and that does not support installing bundles at runtime, the usage of the PROTOTYPE scope is not required. Actually such a runtime ensures that the servlet is only instantiated and initialized once. But if possible it is recommended that the PROTOTYPE scope is used.

To register a servlet as an error page, the service property osgi.http.whiteboard.servlet.errorPage needs to be set. The value can be either a three-digit HTTP error code, the special codes 4xx or 5xx to specify a range of error codes, or a fully qualified exception class name. The service property osgi.http.whiteboard.servlet.pattern is not required for servlets that provide error pages.
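The value interpretation described above can be sketched like this (an illustrative classifier of my own, not taken from any whiteboard implementation):

```java
public class ErrorPageValueDemo {

    // Classify an osgi.http.whiteboard.servlet.errorPage value according to
    // the rules above: a special range value, a 400-599 status code, or a
    // fully qualified exception class name.
    static String classify(String value) {
        if ("4xx".equals(value) || "5xx".equals(value)) {
            return "status code range";
        }
        if (value.matches("[45][0-9][0-9]")) {
            return "status code";
        }
        return "exception class name";
    }

    public static void main(String[] args) {
        System.out.println(classify("500"));                                // status code
        System.out.println(classify("4xx"));                                // status code range
        System.out.println(classify("java.lang.IllegalArgumentException")); // exception class name
    }
}
```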

The following snippet shows an error page servlet that deals with IllegalArgumentExceptions and the HTTP error code 500. It can be tested by calling the inverter servlet without a query parameter.

@Component(
    service=Servlet.class,
    property= {
        "osgi.http.whiteboard.servlet.errorPage=java.lang.IllegalArgumentException",
        "osgi.http.whiteboard.servlet.errorPage=500"
    },
    scope=ServiceScope.PROTOTYPE)
public class ErrorServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        resp.setContentType("text/html");
        resp.getWriter().write(
        "<html><body>You need to provide an input!</body></html>");
    }
}

Register Filters

Via servlet filters it is possible to intercept servlet invocations. They are used to modify the ServletRequest and ServletResponse to perform common tasks before and after the servlet invocation.

The example below shows a servlet filter that adds a simple header and footer on each request to the servlet with the /invert pattern:

@Component(
    property = "osgi.http.whiteboard.filter.pattern=/invert",
    scope=ServiceScope.PROTOTYPE)
public class SimpleServletFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig)
            throws ServletException { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        response.setContentType("text/html");
        response.getWriter().write("<b>Inverter Servlet</b><p>");
        chain.doFilter(request, response);
        response.getWriter().write("</p><i>Powered by fipro</i>");
    }

    @Override
    public void destroy() { }

}

To register a servlet filter the following criteria must match:

  • It needs to be registered as OSGi service of type javax.servlet.Filter.
  • One of the given component properties needs to be set:
    • osgi.http.whiteboard.filter.pattern
    • osgi.http.whiteboard.filter.regex
    • osgi.http.whiteboard.filter.servlet
  • The service scope should be PROTOTYPE.

For registering servlet filters the following service properties are supported (see OSGi Compendium Specification Release 6 – Table 140.5):

  • osgi.http.whiteboard.filter.asyncSupported – Declares whether the servlet filter supports asynchronous operation mode. Allowed values are true and false independent of case. Defaults to false.
  • osgi.http.whiteboard.filter.dispatcher – Select the dispatcher configuration for when the servlet filter should be called. Allowed string values are REQUEST, ASYNC, ERROR, INCLUDE, and FORWARD. The default for a filter is REQUEST.
  • osgi.http.whiteboard.filter.name – The name of a servlet filter. This name is used as the value of the FilterConfig.getFilterName() method and defaults to the fully qualified class name of the service object.
  • osgi.http.whiteboard.filter.pattern – Apply this servlet filter to the specified URL path patterns. The format of the patterns is specified in the servlet specification.
  • osgi.http.whiteboard.filter.regex – Apply this servlet filter to the specified URL paths. The paths are specified as regular expressions following the syntax defined in the java.util.regex.Pattern class.
  • osgi.http.whiteboard.filter.servlet – Apply this servlet filter to the referenced servlet(s) by name.
  • filter.init.* – Properties starting with this prefix are passed as init parameters to the Filter.init() method. The filter.init. prefix is removed from the parameter name.

Register Resources

It is also possible to register a service that informs the Http Whiteboard service about static resources like HTML files, images, CSS or JavaScript files. For this a simple service can be registered that only needs to have the following two mandatory service properties set:

  • osgi.http.whiteboard.resource.pattern – The pattern(s) to be used to serve resources, as defined by the Java Servlet 3.1 Specification in section 12.2, Specification of Mappings. This property marks the service as a resource service.
  • osgi.http.whiteboard.resource.prefix – The prefix used to map a requested resource to the bundle’s entries. If the request’s path info is not null, it is appended to this prefix. The resulting string is passed to the getResource(String) method of the associated Servlet Context Helper.

The service does not need to implement any specific interface or function. All required information is provided via the component properties.

To create a resource service follow these steps:

  • Create a folder resources in the project org.fipro.inverter.http
  • Add an image in that folder, e.g. eclipse_logo.png
  • PDE – Add the resources folder in the¬†build.properties
  • Bndtools – Add the following line to the bnd.bnd file on the Source tab
    -includeresource: resources=resources
  • Create resource service
@Component(
    service = ResourceService.class,
    property = {
        "osgi.http.whiteboard.resource.pattern=/files/*",
        "osgi.http.whiteboard.resource.prefix=/resources"})
public class ResourceService { }

After starting the application the static resources located in the resources folder are available via the /files path in the URL, e.g. http://localhost:8080/files/eclipse_logo.png
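The mapping from request path to bundle entry can be sketched as follows. This is my own illustration of the pattern/prefix combination used above, not code from a whiteboard implementation:

```java
public class ResourceMappingDemo {

    // Derive the bundle entry name for a request: strip the wildcard mount
    // point from the pattern and append the remaining path info to the prefix.
    static String toEntryName(String requestPath, String pattern, String prefix) {
        String mount = pattern.endsWith("/*")
            ? pattern.substring(0, pattern.length() - 2)
            : pattern;
        String pathInfo = requestPath.substring(mount.length());
        return prefix + pathInfo;
    }

    public static void main(String[] args) {
        // /files/eclipse_logo.png is mapped to the bundle entry /resources/eclipse_logo.png
        System.out.println(toEntryName("/files/eclipse_logo.png", "/files/*", "/resources"));
    }
}
```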

Note:
While writing this blog post I came across a very nasty issue. Because I initially registered the servlet filter for the /* pattern, the simple header and footer were always added. This also set the content type, which of course didn’t match the content type of the image, so the static content was never shown correctly. If you want to use servlet filters to add common headers and footers, you need to take care that the pattern does not apply the filter to static resources.

Register Servlet Listeners

It is also possible to register different servlet listeners as whiteboard services. The following listeners are supported according to the servlet specification:

  • ServletContextListener – Receive notifications when Servlet Contexts are initialized and destroyed.
  • ServletContextAttributeListener – Receive notifications for Servlet Context attribute changes.
  • ServletRequestListener – Receive notifications for servlet requests coming in and being destroyed.
  • ServletRequestAttributeListener – Receive notifications when servlet Request attributes change.
  • HttpSessionListener – Receive notifications when Http Sessions are created or destroyed.
  • HttpSessionAttributeListener – Receive notifications when Http Session attributes change.
  • HttpSessionIdListener – Receive notifications when Http Session ID changes.

Only one component property needs to be set so that the Http Whiteboard implementation handles the listener.

  • osgi.http.whiteboard.listener – When set to true this listener service is handled by the Http Whiteboard implementation. When not set or set to false the service is ignored. Any other value is invalid.

The following example shows a simple ServletRequestListener that prints out the client address on the console for each request (borrowed from the OSGi Compendium Specification):

@Component(property = "osgi.http.whiteboard.listener=true")
public class SimpleServletRequestListener
    implements ServletRequestListener {

    public void requestInitialized(ServletRequestEvent sre) {
        System.out.println("Request initialized for client: "
            + sre.getServletRequest().getRemoteAddr());
    }

    public void requestDestroyed(ServletRequestEvent sre) {
        System.out.println("Request destroyed for client: "
            + sre.getServletRequest().getRemoteAddr());
    }

}

Servlet Context and Common Whiteboard Properties

The ServletContext is specified in the servlet specification and provided to the servlets at runtime by the container. By default there is one ServletContext, and without additional information the servlets are registered to that default ServletContext via the Http Whiteboard implementation. This could lead to scenarios where different bundles provide servlets for the same request mapping. In that case the service.ranking will be inspected to decide which servlet should be delivered. If the servlets belong to different applications, it is possible to specify different contexts. This can be done by registering a custom ServletContextHelper as whiteboard service and associating the servlets with the corresponding context. The ServletContextHelper can be used to customize the behavior of the ServletContext (e.g. handle security, provide resources, …) and to support multiple web applications via different context paths.

A custom ServletContextHelper needs to be registered as a service of type ServletContextHelper and needs to have the following two service properties set:

  • osgi.http.whiteboard.context.name
  • osgi.http.whiteboard.context.path

  • osgi.http.whiteboard.context.name – Name of the Servlet Context Helper. This name can be referred to by Whiteboard services via the osgi.http.whiteboard.context.select property. The syntax of the name is the same as the syntax for a Bundle Symbolic Name. The default Servlet Context Helper is named default. To override the default, register a custom ServletContextHelper service with the name default. If multiple Servlet Context Helper services are registered with the same name, the one with the highest service ranking is used. In case of a tie, the service with the lowest service ID wins. In other words, the normal OSGi service ranking applies.
  • osgi.http.whiteboard.context.path – Additional prefix to the context path for servlets. This property is mandatory. Valid characters are specified in IETF RFC 3986, section 3.3. The context path of the default Servlet Context Helper is /. A custom default Servlet Context Helper may use an alternative path.
  • context.init.* – Properties starting with this prefix are provided as init parameters through the ServletContext.getInitParameter() and ServletContext.getInitParameterNames() methods. The context.init. prefix is removed from the parameter name.

The following example will register a ServletContextHelper for the context path /eclipse and will retrieve resources from http://www.eclipse.org. It is registered with BUNDLE service scope to ensure that every bundle gets its own instance, which is for example important to resolve resources from the correct bundle.

Note:
Create it in a new package org.fipro.inverter.http.eclipse within the org.fipro.inverter.http project, as we will need to create some additional resources to show how this example actually works.

@Component(
    service = ServletContextHelper.class,
    scope = ServiceScope.BUNDLE,
    property = {
        "osgi.http.whiteboard.context.name=eclipse",
        "osgi.http.whiteboard.context.path=/eclipse" })
public class EclipseServletContextHelper extends ServletContextHelper {

    @Override
    public URL getResource(String name) {
        // remove the path from the name
        name = name.replace("/eclipse", "");
        try {
            return new URL("http://www.eclipse.org/" + name);
        } catch (MalformedURLException e) {
            return null;
        }
    }
}

Note:
With PDE remember to add org.osgi.service.http.context to the Imported Packages. With Bndtools remember to add the new package to the Private Packages in the bnd.bnd file on the Contents tab.
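The rewriting performed in getResource() above can be tried standalone. Constructing a java.net.URL does not open a connection, so this sketch runs offline:

```java
import java.net.MalformedURLException;
import java.net.URL;

public class ContextResourceMappingDemo {

    // Same logic as getResource() above: strip the /eclipse context path and
    // resolve the remainder against www.eclipse.org.
    static URL map(String name) {
        String stripped = name.replace("/eclipse", "");
        try {
            return new URL("http://www.eclipse.org/" + stripped);
        } catch (MalformedURLException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(map("/eclipse/img/nattable/images/FeatureScreenShot.png"));
    }
}
```

Note that the remaining path still starts with a slash, so the resulting URL contains a double slash after the host name; most web servers tolerate this.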

To associate servlets, servlet filters, resources and listeners with a ServletContextHelper, they share common service properties (see OSGi Compendium Specification Release 6 – Table 140.3) in addition to the service-specific properties:

  • osgi.http.whiteboard.context.select – An LDAP-style filter to select the associated ServletContextHelper service to use. Any service property of the Servlet Context Helper can be filtered on. If this property is missing the default Servlet Context Helper is used. For example, to select a Servlet Context Helper with name myCTX provide the value (osgi.http.whiteboard.context.name=myCTX). To select all Servlet Context Helpers provide the value (osgi.http.whiteboard.context.name=*).
  • osgi.http.whiteboard.target – The value of this service property is an LDAP-style filter expression to select the Http Whiteboard implementation(s) to handle this Whiteboard service. The LDAP filter is used to match HttpServiceRuntime services. Each Http Whiteboard implementation exposes exactly one HttpServiceRuntime service. This property is used to associate the Whiteboard service with the Http Whiteboard implementation that registered the HttpServiceRuntime service. If this property is not specified, all Http Whiteboard implementations can handle the service.

The following example will register a servlet only for the introduced /eclipse context:

@Component(
    service=Servlet.class,
    property= {
        "osgi.http.whiteboard.servlet.pattern=/image",
        "osgi.http.whiteboard.context.select=(osgi.http.whiteboard.context.name=eclipse)"
    },
    scope=ServiceScope.PROTOTYPE)
public class ImageServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        resp.setContentType("text/html");
        resp.getWriter().write("Show an image from www.eclipse.org");
        resp.getWriter().write(
            "<p><img src='img/nattable/images/FeatureScreenShot.png'/></p>");
    }

}

And to make this work in combination with the introduced ServletContextHelper we need to additionally register the resources for the /img/* pattern, which are also only assigned to the /eclipse context:

@Component(
    service = EclipseImageResourceService.class,
    property = {
        "osgi.http.whiteboard.resource.pattern=/img/*",
        "osgi.http.whiteboard.resource.prefix=/eclipse",
        "osgi.http.whiteboard.context.select=(osgi.http.whiteboard.context.name=eclipse)"})
public class EclipseImageResourceService { }

If you start the application and browse to http://localhost:8080/eclipse/image you will see an output from the servlet together with an image that is loaded from http://www.eclipse.org.

Note:
The component properties and predefined values are available via org.osgi.service.http.whiteboard.HttpWhiteboardConstants. So you don’t need to remember them all and can also retrieve some additional information about the properties via the corresponding Javadoc.

The sources for this tutorial are hosted on GitHub in the already existing projects:


Posted in Dirk Fauth, Eclipse, Java, OSGi | 5 Comments