Sunday, February 18, 2018

Using Backbone Model to post its attributes together with files as multipart form data

Backbone is a tiny and very flexible library for REST-based front ends such as single-page applications. But I did not find many examples of how to save a model containing both text values and selected files. Since the data contains files, it has to be encoded as multipart form data that includes the binary files and the text attributes in JSON format. A possible short JavaScript implementation in the Backbone model is quite simple:

selectedFiles: [], // the photos to be uploaded
saveMultipart: function () {
    var formData = new FormData();
    this.selectedFiles.forEach(function (photo) {
        formData.append('photo', photo, photo.originalNameSize.nameWithoutExtension);
    });
    formData.append('dataObject', JSON.stringify(this.toJSON()));
    var options = {
        data: formData,
        contentType: false,
        processData: false // make sure jQuery does not serialize the FormData
    };
    return this.save({}, options);
},
// other useful functions

To submit the model together with the files stored in selectedFiles, the saveMultipart method should be called instead of the usual save.

In a JAX-RS-based REST backend, the JSON is extracted from the part (arbitrarily named dataObject here) and parsed into Java classes, whereas the files from the remaining parts are processed in some other way.
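For reference, the multipart body the browser assembles from the FormData above looks roughly like what the following sketch builds. The boundary string, the octet-stream content type and the class name are illustrative assumptions, not taken from the sample application:

```java
import java.nio.charset.StandardCharsets;

public class MultipartBodySketch {

    // Assembles a minimal multipart/form-data body with one file part named
    // "photo" and one text part named "dataObject", as in the Backbone model
    static String build(String boundary, byte[] photo, String photoName, String json) {
        String crlf = "\r\n";
        return "--" + boundary + crlf
                + "Content-Disposition: form-data; name=\"photo\"; filename=\"" + photoName + "\"" + crlf
                + "Content-Type: application/octet-stream" + crlf + crlf
                + new String(photo, StandardCharsets.ISO_8859_1) + crlf
                + "--" + boundary + crlf
                + "Content-Disposition: form-data; name=\"dataObject\"" + crlf + crlf
                + json + crlf
                + "--" + boundary + "--" + crlf;
    }
}
```

Each part carries its own Content-Disposition header with the part name; the backend locates the dataObject part by that name.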

A working sample application, which is not Backbone-based overall but does include an example of such a Backbone model-based data submission, is stored here. It will be described in more detail in a post about client-side image resizing and uploading the resized images to REST resources.

Saturday, February 17, 2018

Conversion between Date and LocalDateTime

I do not know why the conversion between Date and LocalDateTime is so complicated and ugly. But I have to do it increasingly often. So this note is on how to convert between Date, LocalDateTime and milliseconds.

public static void main(String[] args) {
    // convert from Date to LocalDateTime
    Date d = new Date();
    LocalDateTime ld = LocalDateTime.ofInstant(d.toInstant(), ZoneId.systemDefault());

    // convert from LocalDateTime to Date
    Date d2 = Date.from(ld.atZone(ZoneId.systemDefault()).toInstant());
    System.out.println("date : " + d);
    System.out.println("ldate: " + ld);
    System.out.println("date2: " + d2);

    // compare milliseconds
    System.out.println("millis : " + d.getTime());
    System.out.println("lmillis: " + ld.atZone(ZoneId.systemDefault()).toInstant().toEpochMilli());
}

The output is:

date : Sat Feb 17 14:41:00 CET 2018
ldate: 2018-02-17T14:41:00.941
date2: Sat Feb 17 14:41:00 CET 2018

millis : 1518874860941
lmillis: 1518874860941
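The remaining direction, from milliseconds back to LocalDateTime, also goes through Instant. A minimal sketch; the fixed UTC offset is my choice to keep the example deterministic, ZoneId.systemDefault() works the same way:

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class MillisConversion {

    // epoch milliseconds -> Instant -> LocalDateTime at the given offset
    static LocalDateTime fromMillis(long millis) {
        return LocalDateTime.ofInstant(Instant.ofEpochMilli(millis), ZoneOffset.UTC);
    }
}
```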

Thursday, February 1, 2018

CORS filter for JAX-RS

It should be registered as a singleton in the Application subclass.

public class CORSResponseFilter implements ContainerResponseFilter {

    Logger logger = LoggerFactory.getLogger(getClass().getName());

    @Override
    public void filter(ContainerRequestContext requestContext, ContainerResponseContext responseContext) throws IOException {
        MultivaluedMap<String, Object> headers = responseContext.getHeaders();

        String origin = requestContext.getHeaderString("Origin");
        if (origin != null) {
            headers.add("Access-Control-Allow-Origin", origin);
            headers.add("Access-Control-Allow-Methods", "GET, POST, DELETE, PUT");
            headers.add("Access-Control-Allow-Headers", "X-Requested-With, Content-Type, X-Codingpedia");
            headers.add("Access-Control-Allow-Credentials", true);
        }
    }
}

RESTEasy also provides a CORS filter class (CorsFilter). I do not know why it is not only a response but also a request filter. It is used like any other filter but requires configuration of all the headers to be added.

CorsFilter filter = new CorsFilter();
filter.getAllowedOrigins().add("*"); // configure origins, methods, headers, then register as a singleton

Configuring Jackson object mapper in RESTEasy

While transforming between Java classes and JSON, the Jackson library considers both its own annotations and the conventional JAXB annotations. The final result may not be obvious. Let's consider a sample class from a sample application:

public class MyBean {

    String firstName, lastName, fullName;

    public MyBean(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    public MyBean() {
    }

    @XmlElement(name = "jaxbFirstName")
    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    @JsonProperty("jacksonLastName")
    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    @JsonProperty("jacksonFullName")
    @XmlElement(name = "jaxbFullName")
    public String getFullName() {
        return firstName + " " + lastName;
    }

    public void setFullName(String fullName) {
        this.fullName = fullName;
    }
}

The Jackson instance set up by RESTEasy considers both kinds of annotations but prefers its own over the JAXB ones. (Note, a default object mapper ignores JAXB annotations entirely; see below.) The default output for an instance of the class will be:

{"jaxbFirstName":"John","jacksonLastName":"Smith","jacksonFullName":"John Smith"}
Configuring Jackson used by JAX-RS

To configure Jackson, one has to provide one's own configured instance by means of a context provider implementing the ContextResolver interface. The provider produces an ObjectMapper instance (according to the authors, it can be reused) that is then used by JAX-RS. The following class from another sample application provides an object mapper that produces nicely formatted JSON.

public class MyObjectMapperProvider implements ContextResolver<ObjectMapper> {

    ObjectMapper objectMapper = createObjectMapper();

    @Override
    public ObjectMapper getContext(final Class<?> type) {
        return objectMapper;
    }

    ObjectMapper createObjectMapper() {
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.enable(SerializationFeature.INDENT_OUTPUT);
        return objectMapper;
    }
}

And the custom provider has to be registered as a singleton:

public class MyApplication extends Application {

    Set<Object> singletons;
    Set<Class<?>> resources;

    public MyApplication() {
        singletons = new HashSet<Object>() {
            {
                add(new MyObjectMapperProvider());
            }
        };
        resources = new HashSet<Class<?>>() {
            {
                // add the root resource classes here
            }
        };
    }

    // note, it is called twice during RESTEasy initialization
    @Override
    public Set<Class<?>> getClasses() {
        return resources;
    }

    // note, it is called twice during RESTEasy initialization
    @Override
    public Set<Object> getSingletons() {
        return singletons;
    }
}

The JSON received from the service is now formatted:

{
  "firstName" : "John",
  "jacksonLastName" : "Smith",
  "jacksonFullName" : "John Smith"
}

Note, unlike the Jackson object mapper set up by RESTEasy, a default Jackson object mapper (created as above with ObjectMapper objectMapper = new ObjectMapper()) does not recognize JAXB annotations.

Enabling JAXB annotations in Jackson object mapper

The customized object mapper instance has to be further configured in the context provider shown above:

    ObjectMapper createObjectMapper() {
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.enable(SerializationFeature.INDENT_OUTPUT).registerModule(new JaxbAnnotationModule());
        return objectMapper;
    }

Now JAXB annotations are privileged over the Jackson ones in the produced JSON:

{
  "jaxbFirstName" : "John",
  "jacksonLastName" : "Smith",
  "jaxbFullName" : "John Smith"
}

Disabling unconventional Jackson annotations

The customized object mapper instance has to be further configured in the context provider shown above:

    ObjectMapper createObjectMapper() {
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.enable(SerializationFeature.INDENT_OUTPUT).setAnnotationIntrospector(new JaxbAnnotationIntrospector());
        return objectMapper;
    }

Now Jackson annotations are ignored in the produced JSON:

{
  "lastName" : "Smith",
  "jaxbFirstName" : "John",
  "jaxbFullName" : "John Smith"
}

Ignore empty properties during serialization

Another useful setting prevents nulls and empty collections from being included in the resulting JSON.

public class MyObjectMapperProvider implements ContextResolver<ObjectMapper> {

    static ObjectMapper objectMapper = createObjectMapper();

    @Override
    public ObjectMapper getContext(final Class<?> type) {
        return objectMapper;
    }

    static ObjectMapper createObjectMapper() {
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.enable(SerializationFeature.INDENT_OUTPUT)
                .setAnnotationIntrospector(new JaxbAnnotationIntrospector())
                .setSerializationInclusion(JsonInclude.Include.NON_EMPTY);
        return objectMapper;
    }

    public static ObjectMapper getObjectMapper() {
        return objectMapper;
    }
}

Sign in with Google into a web application using the server flow.

This post is based on the Google documentation on Google's OAuth 2.0 authentication, where OpenID Connect seems to be the most pertinent and comprehensive section. But overall the documentation is quite confusing, so I summarize it here. A sample Java web application is on GitHub.

First, obtain OAuth 2.0 credentials and set redirect URIs in the Google API Console:

Authentication comes down to obtaining an ID token via HTTPS from Google. The Google documentation calls the two most commonly used approaches for authenticating a user the server/basic flow and the implicit flow:

  • The server/basic flow allows the back-end server of an application to identify the user.
  • The implicit flow is when a client-side JavaScript app accesses APIs directly and not via its back-end server.

The major difference is that in the implicit flow tokens are sent in the URL hash (fragment), whereas in the server flow tokens are sent as URL parameters. Also, unlike the implicit flow, the server flow requires the client secret. Here I illustrate the server flow for authentication. The implicit flow using the Google API JavaScript library I demonstrated in a previous post.

When a user tries to sign in with Google, the application has to:

  1. Send an authentication request with the appropriate parameters to Google authorization_endpoint.
    • client_id from the API Console.
    • response_type should be code, which launches a basic flow. If the value is token id_token or id_token token, an implicit flow is launched, requiring the use of JavaScript at the redirect URI to retrieve tokens from the URI #fragment.
    • nonce A random value generated by your app that enables replay protection.
    • scope should be openid email. The scope value must begin with the string openid and then include profile or email or both.
    • redirect_uri the URL to which the browser will be redirected by Google after the user completes the authorization flow. The URL must exactly match one of the redirect_uri values listed in the API Console. Even a trailing slash / matters.
    • state should include the value of the anti-forgery unique session token, as well as any other information needed to recover the context when the user returns to your application, e.g., the starting URL.
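The parameters above can be assembled into the authentication request URL as in this sketch. The endpoint is Google's OAuth 2.0 authorization endpoint; the class name and all argument values are placeholders of mine, not real credentials:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AuthUrlSketch {

    // Builds the step-1 authentication request URL; state and nonce must be
    // random values remembered in the session for the later checks
    static String buildAuthUrl(String clientId, String redirectUri, String state, String nonce) {
        return "https://accounts.google.com/o/oauth2/v2/auth"
                + "?client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&response_type=code"
                + "&scope=" + URLEncoder.encode("openid email", StandardCharsets.UTF_8)
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, StandardCharsets.UTF_8)
                + "&state=" + URLEncoder.encode(state, StandardCharsets.UTF_8)
                + "&nonce=" + URLEncoder.encode(nonce, StandardCharsets.UTF_8);
    }
}
```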

    A sample URL from the sign-in link (which looks like a button) in my sample application:

    Google handles the user authentication and user consent. After a user signs in to Google, the browser is redirected to the indicated url with two appended parameters:


    Note, if a user has one Gmail account and is logged in, the user will not see any Google consent page and will be automatically redirected. But if the user has several accounts or is logged out, he has to choose one or log in on the Google page.

    If the user approves the access request, an authorization code is added to the redirect_uri. Otherwise, the response contains an error message. Either the authorization code or the error message appears in the query string.

  2. Confirm that the state received from Google matches the state value sent in the original request.
  3. Exchange the authorization code for an access token and ID token.

    The response includes a one-time code parameter that can be exchanged for an access token and an ID token. For that, the server sends a POST request to the token_endpoint. The request must include the following parameters in the POST body:

    • code the received authorization code
    • client_id from the API Console
    • client_secret from the API Console
    • redirect_uri specified in the API Console
    • grant_type equals authorization_code
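These parameters form an application/x-www-form-urlencoded POST body; a sketch with placeholder values of mine:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class TokenRequestSketch {

    // Body of the POST request that exchanges the authorization code
    // for an access token and an ID token
    static String body(String code, String clientId, String clientSecret, String redirectUri) {
        return "code=" + URLEncoder.encode(code, StandardCharsets.UTF_8)
                + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&client_secret=" + URLEncoder.encode(clientSecret, StandardCharsets.UTF_8)
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, StandardCharsets.UTF_8)
                + "&grant_type=authorization_code";
    }
}
```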

    A sample request by my sample application:


    A successful response includes a JSON with fields:

    • access_token A token that can be sent to a Google API.
    • id_token containing the information about the user
    • expires_in The remaining lifetime of the access token.
    • token_type always has the value Bearer.

    The response to the request above was:

  4. Obtain user information from the ID token

    An ID token is a JWT (JSON Web Token): a signed Base64url-encoded JSON object. Since it is received directly from Google over HTTPS, it does not need to be validated. The encoded JSON contains the following fields:

    • email The user's email address provided only if your scope included email
    • profile The URL of the user's profile page provided when scope included profile
    • name The user's full name, in a displayable form provided when scope included profile
    • nonce The value of the nonce supplied by your app in the authentication request. You should enforce protection against replay attacks by ensuring it is presented only once.

    A simple Java code to extract the email from an id token:

    public JsonObject decodeIdToken(String idToken) {
        String secondStr = idToken.split("\\.")[1];
        // JWT segments are Base64url-encoded, so the URL-safe decoder is needed
        byte[] payloadBytes = Base64.getUrlDecoder().decode(secondStr);
        String json = new String(payloadBytes);
        JsonReader jsonReader = Json.createReader(new StringReader(json));
        return jsonReader.readObject();
    }

    The id token above was decoded by this method into:

  5. Authenticate the user in your application.

To keep my sample application simple, all those steps are done in a servlet. If any check fails, an error message is displayed with a link for signing in. If the user successfully logs in, his email is displayed. The email suffices for authentication in the back end.

@WebServlet(name = "MyAuthServlet", urlPatterns = {"/server"})
public class MyAuthServlet extends HttpServlet {

    OpenId openId = new OpenId();

    @Override
    protected void service(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        String code = request.getParameter(CODE);
        HttpSession session = request.getSession();
        String receivedState = request.getParameter(STATE);
        String savedState = (String) session.getAttribute(STATE);
        String newState = openId.getState();
        session.setAttribute(STATE, newState);
        String savedNonce = (String) session.getAttribute(NONCE);
        String newNonce = openId.getNonce();
        session.setAttribute(NONCE, newNonce);

        try (PrintWriter out = response.getWriter()) {
            if (code != null) {
                if (savedState.equals(receivedState)) {
                    String idToken = openId.exchangeCodeForToken(code);
                    if (idToken != null) {
                        JsonObject json = openId.decodeIdToken(idToken);
                        String receivedNonce = json.getString(NONCE);
                        if (savedNonce.equals(receivedNonce)) {
                            String email = json.getString(EMAIL);
                            out.println("<p>Hello " + email + "</p>");
                        } else {
                            out.println("Nonces differ");
                        }
                    } else {
                        out.println("Id token is missing");
                    }
                } else {
                    out.println("States are different");
                }
            } else {
                out.println("<p>Code is null</p>");
            }
            out.println("<a href='" + openId.getUrl(newState, newNonce) + "'>Click to sign in</a>");
        }
    }
}
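The getState and getNonce helpers of OpenId are not shown in this post. A minimal sketch of how such random values can be generated; the class and method names are mine:

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class RandomToken {

    static final SecureRandom RANDOM = new SecureRandom();

    // 130 random bits rendered in base 32: an unguessable URL-safe value
    // usable for both the state and the nonce parameters
    static String next() {
        return new BigInteger(130, RANDOM).toString(32);
    }
}
```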

Wednesday, January 31, 2018

POST with HttpUrlConnection

A short note on POST requests:

public InputStream post(String url, String params) throws IOException {
    URL u = new URL(url);
    HttpURLConnection con = (HttpURLConnection) u.openConnection();
    con.setRequestMethod("POST");
    con.setDoOutput(true); // allows writing the request body
    try (OutputStreamWriter out = new OutputStreamWriter(con.getOutputStream())) {
        out.write(params);
    }
    return con.getInputStream();
}

A GET request is much simpler:

    String getAmazonHostName() throws IOException {
        URL url = new URL("");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
            String inputLine = in.readLine();
            System.out.println("amazon public hostname: " + inputLine);
            return inputLine;
        }
    }

Tuesday, January 30, 2018

Switching off automatic discovery of resource classes and providers in JAX-RS by explicitly registering them

Switching on/off automatic discovery of resource classes and providers in JAX-RS

The automatic discovery may complicate things when provider classes are included in the libraries used by an application or preinstalled in a server, for example the Jackson-related jars in Wildfly. So I prefer to switch off every feature I am not aware of. The JAX-RS specification states:

  • When an Application subclass is present in the archive, if both Application.getClasses and Application.getSingletons return an empty collection then all root resource classes and providers packaged in the web application MUST be included and the JAX-RS implementation is REQUIRED to discover them automatically by scanning a .war file as described above.
  • If either getClasses or getSingletons returns a non-empty collection then only those classes or singletons returned MUST be included in the published JAX-RS application.

So, essentially, if the methods getClasses and getSingletons are not overridden, the resource classes and providers are discovered automatically. Let's use two root resource classes to illustrate the rule. The full illustration code is available on GitHub.

@Path("registered")
public class MyRegisteredResource {

    @GET
    public String getBook() {
        return "Hello Registered World!";
    }
}

@Path("unregistered")
public class MyUnregisteredResource {

    @GET
    public String getBook() {
        return "Hello Unregistered World!";
    }
}

Both resources operate if the Application class is empty:

public class MyApplication extends Application {
}

If I override the getClasses method, only the resource class returned by the method will function:

public class MyApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        return new HashSet<Class<?>>() {
            {
                add(MyRegisteredResource.class);
            }
        };
    }
}

In which method to register a class?

Quotes from the JAX-RS specification on the lifecycle of providers and resource classes:

  • By default a new resource class instance is created for each request to that resource.
  • By default a single instance of each provider class is instantiated for each JAX-RS application.
  • By default, just like all the other providers, a single instance of each filter or entity interceptor is instantiated for each JAX-RS application.

So the root resource classes should be returned by getClasses, whereas the providers, including filters, by the getSingletons method.

Getting the standard servlet-defined types such as HttpServletRequest in a ContainerRequestFilter

The worst feature of JAX-RS filters is that there is no straightforward way to access the HttpServletRequest instance. A reference to HttpServletRequest can be injected into managed classes using the @Context annotation. However, according to the specification, the filters are by default instantiated as singletons. That means that per-request injection will not work.

If you want to access in a filter any of the standard servlet types such as HttpServletRequest, HttpServletResponse, ServletConfig or ServletContext, the filter has to be registered in getClasses, so that its instance is created and injected for each request. Otherwise injection is impossible, and without it there is no way to access the servlet-defined types.

Monday, January 29, 2018

Aligning horizontally and vertically a div with absolute position and unknown size inside a div

When the size of the div with absolute position is unknown, the simplest solution is using the translate function. Obviously, the container must not be static (for example, position: relative).

.absolute1 {
    position: absolute;
    background-color: antiquewhite;
    top: 50%;
    left: 50%;
    transform: translate(-50%, -50%);
}

[Demo: a text-filled relative container with the absolutely positioned element of unknown size centered inside it]

By the way, many authors suggest a solution that works only for divs with height and width set:

.absolute2 {
    position: absolute;
    background-color: antiquewhite;
    top: 0;
    left: 0;
    bottom: 0;
    right: 0;
    margin: auto;
}

[Demo: a text-filled relative container with the absolutely positioned element of known size centered inside it]

Saturday, January 27, 2018

Enabling SSL in Wildfly using a free certificate from Let's Encrypt

Let’s Encrypt is a free Certificate Authority. To enable HTTPS on a website, one needs to get a certificate from a Certificate Authority. Let’s Encrypt recommends using Certbot, a tool that validates your ownership of the target domain and fetches the certificates.

Installing Certbot on CentOS 7
sudo yum install epel-release
sudo yum install certbot
Validating a domain to fetch a certificate

Adapted from the official documentation:

sudo certbot certonly --manual --preferred-challenges http -d -d

Certbot asks to create two files so that they are accessible at the specified URLs. The output is like:

Create a file containing just this data:


And make it available on your web server at this URL:

--Press Enter to Continue--

Create a file containing just this data:


And make it available on your web server at this URL:

--Press Enter to Continue--

 - Congratulations! Your certificate and chain have been saved at:
   Your key file has been saved at:
   Your cert will expire on 2018-04-26. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"

During this process I just created the requested files in a war deployed to Wildfly:

So the certificate was successfully downloaded. Strangely, a normal user could not access the certificates because of the permissions of the containing folders. So I adjusted the permissions:

sudo chmod 755 /etc/letsencrypt/archive
sudo chmod 755 /etc/letsencrypt/live
cat /etc/letsencrypt/live/

The following files were created:

  • privkey.pem - Private key for the certificate.
  • fullchain.pem - The server certificate followed by intermediate certificates that web browsers use to validate the server certificate.
Importing the private key with the fetched certificate into a Java keystore (jks)

Wildfly 10 accepts only jks. So fullchain.pem has to be imported into a jks. However, keytool can import a certificate or an entire keystore, but it does not import a private key separated from the paired public key and certificate. Therefore, the private key has to be combined with the certificate into an acceptable PKCS12 keystore with the openssl command. Then that keystore can be imported into a jks. The keystores will be created in /opt/SSLCertificates/

cd /opt/SSLCertificates/
openssl pkcs12 -export -in /etc/letsencrypt/live/<domain>/fullchain.pem -inkey /etc/letsencrypt/live/<domain>/privkey.pem -out keystore.p12 -name wildfly

You need to enter a password (changeit here) for the keystore to be created. The file keystore.p12 is created.

keytool -importkeystore -deststorepass changeit -destkeypass changeit -destkeystore keystore.jks -srckeystore keystore.p12 -srcstoretype PKCS12 -srcstorepass changeit -v

The jks keystore.jks is created.

Configuring SSL in Wildfly

Stop the server and edit standalone.xml so that it contains:

<security-realm name="ApplicationRealm">
    <server-identities>
        <ssl>
            <keystore path="/opt/SSLCertificates/keystore.jks" keystore-password="changeit" alias="wildfly" key-password="changeit"/>
        </ssl>
    </server-identities>
    <authentication>
        <local default-user="$local" allowed-users="*" skip-group-loading="true"/>
        <properties path="application-users.properties" relative-to="jboss.server.config.dir"/>
    </authentication>
    <authorization>
        <properties path="application-roles.properties" relative-to="jboss.server.config.dir"/>
    </authorization>
</security-realm>

Start the server. Well, that's it, we're done.

Automatic certificate renewal

The only problem with the Let's Encrypt certificates is that they last only 90 days.

Coming soon: essentially, schedule in cron a script with the commands above. It seems that certbot has to be run with hooks, so that renew just repeats that command.

Thursday, January 25, 2018

Login into a web application with Facebook via redirect

Like Google's, the Facebook documentation on login describes two options: using the Facebook JavaScript SDK and without it. I used the second option, manually building a login flow, to create a JavaScript-free login flow for the back end.

In my simple web application the flow is as follows:

  • When the login button is clicked, the user is redirected to the Facebook page. The login button is inside an anchor tag:

    <a href="https://www.facebook.com/v2.11/dialog/oauth?client_id=...&redirect_uri=...&state=...&response_type=code&scope=email">

    The url contains the following parameters:

    • client_id from the app's dashboard.
    • redirect_uri will receive the response from the Facebook login. The uri must be whitelisted in the App Dashboard.
    • state maintains state between the request and callback. It will be appended unchanged to the redirect_uri.
    • response_type:
      • code - the response data is included as URL parameters and contains a code parameter. The data is accessible to the back end.
      • token - the response data is included as a URL fragment and contains an access token. The response hash can be accessed only on the client by JavaScript.
    • scope - a list of Permissions to request from the person using your app. Note, even if you request the email permission it is not guaranteed you will get an email address. For example, if someone signed up for Facebook with a phone number instead of an email address, the email field may be empty.
  • After a login attempt, the browser is redirected to the redirect_uri with the appended response parameters code and state:


    The state values in the original request and the response are the same. The code has to be included in a GET request to another endpoint. Additionally, client_id, the redirect_uri used in the initial request, and client_secret from the App Dashboard are required.

    The response is a JSON containing an access_token.


    That is the difference from Google sign-in: with Google, an id_token with the user's details is additionally received; with Facebook, an additional request is required.

  • Using the received access token, the server-side code makes yet another GET request to retrieve the user's email from the Graph API. The /me node is a special endpoint that translates to the user_id of the person whose access token is currently being used to make the API calls. Access tokens are portable: Graph API calls can be made from clients or from your server on behalf of clients. The calls to the Graph API are better secured by adding the appsecret_proof parameter.

    The response is a json:


    The retrieved email is used to log the user into the web application.

In the App Dashboard I used these security settings:

The option below allows only the API calls that either include appsecret_proof or are made from the same device on which the token was issued.

Wednesday, January 24, 2018

Generating MD5, SHA-1, SHA-256, SHA-384, SHA-512 message digests

Just a note to keep within sight. To generate hashes using any of the available algorithms, I use the digest method of a Java class:

public class MessageHash {

    static String DEFAULT_ALGORITHM = "SHA-1"; // MD5, SHA-1, SHA-256, SHA-384, SHA-512

    static String digest(String input, String algorithm) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        return HexConverter.bytesToHex(md.digest(input.getBytes()));
    }

    public static String digest(String input) {
        try {
            return digest(input, DEFAULT_ALGORITHM);
        } catch (NoSuchAlgorithmException ex) {
            throw new RuntimeException("This is impossible");
        }
    }
}
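For instance, hashing an empty string with the default algorithm yields the well-known SHA-1 digest da39a3ee5e6b4b0d3255bfef95601890afd80709. A standalone check of the same logic, with the hex conversion inlined so it does not depend on the HexConverter class shown below:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha1Check {

    // SHA-1 digest rendered as lowercase hex
    static String sha1Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(input.getBytes())) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException ex) {
            throw new RuntimeException(ex);
        }
    }
}
```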

HmacSHA256 - sha256 hash using a key (for appsecret_proof in Facebook)

Unlike Google's, Facebook access tokens are portable: they can be used without a client or app id. To kind of protect, or rather label, them, all Graph API calls from a server (and only from a server) should be secured by adding a parameter appsecret_proof set to the SHA-256 HMAC of the access token computed with the app secret as the key.

Here is an example of how to do it on a Java server:

public class Sha256Digest {

    Mac mac;

    Sha256Digest() throws UnsupportedEncodingException, NoSuchAlgorithmException, InvalidKeyException {
        this("secret"); // the key is the Facebook app secret in the real application
    }

    Sha256Digest(String key) throws UnsupportedEncodingException, NoSuchAlgorithmException, InvalidKeyException {
        SecretKeySpec sk = new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8.toString()), "HmacSHA256");
        mac = Mac.getInstance("HmacSHA256");
        mac.init(sk); // the Mac must be initialized with the key before use
    }

    String hash(String msg) throws UnsupportedEncodingException {
        return HexConverter.bytesToHex(mac.doFinal(msg.getBytes(StandardCharsets.UTF_8.toString())));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new Sha256Digest().hash("Test"));
    }
}

For converting an array of bytes into a string of hexadecimal values I use an additional class:

public class HexConverter {

    private final static char[] HEXARRAY = "0123456789abcdef".toCharArray();

    public static String bytesToHex(byte[] bytes) {
        char[] hexChars = new char[bytes.length * 2];
        for (int j = 0; j < bytes.length; j++) {
            int v = bytes[j] & 0xFF;
            hexChars[j * 2] = HEXARRAY[v >>> 4];
            hexChars[j * 2 + 1] = HEXARRAY[v & 0x0F];
        }
        return new String(hexChars);
    }
}

The resulting string can be included in Facebook calls like:
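For illustration only, such a call could be assembled as below. The endpoint path and field names follow public Graph API conventions but are assumptions here, and the token and secret are placeholders, not real credentials:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class AppSecretProofDemo {

    // appsecret_proof = HMAC-SHA256 of the access token, keyed by the app secret, hex-encoded
    static String proofFor(String token, String appSecret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(appSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] raw = mac.doFinal(token.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : raw) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String token = "USER_ACCESS_TOKEN"; // placeholder values, not real credentials
        String proof = proofFor(token, "APP_SECRET");
        // the endpoint and fields below are illustrative
        System.out.println("https://graph.facebook.com/me?fields=email"
                + "&access_token=" + token + "&appsecret_proof=" + proof);
    }
}
```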

To reject API calls lacking the proof, the Require App Secret switch should be activated in the application settings: "Only allow calls from a server and require app secret or app secret proof for all API calls."

Using JSON-P to parse heterogeneous JSON in HTTP responses

Suppose you need to query the Facebook Graph API. The responses to your HTTP requests are in JSON format. The very convenient Java API for JSON Processing (JSON-P) helps to parse and query these heterogeneous JSON responses.

For example, I try to get the email of the user whose access token was obtained after the user logged into my application. For this, I access a URL like:

The response is JSON, with the @ character encoded as hexadecimal digits:


To easily execute an HTTP request, parse the response and get the decoded email property I use:

String readUserEmailFromGraphAPI(String token) throws IOException {
    try (JsonReader jsonReader = Json.createReader(
            new InputStreamReader(
                    new URL("" + token + "&debug=all&fields=email&format=json&method=get&pretty=0")
                            .openStream()))) {
        JsonObject obj = jsonReader.readObject();
        return obj.getString("email");
    }
}

How to get a request url hash on the back end server. Reconstructing the full request url in a servlet.

It is impossible: the browser does not include the hash in the request URL sent to the server.

Just a note for myself on what values of the request path can be extracted from HttpServletRequest in a servlet:

The full request path can be reconstructed by a function like:

request.getRequestURL() + (request.getQueryString() != null ? ("?" + request.getQueryString()) : "")
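A minimal sketch of that reconstruction, with requestURL and queryString standing in for request.getRequestURL() and request.getQueryString():

```java
public class FullUrl {

    // Reassemble the full URL from the pieces a servlet container exposes;
    // the query string may be null when the request has no parameters.
    static String fullRequestUrl(String requestURL, String queryString) {
        return requestURL + (queryString != null ? "?" + queryString : "");
    }

    public static void main(String[] args) {
        System.out.println(fullRequestUrl("http://localhost:8080/test/page", "a=1&b=2"));
        System.out.println(fullRequestUrl("http://localhost:8080/test/page", null));
    }
}
```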

Monday, January 22, 2018

How to activate gzip compression of selected content types in Tomcat or Wildfly

Another note for myself. To enable gzip compression in Tomcat, add the additional attributes compression and compressibleMimeType to the Connector tag in CATALINA_HOME/conf/server.xml:

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="443" compressibleMimeType="application/javascript,text/css,application/json" compression="on"/>

Wildfly is not as well documented as Tomcat, so this note, assembled from pieces of information, saves time. Essentially, one needs to enable and configure a gzip filter using Undertow predicates. Edit the default configuration file standalone.xml:

<subsystem xmlns="urn:jboss:domain:undertow:3.1">
    <buffer-cache name="default"/>
    <server name="default-server">
        <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/>
        <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/>
        <host name="default-host" alias="localhost">
            <location name="/" handler="welcome-content"/>
            <access-log pattern="%h %t &quot;%r&quot; %s &quot;%{i,User-Agent}&quot;" prefix="myaccess."/>
            <filter-ref name="gzipfilter" predicate="regex[pattern='text/html|text/css|application/javascript|application/json',value=%{o,Content-Type}] and max-content-size[value=1024]"/>
        </host>
    </server>
    <servlet-container name="default">
        <persistent-sessions path="sessions" relative-to="jboss.server.temp.dir"/>
    </servlet-container>
    <handlers>
        <file name="welcome-content" path="${jboss.home.dir}/welcome-content"/>
    </handlers>
    <filters>
        <gzip name="gzipfilter"/>
    </filters>
</subsystem>

All the possible predicates are listed in Undertow documentation. Some people use url-based predicates like:

<filter-ref name="gzipFilter" predicate="path-suffix['.css'] or path-suffix['.js']" />

Alternatively, one can use a custom gzip compression servlet filter that can be more easily configured to target some specific output. A working example is in GitHub. I keep this sample only because it works well and its GZIPOutputStream could potentially be replaced by some other stream to, for example, encrypt the output or produce hashes.
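The core of such a filter is wrapping the response output stream in a GZIPOutputStream. A stdlib-only round-trip sketch of that wrapping (independent of the servlet API, with byte arrays standing in for the response body):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipWrapDemo {

    // Everything written to the wrapped stream comes out gzip-encoded,
    // exactly as a filter's response wrapper would produce it.
    static byte[] gzip(byte[] plain) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(plain);
        }
        return bos.toByteArray();
    }

    // Decompress, as the browser would on receiving Content-Encoding: gzip.
    static byte[] gunzip(byte[] packed) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(packed))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) > 0) {
                bos.write(buf, 0, n);
            }
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] original = "some compressible response body".getBytes("UTF-8");
        System.out.println(new String(gunzip(gzip(original)), "UTF-8"));
    }
}
```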

Google Sign in into a website using redirect ux_mode

The Google Javascript client library used for sign-in is built on the OpenID Connect protocol, which is straightforward. The library uses the implicit flow, whereby tokens are passed in the URL hash. It is not a good option for server-side authentication. It differs from the less complicated basic/server flow, in which tokens are passed as URL parameters. I describe the server flow in a separate post.

Google Sign-In for Websites documentation provides only examples where users sign in via a Google popup. I adapted their code so that redirect, the other consent flow option, is used instead. I also added primitive backend code that processes the ID token. In my sample web application saved to GitHub, the entire consent flow happens in one window without any popups, because the initialization is launched with the following parameters:

gapi.auth2.init({
    client_id: clientId,
    fetch_basic_profile: false,
    scope: 'email',
    ux_mode: 'redirect',
    redirect_uri: 'http://localhost:8080/test/'
});

The application can be deployed to Tomcat or anywhere else, but first a client id should be generated in the Google API console and copied to the Constants class.

For the unauthenticated users the welcome page displays only the standard Google Sign-In button that meets the strict Google branding guidelines.

On clicking the button the browser is redirected to Google authentication page.

If the user has only one Google account and is already signed in, he is immediately redirected back to the original page. Otherwise, the user has to select which account to sign in with, and upon authentication he is redirected back to the original page. To imitate a complete authentication process, the page forwards the ID token received from Google to the REST resource in the Java backend. The backend processes the token and sends back a JSON with the user's email. So for authenticated users the only page displays their email received from the Java backend and a link for signing out.

Thursday, January 18, 2018

Resizing selected pictures in a browser before uploading them to a REST resource in a backend

The sample application uploads multipart data comprising data from several text inputs together with several photo files selected in a file-type input to a JAX-RS REST resource. Note, multipart data is not mentioned in the JAX-RS specification, so the back end uses RESTEasy-specific features. Before uploading, the files are resized in the browser. Then, they are scaled down to thumbnails in the back end.
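The back-end thumbnail scaling mentioned above can be sketched with stdlib BufferedImage operations; the 100 px bound below is an assumption, not the application's real setting:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class ThumbDemo {

    // Scale an image so its longer side equals maxSize, preserving the aspect ratio.
    static BufferedImage thumbnail(BufferedImage src, int maxSize) {
        int w = src.getWidth(), h = src.getHeight();
        if (w > h) {
            h = Math.max(1, h * maxSize / w);
            w = maxSize;
        } else {
            w = Math.max(1, w * maxSize / h);
            h = maxSize;
        }
        BufferedImage thumb = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = thumb.createGraphics();
        g.drawImage(src, 0, 0, w, h, null); // draw the source scaled into the thumbnail
        g.dispose();
        return thumb;
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(1200, 800, BufferedImage.TYPE_INT_RGB);
        BufferedImage t = thumbnail(src, 100);
        System.out.println(t.getWidth() + "x" + t.getHeight()); // prints 100x66
    }
}
```

In the real application the scaled image would then be written out with ImageIO.write, which is why only the jpg, bmp, gif and png input formats matter.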

The web application is adapted for Wildfly, but it works as well with Tomcat if the scope of RESTEasy-related dependencies is changed from provided to the default by removing it.

How to style a file input

The input will accept multiple files, but only images. The file-type input itself cannot be changed much. A workaround is to use a label tag and hide the input with CSS. Note, the ugly styling here serves merely to demonstrate that styling is possible.

<label id='dropbox' for='fileInput'><img src="imgs/File-Upload-icon.png"/>Select photos</label>
<input id='fileInput' type="file" accept="image/*" multiple />

A sample css:

input[type=file] {
    display: none;
}

label img {
    max-height: 1.5em;
}

label {
    border: 1px solid;
    display: inline-block;
    padding: 0.3em;
}
Resizing selected files using canvas and its toBlob function

The unique resized files are stored in an array:

var selectedFiles = []; // the array with the unique resized files that will be uploaded

When new pictures are selected using the file input, a change event listener is invoked:

$('input[type=file]').change(function () {
    resizeAndShowThumbs(this.files);
});

function resizeAndShowThumbs(files) {
    for (var c = 0; c < files.length; c++) {
        var file = files[c];
        if (file.type.startsWith("image/") && isFileNotYetIncluded(file)) {
            resize(file, showThumb);
        }
    }
}

function isFileNotYetIncluded(file) {
    for (var c = 0; c < selectedFiles.length; c++) {
        if (selectedFiles[c].originalNameSize.equals(file)) { // file has read-only name and size properties
            return false;
        }
    }
    return true;
}

The event listener calls the resize function only if a file is not yet included in the array. The files are identified by their names and initial sizes. After a file is resized the callback showThumb is called.

function showThumb(file) {
    $previewList.append('<li><p>' + file.name + '</p><img src="' + URL.createObjectURL(file)
            + '" onload="window.URL.revokeObjectURL(this.src);"/></li>');
}

The resized pictures are JPEG-compressed. The problem is that sometimes the resized JPEG file is bigger than the source file despite its smaller dimensions, so the smaller of the two files is selected. On the back end the pictures are converted into thumbnails using the ImageIO class, which accepts only the jpg, bmp, gif and png formats. In the unlikely case of the source file having an unacceptable format, the resized JPEG file is uploaded even if it is bigger.

var MAX_SIZE = 1200, MIME = 'image/jpeg', JPEG_QUALITY = 0.95;
// the files types accepted by java ImageIO
var acceptableTypes = ["image/gif", "image/png", "image/jpeg", "image/bmp"]; 

function size(size) {
    var i = Math.floor(Math.log(size) / Math.log(1024));
    return (size / Math.pow(1024, i)).toFixed(2) * 1 + ['b', 'kb', 'Mb'][i];
}

function resizePhoto(file, callback) {
    var image = new Image();
    image.onload = function () {
        var canvas = document.createElement('canvas');
        var width = this.width;
        var height = this.height;

        if (width > height) {
            if (width > MAX_SIZE) {
                height *= MAX_SIZE / width;
                width = MAX_SIZE;
            }
        } else {
            if (height > MAX_SIZE) {
                width *= MAX_SIZE / height;
                height = MAX_SIZE;
            }
        }

        canvas.width = width;
        canvas.height = height;
        canvas.getContext('2d').drawImage(image, 0, 0, width, height);
        canvas.toBlob(callback.bind(null, this.width, this.height, width, height), MIME, JPEG_QUALITY);
    };
    image.src = URL.createObjectURL(file);
}

function chooseSmallerFile(file, resizedFile) {
    if (file.size > resizedFile.size) {
        console.log('the resized file is smaller');
        return resizedFile;
    } else {
        // the resized file is bigger than the original
        // however, java ImageIO supports only jpg, bmp, gif, png, which perfectly match mime types; the front end should send only those types
        // if the file type is none of image/gif, image/png, image/jpeg, image/bmp, use the bigger resized file
        console.warn('the resized file is bigger than the original');
        if (acceptableTypes.indexOf(file.type) >= 0) {
            return file;
        } else {
            console.warn('but the source file type is unacceptable: ' + file.type);
            return resizedFile;
        }
    }
}

function resize(file, callback) {
    resizePhoto(file, function (originalWidth, originalHeight, resizedWidth, resizedHeight, resizedFile) {
        console.log('filename=' + file.name + '; size=' + size(file.size) + '=>' + size(resizedFile.size)
                + '; dimensions=' + originalWidth + '/' + originalHeight + '=>' + resizedWidth + '/' + resizedHeight);
        var smallerFile = chooseSmallerFile(file, resizedFile);
        smallerFile.originalNameSize = new NameAndSize(file.name, file.size); // the name is erased in the resized file; the name and size are used to select unique files
        selectedFiles.push(smallerFile);
        callback(smallerFile);
    });
}

The resizing code produces lots of file-size-related debug messages in the console. For example, when many pictures coming from different sources are selected:

The console messages indicate that sometimes it is cheaper to upload the original file with the bigger dimensions:

Dragging and dropping photos

Instead of clicking the file input label, one can drop files dragged from any file browser onto it. To implement drag and drop, only a few lines are required:

$('#dropbox').on("dragenter", onDragEnter).on("dragover", onDragOver).on("drop", onDrop);

function onDragEnter(e) {
    e.stopPropagation();
    e.preventDefault();
}

function onDragOver(e) {
    e.stopPropagation();
    e.preventDefault();
}

function onDrop(e) {
    e.stopPropagation();
    e.preventDefault();
    resizeAndShowThumbs(e.originalEvent.dataTransfer.files);
}

How the resized photos together with values from other inputs can be posted as multipart form data to a REST resource is described in a separate post, because this one would be too long.

Tuesday, January 16, 2018

Scheduling execution of scripts that access the target files via relative paths and disabling emails with the output

It is another note for myself. Under Linux, task execution is easy to schedule with the cron service, which can execute a script as the indicated user. The crond service reads /etc/crontab once a minute, so if the crontab file has been modified, the cron service does not need to be restarted. Any output from an executed script is mailed to the user (e.g. root) whose name is assigned to the MAILTO environment variable in the crontab. If the recipient is not you but your admin, he might not be happy with lots of spam. To disable repetitive emails with the output of executed jobs, add

&> /dev/null
to the end of each scheduled command.

A sample /etc/crontab:


# For details see man 4 crontabs

# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  * user-name  command to be executed
1  23  *  *  *  test  $MYDIR/ &> /dev/null

Suppose, to be more portable, a script refers to its target files in nested or sibling folders only via relative paths, so that if the file path of the scheduled script changes, nothing except the invoking line in the crontab has to be adjusted.

For example, the scheduled script is located in the Tomcat bin folder and it simply deletes the outdated log files in the Tomcat log folder. If Tomcat is moved to some other location, to ensure that the automatic task is still successfully executed, one needs to adjust only the path to the script in the crontab file, and nothing inside the script itself.

The parent folder path can be determined with a command:

script_parent_folder_path=$(dirname "$0")

A sample scheduled script deleting Tomcat log files that are older than one day:

script_parent_folder_path=$(dirname "$0")
find "$script_parent_folder_path/../logs/" \( -name "*.log" -or -name "*.txt" \) -type f -mtime +1 -exec rm -f {} \;

Monday, January 8, 2018

Displaying all SQL commands executed by MySQL Connector/J driver in a buggy or Hibernate-based application

Activating hibernate loggers

While developing an application that uses JPA to access a database, it is really useful to see how numerous and inefficient the executed SQL statements are. In fact, if you use any relations in entities, you can be surprised to learn how many SQL statements Hibernate or EclipseLink executes to load an entity with relations. According to the Hibernate documentation, the SQL statements can be displayed by enabling the org.hibernate.SQL logger. It is enough to add a line into

However, the logged statements will be incomplete with question marks in place of any values. For example, the output can be similar to:

update users set date_format=? where user_id=?
delete from users where user_id=?

To see the bind parameters, which are hidden by default, one needs to enable additional loggers:

But then the log becomes immense due to predominantly irrelevant output and thus quite illegible. So the point is - there is no standard way in Hibernate to see the executed SQL statements clean and complete. But there is an easy and universal workaround.

Using a customized MySQL logger

Even without JPA, while installing or debugging a poorly documented Java application, it helps to know what SQL commands fail or produce unexpected results. Recently, I have been installing and customizing such an application. Fortunately, it is an open source application and its code can be easily modified. Exposing the failing SQL statements helped me to make undocumented adjustments in the underlying MySQL database so that the application gradually started to function.

The SQL statements processed by MySQL driver can be displayed by adding property profileSQL to the connection URL.


The default logger included in the SQL driver will be used to produce the output. The problem is that the output will include not only the executed SQL statements but also several times as many lines with irrelevant content such as pointless diagnostic messages, timestamps or empty space. Overall the output will be illegible. To record only SQL statements, I composed a customized logger class that filters out all the pollution.

package com.mysql.jdbc.log;

import java.util.Date;

import com.mysql.jdbc.profiler.ProfilerEvent;
import java.text.DateFormat;
import java.text.SimpleDateFormat;

public class MyStandardLogger implements Log {

    public MyStandardLogger(String name) {
        this(name, false);
    }

    public MyStandardLogger(String name, boolean logLocationInfo) {
    }

    public boolean isDebugEnabled() { return true; }
    public boolean isErrorEnabled() { return true; }
    public boolean isFatalEnabled() { return true; }
    public boolean isInfoEnabled() { return true; }
    public boolean isTraceEnabled() { return true; }
    public boolean isWarnEnabled() { return true; }

    public void logDebug(Object message) { logInternal(message); }
    public void logDebug(Object message, Throwable exception) { logInternal(message); }
    public void logError(Object message) { logInternal(message); }
    public void logError(Object message, Throwable exception) { logInternal(message); }
    public void logFatal(Object message) { logInternal(message); }
    public void logFatal(Object message, Throwable exception) { logInternal(message); }
    public void logInfo(Object message) { logInternal(message); }
    public void logInfo(Object message, Throwable exception) { logInternal(message); }
    public void logTrace(Object message) { logInternal(message); }
    public void logTrace(Object message, Throwable exception) { logInternal(message); }
    public void logWarn(Object message) { logInternal(message); }
    public void logWarn(Object message, Throwable exception) { logInternal(message); }

    DateFormat df = new SimpleDateFormat("HH:mm:ss.SSS");

    protected void logInternal(Object msg) {
        if (msg instanceof ProfilerEvent) {
            ProfilerEvent evt = (ProfilerEvent) msg;
            String evtMessage = evt.getMessage();

            if (evtMessage != null) {
                System.out.println(">SQL: " + df.format(new Date()) + "\t" + evtMessage);
            }
        }
    }
}

The jar containing this class must be placed into the application class path. I put it into the same folder as the MySQL driver - CATALINA_HOME/lib.

In an ordinary application there would be only one place with the connection string. But to debug this application I needed to see the SQL statements received by the JDBC driver both from a Connection created by DriverManager and from DataSource classes obtained from Tomcat or some Spring connection pools. So in one Java class I modified the connection string:

String url = "jdbc:mysql://" + host + "/" + database +
        "?user=" + userName + "&password=" + password +
        "&profileSQL=true&logger=com.mysql.jdbc.log.MyStandardLogger";

In a Spring application context configuration XML the & sign cannot be used directly and has to be escaped as &amp;, so the connection string looked like:

<bean id="businessDataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="${db.driver}"/>
    <property name="url" value="${db.connection_string}${db.portal_db_name}?zeroDateTimeBehavior=convertToNull&amp;useSSL=false&amp;profileSQL=true&amp;logger=com.mysql.jdbc.log.MyStandardLogger"/>
    <property name="username" value="${db.user}"/>
    <property name="password" value="${db.password}"/>
</bean>

And in the Tomcat context.xml the URL was specified like:

<Resource name="jdbc/cbioportal" auth="Container" type="javax.sql.DataSource" maxActive="100" maxIdle="30" maxWait="10000"
        username="cbio_user" password="pass" driverClassName="com.mysql.jdbc.Driver"
        validationQuery="SELECT 1"
        url="jdbc:mysql://localhost:3306/cbioportal?profileSQL=true&amp;logger=com.mysql.jdbc.log.MyStandardLogger"/>
Another version of the MySQL logger passing SQL statements to the included slf4j-compatible logger

Wildfly is different from other servers in a few respects. I have not tried to understand why, but the output from System.out.println() is not always saved to the server log. So I used a similar class to log the SQL statements. The jar was added as a dependency of the MySQL driver. I will describe the unusual Wildfly-specific deployment of database drivers, which must be installed before a dependent datasource is created, in a later post.

package com.mysql.jdbc.log;

import com.mysql.jdbc.profiler.ProfilerEvent;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MySlf4JLogger extends StandardLogger {

    Logger logger = LoggerFactory.getLogger(getClass().getName());

    public MySlf4JLogger(String name) {
        super(name, false);
    }

    public MySlf4JLogger(String name, boolean logLocationInfo) {
        super(name, logLocationInfo);
    }

    DateFormat df = new SimpleDateFormat("HH:mm:ss.SSS");

    protected void logInternal(int level, Object msg, Throwable exception) {
        if (msg instanceof ProfilerEvent) {
            ProfilerEvent evt = (ProfilerEvent) msg;
            String str = evt.getMessage();
            if (str != null) {
                logger.info(">SQL: " + df.format(new Date()) + "\t" + str);
            }
        }
    }
}
Registering the logger of SQL statements in persistence.xml

This technique nicely exposes complete SQL statements with either Hibernate or EclipseLink. For example, here is how I use the logger in the persistence.xml used by my JUnit tests.

<persistence-unit name="JavaApplication316PUTEST" transaction-type="RESOURCE_LOCAL">
    <properties>
        <property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/wildfly?useSSL=false&amp;profileSQL=true&amp;logger=com.mysql.jdbc.log.MySlf4JLogger"/>
        <property name="javax.persistence.jdbc.user" value="wildfly"/>
        <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>
        <property name="javax.persistence.jdbc.password" value="1234"/>
        <property name="javax.persistence.schema-generation.database.action" value="none"/>
    </properties>
</persistence-unit>

Friday, January 5, 2018

Using Microsoft .pfx certificate to enable SSL in Tomcat

To enable SSL, one needs to specify a keystore with the keys to be used to secure connections. Several types of certificates and keystores exist. For Java applications the easiest option is a Java keystore generated by the Java keytool; its setup is well documented in the Tomcat documentation. To import a .pfx certificate generated by Microsoft tools, one first needs to convert it into a certificate acceptable to a Java keystore. I do not do it routinely, so I make here a note that might also be useful for others.

  1. Generate a keystore in a new folder for it:
    mkdir /data/keystore/
    cd /data/keystore/
    keytool -genkey -alias tomcat -keyalg RSA
  2. Upload a .pfx certificate (e.g. lvn00021v.pfx) to the created folder
  3. Execute two commands to extract the certified keys (Note, you will need to enter the password for the source keystore):
    openssl pkcs12 -in lvn00021v.pfx -nocerts -nodes -out key.pem
    openssl pkcs12 -in lvn00021v.pfx -nokeys -out cert.pem
  4. While executing the next command to export a keystore, enter the password for the new keystore changeit:
    openssl pkcs12 -export -in cert.pem -inkey key.pem -out server.p12 -name tomcat -CAfile ca.crt -caname root
  5. Import the exported keystore using the same password changeit, which is default for Tomcat:
    keytool -importkeystore -deststorepass changeit -destkeypass changeit -destkeystore keystore -srckeystore server.p12 -srcstoretype PKCS12 -srcstorepass changeit –v
  6. To disable any unsecured access to all the Tomcat-hosted applications, add the following lines to the end of CATALINA_HOME/conf/web.xml:
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>Protected Context</web-resource-name>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <!-- auth-constraint goes here if you require authentication -->
        <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
  7. Modify CATALINA_HOME/conf/server.xml:
    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="150" SSLEnabled="true" compressibleMimeType="application/javascript,text/css,application/json" compression="on">
        <SSLHostConfig>
            <Certificate certificateKeystoreFile="/data/keystore/keystore" type="RSA" />
        </SSLHostConfig>
    </Connector>