
JAXLondon 2013 Impressions

This was my third year in a row speaking at JAXLondon. As the conference is in London, my diary normally fills up with meetings during the conference and I have to dive in and out for my talk, missing most of the sessions. This year, however, I managed to spend Tuesday afternoon and all day Wednesday at the conference, so I thought I'd give you my impressions of some of the sessions I saw.

JAXLondon is a great relaxed conference with plenty of opportunities to talk with speakers over coffee, food or even beer. My only criticism is that most of the talks tend to be by vendors or open source project leads demoing technologies; it would be nice to see more "real world" talks along the lines of:

"We did this with this technology and this worked and this sucked."




Java EE 7 Platform: Boosting Productivity and Embracing HTML5 - Arun Gupta (Red Hat)

This was the first session I attended on Tuesday afternoon and it was a great whizz through all the new features of JEE7. The main points to take away were these.

The key focus for JEE7 is HTML5, WebSockets, developer productivity with POJOs, and meeting enterprise demands with new APIs, with the coolest new features being:

1) Websocket support
2) Batch processing
3) JSON support
4) Concurrency utilities with the correct JEE contexts for security, classloading etc.
5) JMS 2.0
6) Transactional annotations for POJOs (EJBs without EJBs)
7) JAX-RS 2.0
8) Default Resources
9) More annotated POJOS
10) Faces flow

JEE7 is now an extremely powerful framework for rapidly building modern applications without having to incorporate and manage loads of boilerplate libraries in your war files. Nowadays there's also no XML in sight. Coupled with NetBeans, programming in JEE7 is a joy.
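As a flavour of how lean the annotated POJO model is, a complete JSR 356 websocket endpoint in JEE7 is a single class (a minimal sketch; the path name is illustrative):

import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

// A complete JEE7 websocket endpoint: no XML, no interface to implement.
@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnMessage
    public String onMessage(String message) {
        return message; // the returned value is sent back to the client
    }
}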

Arun is always a great speaker and Red Hat have acquired themselves a great advocate for JEE. Check out Arun's collection of JEE7 samples at http://github.com/arun-gupta/javaee7-samples. Arun has also been filing bugs against WildFly when he can't get these samples working, which will hopefully improve the quality of WildFly.

From Java Code to Java Heap: Understanding the Memory Usage of Your App - Chris Bailey (IBM)

This was an interesting talk which I was surprised to see on the agenda, as it didn't seem "sexy" enough for a conference. It was a great back-to-basics talk about poor practice in the heap usage of minimally filled Java collections and Java strings. Chris demonstrated how to use MAT (the Eclipse Memory Analyzer) to work out how much memory you are wasting in your applications, and presented results for an application where nearly 50% of the heap was wasted purely through poor programming practices.
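The sort of waste being described is easy to picture; for instance (a sketch of the general pattern, not one of Chris's examples):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SparseCollections {
    public static void main(String[] args) {
        // A default HashMap reserves 16 buckets; holding a single entry still
        // pays for the whole table, an Entry object and boxed key/value.
        // Multiply that by millions of sparsely filled maps and the overhead
        // starts to dominate the heap.
        Map<String, Integer> sparse = new HashMap<String, Integer>();
        sparse.put("answer", 42);

        // When the cardinality is known up front, size the collection to fit:
        List<String> tight = new ArrayList<String>(1);
        tight.add("only element");
    }
}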

JVM Support for Multitenant Applications - Steve Poole (IBM)

Another IBM talk, this time from Steve Poole, obviously a guy that knows a lot about JVMs. The talk was poorly attended, I thought, given that there is a fair bit of hype about multitenancy. In traditional IBM fashion the demo was a bit more screencast than real demo, but there's some interesting technology here. Essentially a multi-tenant JVM is a JVM that can host multiple applications, with each application thinking it has the JVM to itself: virtualisation for the JVM. So virtual JVMs on real JVMs, potentially on VM hypervisors, as if your head didn't hurt enough already.

We have some experience with multi-tenancy through working with Waratek although I think the multitenancy space is only really of much interest to large scale Java hosting providers such as PAAS providers where there are some real economies of scale.

Past, Present and Future of Data Processing in Apache Hadoop - Chris Harris (Hortonworks)

Chris and I go back a few years, since his days at Red Hat, so it was good to catch up with him and with the direction Hadoop is taking. C2B2 have some experience working with Hadoop and it's an area we track closely, even if we don't market our capabilities too strongly yet. Chris gave a great summary of what's arrived in the new Hadoop 2.0 GA release.

The key point to take away for me is that Hadoop 2.0 essentially builds a whole pile of middleware on top of HDFS that supports other processing models over and above standard MapReduce jobs. The introduction of YARN ("Yet Another Resource Negotiator") now enables more interactive workloads, supporting the move away from pure batch applications. This is a very interesting development as it further blurs the line between in-memory data grids and Big Data stores like Hadoop. With YARN we now have application containers (JVMs) managed by node managers on each storage node. With Hadoop 2.0 we essentially now have a big middleware infrastructure to manage.

Perhaps it's time for C2B2 to classify Hadoop as middleware!

Run Your Java Code on Cloud Foundry - Andy Piper (Pivotal)

Andy did a slick demonstration of Cloud Foundry, the Pivotal PAAS environment. I have to admit PAAS environments at this stage in their development leave me a little cold. There are a lot of demos pushing trivial web applications that use a simple database into the cloud. It is slick and fast, but it's not rocket science, and I always have the sneaking suspicion that life just isn't that simple for complex applications with complex dependencies between multiple components.

Streams and Things - Darach Ennis (Ubiquiti Networks)

Darach's a clever guy and he can talk! This was a whizz-bang keynote to demonstrate the "Internet of Things". Darach demonstrated flying a quadcopter via his Mac, possibly using Scratch (though that wasn't clear), and it was good whizz-bang entertainment. Not sure I learnt anything, but it was good fun after lunch.

Vert.x 2 - Tim Fox (RedHat)


We have some customers running Vert.x in production and it's a neat framework. The talk was again pretty thinly attended, which surprised me given the attention Vert.x has had in the press. Tim showed some neat demos of Vert.x and emphasised that it's a lightweight async programming platform which can be used from many languages. We are also hoping to get Tim to present Vert.x for us at JBUG. Tim's always a very knowledgeable presenter and he gave a great overview of Vert.x. From my point of view I'm still trying to understand where to position Vert.x architecturally with customers. Essentially Vert.x is a lightweight non-persistent event bus where little pieces of processing (verticles), written in many languages, execute in response to events received on topics they subscribe to. Vert.x also provides a lightweight HTTP server, and the event bus can be extended to web browsers over websockets. Neat, but revolutionary?
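To make the verticle model concrete, here is a minimal Vert.x 2 verticle in Java (a sketch; the event bus address is made up):

import org.vertx.java.core.Handler;
import org.vertx.java.core.eventbus.Message;
import org.vertx.java.platform.Verticle;

// A verticle that subscribes to an event bus address and replies to each
// message it receives. Run with: vertx run EchoVerticle.java
public class EchoVerticle extends Verticle {

    @Override
    public void start() {
        vertx.eventBus().registerHandler("demo.echo", new Handler<Message<String>>() {
            @Override
            public void handle(Message<String> message) {
                message.reply("echo: " + message.body());
            }
        });
    }
}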


WebLogic 12c Does WebSockets - Getting Started


With the release of WebLogic 12.1.2, websocket support has come to WebLogic. In this blog post we'll show you how to write a simple websocket echo example, just to get you started.

Unfortunately the API released with WebLogic is not the JEE7 JSR 356 API, which I suspect will come when WebLogic gains JEE7 compliance. On the plus side, it's a pretty simple API.

In this blog I'm using NetBeans 7.4 and I've started off by creating a basic web project and I've called it Echo.



Next I create a new Java class called Echo in the Echo project.


To make this class a websocket endpoint in WebLogic, I add the @WebSocket annotation to the class and ensure it extends WebSocketAdapter.

Adding the @WebSocket annotation means that WebLogic will create a websocket endpoint for this class. However, we now need to add some parameters to the annotation to specify the URL pattern the websocket should serve, so our final annotation looks like this:




@WebSocket (
        pathPatterns = {"/echo/*"}
)


There are other parameters you can specify for this annotation, including dispatchPolicy if you wish to use a WebLogic-specific dispatch policy for the threads handling the requests.

Now turning to the code, we need to override methods on the WebSocketAdapter class to provide our functionality. In particular, to echo the message back we need to override onMessage and, within this method, return the message we receive.

The code for this is shown below:

    @Override
    public void onMessage(WebSocketConnection connection, String payload) {
        try {
            connection.send("Echo from WebLogic " + payload);
        } catch (IOException | IllegalStateException ex) {
            Logger.getLogger(Echo.class.getName()).log(Level.SEVERE, null, ex);
        }
    }


This is all we need to do on the server side. You can now package that up as a war and deploy to WebLogic.



To test your websocket handler without having to write any JavaScript, browse to http://www.websocket.org/echo.html and in the Location field type your WebLogic URL, which in my case is ws://127.0.0.1:7001/echo/echo


Click Connect and then Send, and you should see your response echoed back on the screen.

Congratulations, you've just written your first WebSocket application with WebLogic 12.1.2.

The full code for the Echo class is below:

package uk.co.c2b2.demo.weblogic.websockets;

import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import weblogic.websocket.WebSocketAdapter;
import weblogic.websocket.WebSocketConnection;
import weblogic.websocket.annotation.WebSocket;

/**
 *
 * @author steve
 */
@WebSocket (
        pathPatterns = {"/echo/*"}
)
public class Echo extends WebSocketAdapter {

    @Override
    public void onMessage(WebSocketConnection connection, String payload) {
        try {
            connection.send("Echo from WebLogic " + payload);
        } catch (IOException | IllegalStateException ex) {
            Logger.getLogger(Echo.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}







London GlassFish User Group - 'Code-driven Introduction to Java EE7' with Arun Gupta


The second London GlassFish User Group event, organised by C2B2 together with JAXLondon 2013, was part of the conference's Community Night – ‘a beer, wine and code fuelled taster of the main conference, organised in conjunction with UK user groups and sponsors’.

GlassFish and JEE enthusiasts got together on Tuesday evening, at the end of the second day of the conference, to listen to Arun Gupta, one of the most influential people in the Java EE and GlassFish world: previously Oracle's GlassFish and JEE evangelist, now Director of Developer Advocacy at Red Hat.

During the event, our London GlassFish community members had a chance to see Arun’s ‘Code-driven Introduction to Java EE7’ presentation, ask him questions and have a chat with the expert himself and other community members during the networking break with some beers and pizza.

For those of you who missed the event, we've published the video recording of Arun's talk (join London GUG on Meetup so you won't miss any future ones!):









Presentation:


The Java EE 7 platform focuses on boosting productivity and embracing HTML5. JAX-RS 2 adds a new Client API to invoke RESTful endpoints. JMS 2 introduces a new simplified API aligned with improvements in the Java language. The long-awaited Batch Processing API and Concurrency Utilities are now part of the platform, offering richer functionality. A new API for building WebSocket-driven applications, and JSON parsing and generation, are now included in the platform. JavaServer Faces has added support for HTML5 forms. Several other improvements are available in this latest version of the platform. Together these APIs will allow you to be more productive by simplifying enterprise development. This session provides an introduction to the Java EE 7 platform; attendees will learn the design patterns for building an application using Java EE 7.

Bio: 

 

Arun Gupta is Director of Developer Advocacy at Red Hat and focuses on JBoss Middleware. As a founding member of the Java EE team at Sun Microsystems, he spread the love for technology all around the world. At Oracle, he led a cross-functional team to drive the global launch of the Java EE 7 platform through strategy, planning, and execution of content, marketing campaigns, and programs. After authoring ~1400 blogs at blogs.oracle.com/arungupta on different Java technologies, he continues to promote Red Hat technologies and products at blog.arungupta.me. Arun has extensive speaking experience in 35+ countries on myriad topics. An author of a best-selling book, an avid runner, and a globe-trotter, he is easily accessible at @arungupta.


About London GlassFish User Group

 

London GlassFish User Group (GUG) is here to distribute GlassFish related knowledge and provide a meeting place for GlassFish users to get information, share resources and solutions, increase networking, expand GlassFish Technology expertise, and above all - drink beer, eat pizza and have fun.
As a user group we encourage people to volunteer to talk about their experiences with any GlassFish Community project. We aim to create the GlassFish community and to share real world experiences with other members of the community. London GUG is organised and sponsored by C2B2 Consulting.



Dominika Tasarz
C2B2 Marketing Manager & London GUG Co-organiser

Oracle Dropping Commercial Support of GlassFish : My View

Oracle have just announced that commercial support for GlassFish 4 will not be available. In light of this announcement I thought I would put together some thoughts on how I see this development.

I think the key word in this announcement is "commercial"; nowhere does Oracle announce the "death of GlassFish". On the contrary, Oracle reaffirm:
GlassFish Server Open Source Edition continues to be the strategic foundation for Java EE reference implementation going forward. And for developers, updates will be delivered as needed to continue to deliver a great developer experience for GlassFish Server Open Source Edition
so GlassFish is not about to go away any time soon. In a similar fashion, Red Hat do not provide commercial support for WildFly; they only provide commercial support for JBoss EAP. Admittedly JBoss EAP and WildFly are much closer together than GlassFish and WebLogic, but WildFly and JBoss EAP are absolutely NOT the same thing.

The key to the viability of GlassFish as a production platform going forward is how the GlassFish community develops:

  1. How often does the community release binary builds?
  2. How open is the community to bug fixes?
  3. How much engineering resource does Oracle commit to GlassFish?
At this stage we just don't know the answers to these questions. 

If the GlassFish open source project continues on its current trajectory without a commercial support offering, then I don't see much of a problem. Oracle just have to work harder to sell migration paths to WebLogic, in the same way that Red Hat have to sell migration paths from WildFly to JBoss EAP.

In the meantime C2B2 continues to offer support for your operational JEE applications running on GlassFish, and we will endeavour to work with the community to get any bugs fixed. The key difference is that we can no longer back our Expert Support with a support contract from Oracle for patches and fixes for any release later than 3.x.




MIDDLEWARE INSIGHT - C2B2 Newsletter Issue 12

MIDDLEWARE INSIGHT
at the Hub of the Middleware Industry
delivered to you by C2B2 Consulting - The Leading Independent Middleware Experts
Dear Middleware User,

Welcome to another issue of Middleware Insight - a newsletter that brings you only the most recent, most important and most interesting news from the middleware industry.

If you are an active member of the Java community and you follow Java-related discussions on social media, you probably know that the last couple of days have been all about GlassFish, as Oracle recently announced that commercial support for GlassFish 4 will no longer be available. In this issue you will find a number of links to interesting articles and opinion pieces about this big announcement, written by various industry experts - read on to find out more!

In addition to the GlassFish support news, we also give you our usual overview of what the vendors are up to, bringing you links to articles about WebLogic and WebSockets, JBoss WildFly and JBoss Fuse 6.1, GemFire and more, as well as some interesting videos such as the 'Java Middleware Surgery' webinar and 'Code-driven introduction to Java EE 7' by Arun Gupta. We've also added a new section covering news related to Amazon Web Services.

If you have any questions, feedback or suggestions about the newsletter – please don’t hesitate to contact us at info@c2b2.co.uk
Many thanks
C2B2 Consulting Team


OPEN SOURCE & JAVA
Commercial Support for GlassFish 4 No Longer Available
Java EE and GlassFish Server Roadmap Update – read the main announcement on The Aquarium blog
Oracle Dropping Commercial Support of GlassFish – read the article by Steve Millidge
GlassFish became a killer appserver, now it is just great: Oracle drops commercial support for GlassFish, read the article by Adam Bien 
R.I.P. GlassFish - Thanks for all the fish – read the article by Markus Eisele 
Want a support contract for GlassFish 4.0? Tough luck, says Oracle, read more on The Register
“Oh Lord, won’t you buy me a Mercedes Benz” (or RIP GlassFish), read the article by Antonio Goncalves
JaxLondon
JAXLondon 2013 Impressions by Steve Millidge, read more here
"Has Java peaked?": What we learnt at JAX London day one, read more on the Jaxenter website 
"We have a responsibility to make IoT accessible": What we learnt at JAX London day two, read more on the Jaxenter website 
Luxembourg JUG and JAX London Report by Arun Gupta, read more on the blog 

Other Java & OpenSource News
'Java Middleware Surgery' Interactive Webinar - now available to watch on demand
Tomcat 7.0.47 Released, find out more and download here  
Apache Hadoop release 2.2.0 available, find out more and download here 
Apache ActiveMQ 5.9: One of the Strongest ActiveMQ Releases, read more on the Dzone website 
10 ways to contribute to an open source project without writing code, read the article by Heiko Rupp here 
I crashed the New Zealand stock exchange: 13 terrifyingly true developer horror stories, read more in the Jaxenter Halloween feature 
Making sweet music with the latest issue of JAX Magazine, download the magazine here 
JavaOne 2013 - Part 2, read the blog post by Jaromir Hamala
London GlassFish User Group event: 'Code-driven introduction to Java EE 7' with Arun Gupta, watch the video here  
JMS @Data Grid? Hacking the Glassfish messaging for fun & profit, see the presentation slides here
51 holes plugged in latest Java security update, read more on the Jaxenter website 

ORACLE
WebLogic 12c Does WebSockets - Getting Started, read more on the C2B2 Blog 
Oracle OpenWorld 2013 Summary, read more on the SOA Community blog
Oracle delivers a contradictory verdict on open-source, read the article by Lucy Carey   
UKOUG Application Server & Middleware SIG Meeting 2013 – read the event overview and watch the SOA Suite 11g presentation slides by Matt Brasier

JBOSS & RED HAT
Getting Started with WildFly (TechTip #1), read more on Arun Gupta’s blog
A sneak peek at what’s coming in JBoss Fuse 6.1, read more on James Strachan’s blog 
Wildfly 8 and JSR 236, read more on Eduardo Martins’ blog
Arun Gupta on his move to Red Hat, watch the video here  
JBoss BPM Suite - get rocking with the all new Mortgage Demo, read more on Eric Schabell’s blog 
Red Hat JBoss A-MQ and the IoT, read the article by Kenneth Peeples 
Enhancing your JBoss Integration with JBoss BRMS in Practice, read more on Eric Schabell’s blog
Red Hat Partner Conference 2013 Impressions, read the blog post by Matt Brasier 


VMware & PIVOTAL
Pivotal’s 2 New Big Data Advancements: GemFire XD and Data Dispatch, read more on the Pivotal Blog 
Adding Years to Your RDBMS by Scaling with Spring and NoSQL, read more on the Pivotal Blog
New Release: RabbitMQ 3.2.0 with Federated Queues, find out more here 


SOA 
SOA strategies for handling big data, beating latency, watch the interview with Jaromir Hamala on SearchSOA
What you need to consider before modernizing legacy apps, read the article by Maxine Giza
Solution architects' evolving role and Agile development methodology, read the article by Crystal Bedell 


AMAZON WEB SERVICES
AWS breaks billion-dollar barrier, crushes competition, read more on Jaxenter 
Start to finish, ADF Essentials deployment on Amazon EC2, read the article here  
JMS-style selectors on Amazon SQS with Apache Camel, read the article by Christian Posta 
C2B2 Contributes to UCAS' Success – read the case study here 
C2B2 Develop Their Partnership with Amazon Web Services, find out more  

EVENTS 
Devoxx – 11-15 November in Antwerp, Belgium - find out more and register here
DOAG 2013 - Conference + Exhibition, 19-21 November in Nuremberg, Germany – C2B2 is Speaking – find out more and register here
jDays 2013 - 26-27 November in Gothenburg - C2B2 is Speaking - find out more and register here  
UKOUG Tech13 Conference, 1-4 December in Manchester - C2B2 is Speaking - find out more and register here
London JBoss User Group, 4th December in London – ‘What’s New in Infinispan 6.0’ by Mircea Markus – find out more and register here 
Oracle Coherence 12c: Free Hands-on Technical Workshop, 12th December in London,  find out more and register here 


SearchSOA Video Interviews With Jaromir Hamala

SearchSOA
SOA strategies for handling big data, beating latency

Increased demand for big data management capabilities is spurring advances in Java middleware, and middleware in general, according to Jaromir Hamala, middleware consultant for C2B2 Consulting Ltd. In this SearchSOA video interview, he gives strategies for handling big data and avoiding latency.


Latency, garbage collecting, SOA today

In this SearchSOA video interview, Jaromir Hamala, middleware consultant for C2B2 Consulting Ltd. in the UK, gives advice on strategies for modernising middleware and for garbage collection, as well as the causes of, and cures for, latency. He also discusses the future of service-oriented architecture. At JavaOne 2013 in San Francisco, he was a presenter in the session on the GlassFish Community.

How to control *all* JTA properties in JBoss AS 7 / EAP 6

JBoss AS 7, and therefore EAP 6, comes with a great simplification of its configuration. Depending on your setup you can use either standalone.xml or domain.xml to configure everything. One file to rule them all! Well, almost.

If you need fine-grained tuning you might end up in a situation where a configuration option is not exposed via JBoss' subsystem configuration and you have to use a different approach. For some reason I had to set the xaTransactionTimeoutEnabled property of the JTA configuration bean to false. This property is true by default, which means that when a resource is enlisted in a transaction the JBoss Transaction Manager calls XAResource.setTransactionTimeout(), passing it the global JTA timeout value. Some (broken) resource managers really don't like this, and it can result in a heuristic ending of the transaction. Unfortunately I couldn't find a way to control this property via the transactions subsystem, so I had to find an alternative - or hack, if you like.



It's surprisingly easy: 

1. Create a jta.xml file with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
  <entry key="com.arjuna.ats.jta.xaTransactionTimeoutEnabled">false</entry>
</properties>
The file must be readable by the JBoss server.

2. Create a system property with the key "com.arjuna.ats.arjuna.common.propertiesFile", whose value is the path to the jta.xml file created in the previous step. I created my property at the server group level, but I assume it would work at any level.
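For example, on a standalone server the equivalent can be added to standalone.xml (the file path here is illustrative):

<!-- in standalone.xml, immediately after the <extensions> section -->
<system-properties>
    <property name="com.arjuna.ats.arjuna.common.propertiesFile" value="/opt/jboss/config/jta.xml"/>
</system-properties>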

3. Restart the server

4. Profit!


Be aware that most of the properties will be overwritten by the JBoss Transactions subsystem. You should really treat this as a last resort for when the subsystem does not provide a better configuration mechanism. You can look here to get an idea of how configuration via the subsystem works.


Poison Null Byte and the importance of security updates

Let's start with a small quiz. Can we trust that this code snippet will open files with the .jpg extension only?
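A sketch of the sort of check in question (the names are illustrative):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ImageServer {

    // Serve only .jpg files... or so we think.
    public InputStream openImage(String filename) throws IOException {
        if (!filename.endsWith(".jpg")) {
            throw new IllegalArgumentException("Only .jpg files are allowed");
        }
        return new FileInputStream(filename); // the extension check is the only guard
    }
}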

As the saying goes: any headline which ends in a question mark can be answered by the word NO. How is that even possible? The code has various flaws, but it does appear to always check that the path ends with the .jpg extension, so what can go wrong? Well, there is a mismatch between Java and C in the way they handle strings. In Java the java.lang.String class is implemented as an array of char, where the length of the string is determined by the length of the array. (It's slightly more complicated in older updates of Java, but the point is that the content of the array is not used to determine the length of the string.) So, how do strings work in C? In C a string is just a pointer to a contiguous block of memory where a null character indicates the end of the string.

So, what happens when evil Bob passes us this input?

String filename = "top_secret.nsa\0pwned.jpg";

The expression filename.endsWith(".jpg") will return true, as the input string really does end with .jpg. Assuming the file exists and is readable by the user, the code then creates an instance of FileInputStream. This is where the Java/C mismatch kicks in: FileInputStream will eventually call the open() function from the C standard library and pass it a pointer to the filename. How does the C standard library see the filename? Well, it starts reading it from the start, character by character, until it reads the byte 0. Byte 0 is a string terminator from the C point of view, and therefore the rest of the string is ignored! The C library opens the file called top_secret.nsa and returns its file handle. Now the Java code will use the handle to read the content of the top_secret.nsa file, which was never meant to be revealed! This attack is known as a Null Byte Attack or Null Byte Poisoning. The attack is not specific to Java, but in my experience most Java developers are not familiar with it.



The snippet of code shown above is admittedly very naive, but the Null Byte Attack can be more sophisticated and can surprise you where you wouldn't expect it. I guess everyone knows the Apache Commons FileUpload library - it's used in virtually all applications where users submit files via HTTP. It contains a serialisable class, DiskFileItem, which may create a file during deserialization. The name of the file is generated randomly, but its location (directory) and content are determined by the serialized data. This means you can null-poison the directory name and the randomly generated part will be ignored!

Demo




If you happen to have an application which deserialises data submitted by a user, you have a problem. Now again, it might sound silly: who on earth lets users insert arbitrary data to be deserialised?! I know of at least one well-known and commonly used server-side application which does exactly that - I have reported it to the vendor, so hopefully it will be fixed soon.


And now the good part: Java 7u40+ has built-in protection against this attack. It calls this method before opening a file:
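(The method below is paraphrased from the OpenJDK java.io.File source; the exact shape varies between updates.)

    // java.io.File, OpenJDK 7u40+: any path containing an embedded NUL
    // character is flagged as invalid before the native open() is reached.
    final boolean isInvalid() {
        if (status == null) {
            status = (this.path.indexOf('\u0000') < 0) ? PathStatus.CHECKED
                                                       : PathStatus.INVALID;
        }
        return status == PathStatus.INVALID;
    }

    // ...and the FileInputStream constructor refuses to proceed:
    if (file.isInvalid()) {
        throw new FileNotFoundException("Invalid file path");
    }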

So, what's the moral of this story? There have been quite a lot of security updates to Java in the last year. The common wisdom says 'security issues are on the client side', but as you can see your server side can benefit from the latest updates as well! So don't wait: update your server-side Java VM! If you are still using Oracle Java SE 6 and you don't have a support contract with Oracle, then you should migrate off it as soon as possible.


PS: The DiskFileItem class was fixed in trunk a few months ago.

References:
https://access.redhat.com/security/cve/CVE-2013-2186
http://hakipedia.com/index.php/Poison_Null_Byte



C2B2 at DOAG German Oracle User Group Conference 2013

Mike Croft, Matt Brasier and Steve Millidge spoke at DOAG - the German Oracle User Group Conference - in Nuremberg last week (19-21 November). See below for the details of their talks and presentation slides.

GlassFish 4 on Ubuntu Touch: Adventures in Hacking JEE on a phone by Mike Croft & Steve Millidge

 

Mobile devices are getting more and more powerful, and JEE servers are getting faster and smaller. This convergence means it is now possible to run GlassFish 4 on a phone platform. In this session we'll show you how to get GlassFish up and running on Ubuntu Touch and communicating with a back-end GlassFish server cluster, enabling you to combine full JEE on client device and server.

The presentation will mainly be a live demo of GlassFish 4 on a phone communicating with a cluster of GlassFish servers in Amazon EC2. There will be slides to talk through the demo architecture, and code snippets showing the latest JEE7 websocket APIs being used to communicate from device to server.

We will first install Java and GlassFish on the phone. We will then demonstrate developing a quick HTML 5 JEE web application to retrieve data from a GlassFish cluster using WebSockets. We'll deploy this to the phone and demonstrate it running on Ubuntu Touch.


GlassFish 4 on Ubuntu Touch: Adventures in Hacking JEE on a phone from C2B2 Consulting

Oracle Coherence and WebLogic 12c Web Sockets: Delivering Real Time Push at Scale by Steve Millidge

 

The real-time web is coming, with websockets in HTML5. The big question is how to deliver event-driven architectures for websockets at scale. This session, delivered by a member of the JSR 347 Data Grids expert group, provides an insight into how combining Oracle Coherence with the new websockets support in WebLogic 12c can deliver enterprise-scale push to web devices. The session first provides an introduction to websockets, then delves into typical Oracle Coherence architectures and how they deliver linear scalability and high availability. We then look at the event capabilities inherent in Oracle Coherence which, when hooked up to the new WebLogic 12c Web Sockets server, can deliver Coherence grid updates in real time to HTML5 devices.

The presentation will be a mixture of animated graphical slides depicting how WebLogic Web Sockets and Oracle Coherence work, combined with code snippets. We will then provide a demo of the described architecture, hosted on Amazon EC2, for delegates to browse to and interact with, showing the capabilities of websockets on their devices. Demos will again use Oracle Coherence and WebLogic 12c.

 

Oracle Coherence & WebLogic 12c Web Sockets: Delivering Real Time Push at Scale from C2B2 Consulting


Through the JMX Window Hands-on Labs by Matt Brasier

 

This lab will demonstrate the depth and breadth of the information available via the JMX API. We will use the JVM tools to peer deep into the workings of the JVM and understand how to identify and solve common performance bottlenecks. Attendees will get hands-on experience of using tools like VisualVM and jstat to interrogate the JVM, and learn how to interpret the data returned.


This lab will be a hands-on session that allows attendees to understand the power available to them in some of the overlooked core JVM tools (jstack, jstat, VisualVM). The session will use a combination of slides and examples that attendees can code along with on their own laptops, and will focus primarily on how the tools can be used to identify performance bottlenecks, although we will also look at how you can expose your own application MBeans and use these to monitor the application.

In my work as a Java performance consultant, I have found that JMX, and the basic JVM tooling that uses it, is not well understood by developers, so this session is about raising awareness of these tools and allowing developers to get inside the JVM and their application to understand how it works (and write better code). In my experience, once developers understand the power of JMX and VisualVM they find it very interesting, and often the best way to demonstrate it is by letting people work with it. The fact that the base JDK is all that is required means there are few prerequisites for this session, and because it is based on a low-level technology it is of interest to people working on all aspects of Java. I think this makes for a popular talk which will help developers understand the magic and power behind the Java Virtual Machine.


 

Oracle SOA Suite Performance Tuning by Matt Brasier


When Oracle SOA Suite is underpinning your business integration, it had better be fast. Performance problems in SOA Suite can therefore have major effects across your enterprise architecture. In this talk we will review the tools available to detect, triage, diagnose and fix performance problems in production Oracle SOA Suite environments. We will demonstrate tools for deep-dive investigation into the performance envelope of SOA Suite through all layers of the stack: the JVM, WebLogic and Oracle Service Bus, through to BPEL and BPM.


UK Oracle User Group Tech13 Conference Impressions


I had the honour of presenting at the UK Oracle User Group's Tech13 conference last week in Manchester. This year the UKOUG decided to do something slightly different and, rather than running one conference that tries to cover every Oracle technology, split it across two conferences. The Tech13 conference was therefore focussed on the Oracle database and Fusion Middleware components of the stack (with Apps13 focussed on Oracle apps). Doing it this way meant the conference promised to be suitably technical, and that there should be plenty of interesting sessions for everyone attending.

The venue was Manchester Central exhibition centre, which is conveniently located in the centre of Manchester (fancy that!). This made getting there easy, even travelling late in the evening, and there were plenty of hotels within a couple of minutes' walk. The conference ran from Monday to Wednesday (with a pre-conference day on Sunday), but due to existing customer commitments I was only able to take one day out of my schedule to present a talk on Oracle SOA Suite performance tuning. The talk was the same one given at DOAG a couple of weeks ago, although slightly longer, extended from 45 minutes to a full hour (see the bottom of the page for my presentation slides).

My slot was in the second stream of the day and was pretty well attended, with approximately 30-40 people, although the room could easily have held 150. The presentation went down well and people asked sensible questions (always a good sign that they have been listening), so many, in fact, that I didn't get time to answer them all.

I then scanned the programme to find other talks worth attending. As this conference was quite focussed, there was actually a lot of choice of interesting-looking talks, and I had to choose between trying to find out about new technologies or learning more about technologies I already know. In the end I attended talks on ADF tuning, cloud provisioning of Fusion Middleware, and the Oracle Database Appliance (and running WebLogic on it). I rounded out my day with a roundtable discussion about Oracle SOA Suite, which focussed on the difficulties organisations have in recruiting the right people to maintain a SOA Suite infrastructure, and the fact that it often gets lumped onto the DBA team because it has Oracle in the name.

Overall this conference seemed well organised and well attended, with a good range of technical talks from good speakers. It would have been nice to be able to spend the full three days there. The schedule was certainly packed and there was lots to choose from; if anything it was a little too packed, and because of the varying lengths of talks it was often hard to see everything I wanted without leaving talks halfway through. I am not sure I would change this though, as it would probably mean fixing the lengths of all sessions and being less flexible about who can talk about what, and one of the great things about Tech13 was the depth and breadth of the talks available.



MIDDLEWARE INSIGHT - C2B2 Newsletter Issue 13


MIDDLEWARE INSIGHT
At the Hub of the Middleware Industry

Welcome to the December issue of Middleware Insight - a newsletter that brings you only the most recent, most important and most interesting news from the middleware industry.
As the Christmas Holiday season is coming, we would like to take this opportunity to wish you a very Happy Holiday. May the New Year be filled with much joy, happiness and success!  
If you have any questions, feedback or suggestions about the newsletter – please don’t hesitate to contact us at info@c2b2.co.uk
Many thanks
C2B2 Consulting Team

JAVA & OPEN SOURCE
Tomcat worm puts servers under attacker’s remote control, read more on Jaxenter 
Are configuration files documentation?...Yes!!! TomEE + Asciidoctor example, read the article by Alex Soto 
How-to: Custom error pages in Tomcat with Spring MVC by Rafal Borowiec, read more here
Apache redraws battle lines against Oracle licensing, read more on the Jaxenter website
Oracle GlassFish, or Why You Should Think About Open Source Again, read the article by Lukas Eder 
London GlassFish User Group January Event - 'Come and Play! with Java EE 7’ by Antonio Goncalves, find out more and register here
Ten fun Raspberry Pi projects: JAXenter’s pick of the crop, read more 
Poison Null Byte and the importance of security updates, read the article by Jaromir Hamala 
Diabolical Java Performance Tuning, read the article by Ben Evans and Martijn Verburg 
Optimized WebSocket broadcast, read the blog post by Pavel Bucek 
Hunting Memory Leaks in Java, read the article by Jose Ferreira de Souza Filho
Marcus Lagergren: The JVM is dead! Long live the polyglot VM, read more on Jaxenter 
Arun Gupta: Java EE 7 Platform - Boosting productivity and embracing HTML5, watch the video here 
‘Non-functional benefits of scaling your web tier using Data Grids’ - see the JDays Conference presentation slides by Mark Addy 
Devoxx 2013
Devoxx 2013 Presentations are now available to watch online via Parleys 
Devoxx '13 recap - read more on the JBoss Community website 
Coding at Internet of Things (IoT) Hack Fest, read more on the Oracle blog  
Devoxx 2013: A Retrospective, read the article by Steve Schols 
Back from Devoxx, read the blog post by José Paumard  
A Shortened Visit to Devoxx 2013, read the article by Peter Pilgrim 


RED HAT & JBOSS
JBoss EAP 6.2 is now available: RBAC, patching, administrative audit logging, see more here 
Tim Fox: Introducing Vert.x 2.0 - Taking polyglot application development to the next level, watch the JaxLondon conference presentation here 
How to control *all* JTA properties in JBoss AS 7 / EAP 6, read the blog post by Jaromir Hamala 
What's New in Infinispan 6.0 by Mircea Markus – watch the last London JBoss User Group presentation  
JBoss Fuse 6.1 + HawtIO Part I, read more on the Java Code Geeks website  
JBoss Drools unit testing with junit-drools, read the article by Maciej Walkowiak  
Add Apache Camel and Spring as JBoss modules in WildFly, read the article by Adrianos Dadis 
CVE-2013-4810: a(nother) Hack that Needn’t Happen, read the article by Justin Pittman 
5 Predictions for 2014 from Red Hat’s CTO, read more on the Red Hat website  

ORACLE
Oracle evangelist: “GlassFish Open Source Edition is not dead”, read more on Jaxenter.com  
UK Oracle User Group Tech13 Conference Impressions, read the blog post by Matt Brasier 
DOAG German Oracle User Group Conference 2013 – see presentation slides by C2B2 
Oracle Invites Community to Weigh-In on Java EE 8, find out more 
RESTful GlassFish Monitoring and Management, read the article by Adam Bien 

 VMWARE & PIVOTAL
Pivotal aims to level development playing field with new release, read more on Jaxenter.com 
Have you seen Spring lately? - read more on the Pivotal blog  
WebSocket architecture in Spring Framework 4.0 - read more on the Pivotal blog 
Pivotal + Capgemini partner in a big way on Big Data, another signal for Enterprises that Big Data is ready for them, read more here 

AMAZON WEB SERVICES
Amazon re:invent roundup, read the article by Chris Swan on the InfoQ website 
Where are my AWS Logs? Read the blog post by Jeff Wharton
How to dynamically pick up logs when scaling your Amazon Web Services EC2 environment, read the article by Benoit Gaudin  
Building Distributed Workflow Applications on Amazon with Camel, read more on Bilgin Ibryam’s blog  





Java Middleware 2014 Predictions - Steve Millidge for The Server Side (by TechTarget)

It's around this time of year that people ask you to make predictions about what'll happen in the middleware market in the New Year. There's something about Christmas time that makes you throw away your usual caution and allow yourself to make predictions about technology futures, something fraught with danger and the potential for personal humiliation a year from now.

Well, here go my predictions for Java middleware in 2014:

1. One of the major events in Java middleware in 2013 was the launch of Java Enterprise Edition (Java EE) 7 by the Java Community Process stewarded by Oracle. Java EE 7 is a major release of the Java EE platform and further drives forward the initiatives on simplification. Currently, only GlassFish 4 and Tmax's Jeus 8 are Java EE 7-compatible application servers. My prediction for 2014 is that all the major vendors of Java EE application servers will ship Java EE 7-compatible versions of their products, and developers will start their first Java EE 7 applications. I also predict that we will see greater traction for Java EE 7 in new developments, and even the migration of some Spring applications onto Java EE 7.

Click here to read the full article on The Server Side


5 Reasons to use a Java Data Grid in your application

In this post we explore 5 reasons to use a Java Data Grid for caching Java objects in-memory in your applications. In a later post we will explore some of the other data grid capabilities beyond data storage, such as on-grid computation and events, that can revolutionize your Java architectures.

Memory is Fast

Java Data Grids store Java objects in memory. Memory access is fast, with low latency. So if access to data storage, either disk or database, is the primary bottleneck in your application, then using a data grid as an in-memory cache in front of your storage tier will give you a performance boost.
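Most data grid products expose a java.util.Map style view of the grid, so fronting a slow store with the classic cache-aside pattern takes only a few lines (a product-neutral sketch; Customer and CustomerDao are hypothetical):

import java.util.concurrent.ConcurrentMap;

public class CustomerCache {

    private final ConcurrentMap<String, Customer> grid; // a grid-backed map view
    private final CustomerDao dao;                      // the slow storage tier

    public CustomerCache(ConcurrentMap<String, Customer> grid, CustomerDao dao) {
        this.grid = grid;
        this.dao = dao;
    }

    // Cache-aside: try the in-memory grid first; on a miss, fall back to the
    // database and populate the grid for subsequent readers.
    public Customer findCustomer(String id) {
        Customer customer = grid.get(id);
        if (customer == null) {
            customer = dao.load(id);
            grid.put(id, customer);
        }
        return customer;
    }
}

interface CustomerDao {
    Customer load(String id);
}

class Customer {
    // fields omitted for brevity
}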

Scale out your Application Shared State

If you need to share state across JVMs to scale out your application, then using a Java Data Grid rather than a database will increase your scalability. A typical shared-state architecture is shown below: the application server tier stores shared Java objects in the data grid, and these objects are available to all application server nodes in your architecture.

Separating the data grid tier from the application server tier has a number of advantages:
  • Applications can be redeployed and restarted without losing the shared state
  • Data Grid JVMs and Application JVMs can be tuned separately
  • State can be shared across multiple different applications
  • Each tier can be scaled horizontally separately, depending on workload
Typical use cases for shared state include: PCI-compliant storage of card security codes; in-game state in online games; web session data; prices and catalogues in ecommerce. Anything that needs low-latency access can be stored in the shared data grid.

High Availability for In-Memory Data

As well as low-latency access and scaled-out shared state, Java Data Grids provide high availability for your in-memory data. When you store a Java object in a data grid, a primary copy is stored in one of the Data Grid JVMs and secondary backup copies are stored in different Data Grid JVM nodes, ensuring that if you lose a node you don't lose any data.

Clients of the data grid do not need to know where data resides in order to access it, so high availability is transparent to your application.

Scale Out In-Memory Data Volumes

Java objects in data grids aren't fully replicated across all Data Grid JVMs but are stored as a primary copy plus backup copies. This means the more Data Grid JVM nodes we add, the more JVM heap we have for storing Java objects in-memory (and remember, memory is fast).
For example, if we build a Data Grid with 20 JVMs, each with 4Gb of free heap (after per-JVM overhead), we could theoretically store 80Gb (4 x 20) of shared Java objects. If we assume one backup copy for high availability, this cuts our storage in half, so we can store 40Gb (0.5 x 4 x 20) of Java objects in memory.

Native Integration with JPA

Java Data Grids integrate natively with JPA frameworks like TopLink and Hibernate, whereby the Data Grid acts as a second-level cache between JPA and the database. This can give a large performance boost to your database-driven application if the latency associated with database access is a key performance bottleneck.
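At the JPA level, the entity side of second-level caching is just an annotation; the provider- and grid-specific wiring lives in persistence.xml and varies by product (a minimal JPA 2.0 sketch):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

// Marks the entity as eligible for whatever second-level cache the JPA
// provider has been configured with, for example a data grid.
@Entity
@Cacheable
public class Product {

    @Id
    private Long id;

    private String name;

    protected Product() {
    }

    public Long getId() {
        return id;
    }

    public String getName() {
        return name;
    }
}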



Who’s pulling the strings when it comes to provisioning IT systems, Puppet or Chef?


As IT systems become larger and require more frequent updates to leverage new business opportunities and maintain a competitive edge, the time to deploy these updates, and the outage they cause, needs to be kept to a minimum. One way to improve delivery time and reduce outages is to automate the deployment of systems.

The current approach to provisioning applications, particularly in Linux environments, uses a combination of custom shell scripts, RPMs and ssh to configure and deploy applications to multiple host environments. This presents a major challenge when managing an ever-increasing number of servers, as manually driven administration results in a myriad of machine configurations, all of which are required to comply with internal policies for security, performance and legal standards.

Server proliferation is a reality that will affect most companies in the future with the introduction of cloud-based technologies, leading to systems requiring servers, physical or virtual, numbering in the thousands if not tens of thousands. The management of servers on this scale cannot be achieved effectively using a manual process; with an increasing number of servers, the demands on system administrators will grow as more time is spent on manual configuration and management. To meet these challenges an automated approach is required, one which allows the software landscape of the server estate to be modelled and stored in a central repository, all under version control, and deployed on demand to a large number of servers in a repeatable, consistent and reliable manner.

Puppet (Puppet Labs) and Chef (Opscode) are the two main products that have been adopted and endorsed by a number of global companies for automating deployment to their large-scale IT environments. Both are well established and have a number of reference sites that demonstrate the capability of the products.

There isn’t much to choose between Puppet and Chef; they provide similar functionality and either one would make a good solution. Puppet is more sysadmin-friendly due to its straightforwardness; Chef, on the other hand, has a more programmer-oriented style, suited to a DevOps approach to provisioning systems. Both products are available as open source or as licensed versions offering enhanced features and support as an incentive to buy. Puppet has been on the market longer than Chef, so currently has a larger customer base, although Chef is gaining market share with the release of version 11, which has been rewritten in Erlang for increased performance and scaling. Both are supported by a large community base where a large number of common reusable resources are available, helping to reduce the amount of code that needs to be written.

Puppet and Chef both have architectures that scale up, and use a client-server model, where the client performs the bulk of the processing on the node they run on, only communicating with the server to retrieve the data objects and code they require to apply the configuration changes to the node.  In both cases the communication is based on HTTPS and secured using certificates to authenticate and authorize the connections between the client and the server.

Puppet uses a Domain Specific Language to define resources and configuration. It provides dependency management, so the order in which resources are defined is not important. On the positive side the language is intuitive to learn and use, but being proprietary it has the disadvantage that it's not as flexible as a procedural language. Chef, on the other hand, defines its cookbooks and recipes procedurally using Ruby. This offers a richer, more flexible and powerful environment in which to create cookbooks and recipes, but has the downside that the order in which the code is written matters, as it defines the dependencies.

Chef has the edge over Puppet in terms of architecture, providing a simpler and more cohesive integration of the components making up the system. Chef was developed by people from a DevOps background, so some of the concerns associated with development have been addressed. This shows in the Knife tool, which provides a better environment for writing and managing resources, and in the versioning of cookbooks in the Chef server's repository. Chef stores cookbooks in the Chef server's repository against a version, and a node, via an Environment, can reference specific versions of a cookbook in its run list, allowing a more versatile approach. Puppet, on the other hand, doesn't support the concept of versioning for its modules and applies the latest versions loaded from the manifest path configured on the Puppet Master, so it relies on version control software to provide versioning.

Puppet stores configuration data as files on the Puppet Master, which for large complex systems can prove cumbersome and hard to manage. Chef stores configuration data as attributes in cookbooks, roles, environments and data bags, with rules for precedence. The attributes can be created using the Chef console or as JSON files which are then uploaded to the Chef server. Chef therefore provides a more flexible approach to defining, storing, retrieving and managing configuration data, which is very important when managing large, complex infrastructures.

Alan Fryer
C2B2 Principal Consultant

Connecting JBoss WildFly 7 to ActiveMQ 5.9


We recently had the same question from a number of customers - how could they create a bridge between the HornetQ JMS implementation running in JBoss WildFly 7 and a stand-alone ActiveMQ server? ActiveMQ has always been a solid choice as a stand-alone message broker, and with Red Hat having purchased FuseSource and now peddling their JBoss A-MQ version of ActiveMQ, this question is more relevant than ever.

It is of course possible to avoid bridging and just directly expose the JMS queues from ActiveMQ into JBoss, but this has the disadvantage that if the ActiveMQ server goes down for any reason, JMS producers running in JBoss will start to fail, and consumers will lose their connections and need to reconnect. A much better architecture is to have producers inside JBoss enqueue to a local (HornetQ) queue, and then bridge these messages to the external ActiveMQ broker. With this architecture, producers can continue to enqueue messages while ActiveMQ is down, and when it comes back the messages will be transferred from HornetQ to ActiveMQ.


The steps to configure a bridge are actually quite simple. We need to do the following things:

1. Download the ActiveMQ resource adapter archive

2. Install and configure the resource adapter in WildFly 7

3. Create a local JMS queue in the embedded HornetQ instance in WildFly 7

4. Create a JMS bridge between the local queue and the remote ActiveMQ queue.

To get started, I downloaded the ActiveMQ resource adapter from http://repo1.maven.org/maven2/org/apache/activemq/activemq-rar/

The next step is to create a JBoss module for the ActiveMQ resource adapter, with the following commands:

mkdir modules/system/layers/base/org/activemq/main
cd modules/system/layers/base/org/activemq/main
unzip ~/activemq-rar-5.9.0.rar
 
This will create a directory in the modules hierarchy with the necessary structure and extract the ActiveMQ resource adapter files into it. JBoss only supports expanded resource adapters as modules, so we extract the contents of the archive. We also need to create a module.xml in the activemq/main directory with the following contents. This tells JBoss which jar files should be loaded and which classes should not be shared with other modules (we don't want to clash with other implementations of any of the libraries).
<module xmlns="urn:jboss:module:1.1" name="org.apache.activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <resources>
    <resource-root path="."/>
    <resource-root path="activemq-broker-5.9.0.jar"/>
    <resource-root path="activemq-client-5.9.0.jar"/>
    <resource-root path="activemq-jms-pool-5.9.0.jar"/>
    <resource-root path="activemq-kahadb-store-5.9.0.jar"/>
    <resource-root path="activemq-openwire-legacy-5.9.0.jar"/>
    <resource-root path="activemq-pool-5.9.0.jar"/>
    <resource-root path="activemq-protobuf-1.1.jar"/>
    <resource-root path="activemq-ra-5.9.0.jar"/>
    <resource-root path="activemq-spring-5.9.0.jar"/>
    <resource-root path="aopalliance-1.0.jar"/>
    <resource-root path="commons-pool-1.6.jar"/>
    <resource-root path="commons-logging-1.1.3.jar"/>
    <resource-root path="hawtbuf-1.9.jar"/>
    <resource-root path="spring-aop-3.2.4.RELEASE.jar"/>
    <resource-root path="spring-beans-3.2.4.RELEASE.jar"/>
    <resource-root path="spring-context-3.2.4.RELEASE.jar"/>
    <resource-root path="spring-core-3.2.4.RELEASE.jar"/>
    <resource-root path="spring-expression-3.2.4.RELEASE.jar"/>
    <resource-root path="xbean-spring-3.14.jar"/>
  </resources>
  <exports>
    <exclude path="org/springframework/**"/>
    <exclude path="org/apache/xbean/**"/>
    <exclude path="org/apache/commons/**"/>
    <exclude path="org/aopalliance/**"/>
    <exclude path="org/fusesource/**"/>
  </exports>
  <dependencies>
    <module name="javax.api"/>
    <module name="org.slf4j"/>
    <module name="javax.resource.api"/>
    <module name="javax.jms.api"/>
    <module name="javax.management.j2ee.api"/>
  </dependencies>
</module>


To configure the module we need to edit the JBoss configuration file. For this I started with standalone-full.xml, as it already has HornetQ configured (which saves quite a lot of effort). We need to add the ActiveMQ resource adapter, which we do by changing the line
<subsystem xmlns="urn:jboss:domain:resource-adapters:1.0"/>
to
        <subsystem xmlns="urn:jboss:domain:resource-adapters:1.1">
            <resource-adapters>
                <resource-adapter id="activemq-rar.rar">
                    <module slot="main" id=" org.apache.activemq "/>
                    <transaction-support>NoTransaction</transaction-support>
                    <config-property name="ServerUrl">
                        tcp://localhost:61616
                    </config-property>
                    <connection-definitions>
                        <connection-definition class-name="org.apache.activemq.ra.ActiveMQManagedConnectionFactory" jndi-name=" java:/AMQConnectionFactory " enabled="true" use-java-context="true" pool-name="AMQConnectionFactory"/>
                    </connection-definitions>
                    <admin-objects>
                                         <admin-object class-name="org.apache.activemq.command.ActiveMQQueue" jndi-name=" queue/JMSBridgeTargetQ " use-java-context="true" pool-name="target_queue">
                                            <config-property name="PhysicalName">
                                                JMSBridgeTargetQ
                                            </config-property>
                                        </admin-object>
                                    </admin-objects>
                </resource-adapter>
            </resource-adapters>
        </subsystem>

This creates a resource adapter that uses the org.apache.activemq module we created earlier and connects to the remote ActiveMQ server running on tcp://localhost:61616. It registers a connection factory called java:/AMQConnectionFactory that will allow us to connect to the remote server, and creates a local JNDI entry of queue/JMSBridgeTargetQ that will bind to the ActiveMQ queue called JMSBridgeTargetQ.

The next step is to configure the bridge and the local queue. We edit the hornetq subsystem to add a JMS bridge after the definition of the hornetQ server.
  
...
            </hornetq-server>
            <jms-bridge name="simple-jms-bridge">
                <source>
                    <connection-factory name="ConnectionFactory"/>
                    <destination name="queue/JMSBridgeSourceQ"/>
                </source>
                <target>
                    <connection-factory name="AMQConnectionFactory"/>
                    <destination name="queue/JMSBridgeTargetQ"/>
                </target>
                <quality-of-service>AT_MOST_ONCE</quality-of-service>
                <failure-retry-interval>1000</failure-retry-interval>
                <max-retries>-1</max-retries>
                <max-batch-size>10</max-batch-size>
                <max-batch-time>100</max-batch-time>
            </jms-bridge>
...

This creates a bridge that will use the connection factory called ConnectionFactory to consume from the local queue with the JNDI name queue/JMSBridgeSourceQ. It will then use the connection factory called AMQConnectionFactory (which is created by our resource adapter) to send the messages to the queue with the JNDI name queue/JMSBridgeTargetQ, which is mapped by our resource adapter to the remote ActiveMQ queue. We also need to create a local queue (named JMSBridgeSourceQueue, with the JNDI entry queue/JMSBridgeSourceQ) in the jms-destinations section of the configuration.
  
                <jms-destinations>
                    <jms-queue name="JMSBridgeSourceQueue">
                        <entry name="java:/queue/JMSBridgeSourceQ"/>
                        <entry name="java:jboss/exported/jms/queue/JMSBridgeSourceQ"/>
                        <durable>true</durable>
                    </jms-queue>
                </jms-destinations>


This queue has two JNDI names, to allow it to be accessed both internally (by the bridge) and externally (by our client).
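
If you prefer not to edit the XML by hand, the equivalent queue can be created through the JBoss CLI; the following is a sketch, assuming the default hornetq-server name:

/subsystem=messaging/hornetq-server=default/jms-queue=JMSBridgeSourceQueue:add(entries=["java:/queue/JMSBridgeSourceQ","java:jboss/exported/jms/queue/JMSBridgeSourceQ"])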

To create the corresponding destination on the ActiveMQ side, we start ActiveMQ using the bin/activemq start command, and use the ActiveMQ hawtio console (http://localhost:8161/hawtio) to create a new JMS queue by browsing to ActiveMQ -> Broker -> Localhost -> Queue and selecting Create. Name the queue JMSBridgeTargetQ for this example.



This is all the configuration necessary. We should be able to start the WildFly server and see that the bridge works and is connected to ActiveMQ:

...
13:43:11,959 INFO  [org.jboss.as.remoting] (MSC service thread 1-2) JBAS017100: Listening on 127.0.0.1:9999
13:43:12,447 INFO  [org.jboss.as.remoting] (MSC service thread 1-2) JBAS017100: Listening on 127.0.0.1:4447
13:43:12,452 INFO  [org.jboss.as.connector.deployers.RaXmlDeployer] (MSC service thread 1-2) IJ020001: Required license terms for file:/home/matt/jboss-as-7.2.0.Final/modules/system/layers/base/org/apache/activemq/main/./
13:43:12,511 INFO  [org.jboss.as.connector.deployment] (MSC service thread 1-2) JBAS010406: Registered connection factory java:/AMQConnectionFactory
13:43:12,522 INFO  [org.jboss.as.connector.deployment] (MSC service thread 1-2) JBAS010405: Registered admin object at java:/queue/JMSBridgeTargetQ
13:43:12,538 INFO  [org.hornetq.core.server] (MSC service thread 1-1) HQ221024: Started Netty Acceptor version 3.6.2.Final-c0d783c 127.0.0.1:5455 for CORE protocol
13:43:12,550 INFO  [org.jboss.as.connector.deployers.RaXmlDeployer] (MSC service thread 1-2) IJ020002: Deployed: file:/home/matt/jboss-as-7.2.0.Final/modules/system/layers/base/org/apache/activemq/main/./
13:43:12,555 INFO  [org.hornetq.core.server] (MSC service thread 1-1) HQ221024: Started Netty Acceptor version 3.6.2.Final-c0d783c 127.0.0.1:5445 for CORE protocol
13:43:12,558 INFO  [org.hornetq.core.server] (MSC service thread 1-1) HQ221009: Server is now live
13:43:12,561 INFO  [org.hornetq.core.server] (MSC service thread 1-1) HQ221003: HornetQ Server version 2.3.0.CR1 (buzzzzz!, 122) [1ef84f49-88d8-11e3-a2ac-f9239574df9d]
13:43:12,584 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-1) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS]
13:43:12,600 INFO  [org.jboss.as.connector.deployment] (MSC service thread 1-1) JBAS010401: Bound JCA ConnectionFactory [java:/AMQConnectionFactory]
13:43:12,611 INFO  [org.jboss.as.connector.deployment] (MSC service thread 1-1) JBAS010401: Bound JCA AdminObject [java:/queue/JMSBridgeTargetQ]
13:43:12,683 INFO  [org.jboss.as.messaging] (ServerService Thread Pool -- 58) JBAS011601: Bound messaging object to jndi name java:jboss/exported/jms/RemoteConnectionFactory
13:43:12,686 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221005: trying to deploy queue jms.queue.JMSBridgeSourceQueue
13:43:12,733 INFO  [org.jboss.as.messaging] (ServerService Thread Pool -- 60) JBAS011601: Bound messaging object to jndi name java:jboss/exported/jms/queue/JMSBridgeSourceQ
13:43:12,738 INFO  [org.jboss.as.messaging] (ServerService Thread Pool -- 60) JBAS011601: Bound messaging object to jndi name java:/queue/JMSBridgeSourceQ
13:43:12,747 INFO  [org.jboss.as.messaging] (ServerService Thread Pool -- 59) JBAS011601: Bound messaging object to jndi name java:/ConnectionFactory
13:43:12,916 INFO  [org.jboss.as.connector.deployment] (MSC service thread 1-2) JBAS010406: Registered connection factory java:/JmsXA
13:43:12,968 INFO  [org.hornetq.ra] (MSC service thread 1-2) HornetQ resource adaptor started
13:43:12,969 INFO  [org.jboss.as.connector.services.resourceadapters.ResourceAdapterActivatorService$ResourceAdapterActivator] (MSC service thread 1-2) IJ020002: Deployed: file://RaActivatorhornetq-ra
13:43:12,974 INFO  [org.jboss.as.connector.deployment] (MSC service thread 1-1) JBAS010401: Bound JCA ConnectionFactory [java:/JmsXA]
13:43:13,286 INFO  [org.jboss.messaging] (ServerService Thread Pool -- 58) JBAS011610: Started JMS Bridge simple-jms-bridge
13:43:13,439 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on http://127.0.0.1:9990/management
13:43:13,440 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990
13:43:13,440 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss AS 7.2.0.Final "Janus" started in 9912ms - Started 169 of 228 services (58 services are passive or on-demand)


We can now place messages on the JMSBridgeSourceQ in JBoss, and they will end up on the JMSBridgeTargetQ in ActiveMQ.
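
As a quick test, a standalone JMS client along the following lines should be able to put a message on the source queue. This is a sketch only: it assumes the default remoting port (4447), an application user created with add-user.sh (the appuser credentials below are hypothetical) and the jboss-client jar on the classpath.

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class BridgeClient {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "remote://localhost:4447");
        env.put(Context.SECURITY_PRINCIPAL, "appuser");       // hypothetical application user
        env.put(Context.SECURITY_CREDENTIALS, "apppassword"); // hypothetical password
        Context ctx = new InitialContext(env);

        // These names match the java:jboss/exported entries in the configuration above
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/queue/JMSBridgeSourceQ");

        Connection connection = cf.createConnection("appuser", "apppassword");
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("Hello via the bridge"));
        } finally {
            connection.close();
        }
        ctx.close();
    }
}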

It is possible to reverse the direction of the bridge to have messages flow the other way, although bridging is less important for consumers than for producers. A consumer can usually consume messages directly from the JNDI name mapped by the resource adapter (in this case queue/JMSBridgeTargetQ), rather than bridging the messages to a local queue and consuming from there.




MIDDLEWARE INSIGHT - C2B2 Newsletter Issue 14


MIDDLEWARE INSIGHT

At the Hub of the Middleware Industry

 


Featured News

 

5 Reasons to use a Java Data Grid in your application - read more
January Oracle Fusion Middleware Patches Released - read more

 



JAVA / OPEN SOURCE

Java & JEE
Top Java Technical Articles of 2013 - read more on the Oracle blog
Java Middleware 2014 Predictions - Steve Millidge for The Server Side - read the article 
Java Magazine: Big Data - read the magazine here  
Survey Sez: Java EE 7 Adoption Looking Pretty Good! - read more  
JavaOne 2013 on Parleys.com: the Top 15 Most-Viewed Sessions - watch here
GlassFish
London GlassFish User Group - 'Come and Play! with Java EE 7' with A.Goncalves - watch the presentation video
Using updated release of Jersey, Tyrus, Weld, ... in GlassFish 4 - read more on The Aquarium 
WildFly 8 vs. GlassFish 4 - Which Application Server to Choose - read the blog post by Markus Eisele
On the deck but still flapping: GlassFish soldiers on - read more on Jaxenter.com


ORACLE

Patches
Oracle spoils your day with NEARLY 150 patches - read more on The Register 
Mammoth batch of Oracle patches released today - read more on Jaxenter.com
Other
Top 10 solution documents for Weblogic Server J2EE - read more on the Oracle Blog
How to Avoid the Perils of Patchwork Cloud Integration - read more on the SOA & BPM Community Blog
Chalk Talk with John: How Does SOA Add Value to Your Enterprise? By John Brunswick - read more on the Oracle Blog 
Cameron Purdy's answer to: Is it really the time to ditch Java for a more secured programming Language? - see more on Quora 

JBOSS & RED HAT

Connecting JBoss WildFly 7 to ActiveMQ 5.9 - read the blog post by Matt Brasier
Red Hat: We CAN be IaaSed about OpenStack cloud - read more on The Register 
Role Based Access Control in WildFly 8 (Tech Tip #12) - read more on Arun Gupta's Blog
Red Hat unveils JBoss Data Grid 6.2 - find out more
JBoss Forge 2.0.0.CR2 is now available! - read more and download here
Red Hat lifts lid on high-performance garbage collector - read more on Jaxenter.com
Inside the JBoss AS 7 modularity - read the article by Dane Marcelo
CentOS makes friends with Red Hat - read the article by Elliot Bentley
JBoss Data Grid Webinar Series - register or watch on demand  
London JBUG next event: 'Extending WildFly' by Tomaz Cerar, 12th of February - find out more and register here


SPRING SOURCE & PIVOTAL

Spring @Async and exception handling - read more on Dzone
Using Spring Integration and Batch Together - read more on Dzone
Migrating from Spring Framework 3.2 to 4.0.1 - read more on the SpringSource Blog
Webinar: Introduction to Spring Framework 4.0 - watch the video here
Webinar: Intro to Apache Tomcat 8 - find out more and register here
Pivotal’s Top 6 Predictions for the Developer in 2014 - read more on the Pivotal Blog



An Introduction to Connection Pools in Glassfish

Introduction

In this blog post I will be taking an introductory look at connection pools. To begin with I will look at answering some basic questions:

What is a connection pool?
Why are they needed?
How do they work?

I will then show how to configure them in Glassfish and look at some best-practice settings.

What is a connection pool?

So, firstly, what is a connection pool?

A connection pool is a store of database connections that can be used and (most importantly) re-used to connect to a database.

Why are they needed?

Database connections are expensive to create and also to maintain. The reasons for this are many but include:


  • Establishing a network connection to the database server
  • Parsing the connection string information
  • Performing user authentication
  • Initialising the database connection in the database
  • Establishing transactional contexts


Now, if you have a web application with a single user you can simply create a database connection at the start of the user session and then close it at the end. However, this is a highly unlikely scenario!

Now, imagine a more realistic scenario where your web application will be accessed by hundreds or thousands of users. If each user's session creates a database connection, firstly your users will experience a delay whilst that connection is set up and secondly the overall performance of your system will deteriorate.

So, in answer to the question why are they needed - they improve both the performance and scalability of your system.

How do they work?

Rather than creating a new connection each time one is needed a pool of connections is created when your application server is started. These connections can then be used and re-used. When a new connection is required the pool is searched for an available connection. If one is available it is returned to the requester. If one is not available then the request is either queued or a new connection is established depending on how many connections are already in the pool and how the pool is configured. Once the connection is finished with, rather than closing it the connection is returned to the connection pool for use by the next requester.
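
To make this concrete, here is a minimal sketch of what that looks like from application code; the JDBC resource name jdbc/test is hypothetical, and the DataSource hides all of the pooling logic:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.sql.DataSource;

public class CustomerDao {

    // Injected by the container; "jdbc/test" is a hypothetical JDBC resource name
    @Resource(lookup = "jdbc/test")
    private DataSource dataSource;

    public String findCustomerName(int id) throws SQLException {
        // getConnection() borrows a connection from the pool rather than opening a new one
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT name FROM customer WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        } // close() returns the connection to the pool instead of tearing it down
    }
}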

OK, that's the theory out of the way. So, how do they work in practice?

Connection Pools in Glassfish

For this practical demonstration I will be using the following:

Operating System - Ubuntu 12
Java Version - 1.7.0_05
App Server - Glassfish 4.0
Database - MySQL 5.5
JDBC Driver - MySQL connector 5.1.25 (Available from http://dev.mysql.com/downloads/connector/j/)

If you are using different versions of any of the above then the results may differ.

Installation

Firstly, extract the JDK and Glassfish zip files to a directory of your choosing. For the purposes of this demo I installed them in my home directory:

/home/andy

For convenience add the Java and Glassfish bin directories to your PATH and set the JAVA_HOME environment variable to the directory where you unzipped Java.

On my machine they are set to the following:

PATH=/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/andy/jdk1.7.0_05/bin:/home/andy/glassfish4/bin
JAVA_HOME=/home/andy/jdk1.7.0_05

Finally, we need to install MySQL.

On Ubuntu we can use the Advanced Packaging Tool with the following command:

sudo apt-get install mysql-server

You will be asked to set a password for the root user. Remember this as we will be using it later.

Next up extract the MySQL JDBC driver and copy the jar file (in my case mysql-connector-java-5.1.25-bin.jar) to the following directory:

glassfish4/glassfish/domains/domain1/lib/ext

Creating the connection pool - From the console

Firstly, start up Glassfish. This can be done from a terminal window with the following command:

asadmin start-domain

Once the server has started you can access the console at http://localhost:4848

In the left-hand panel go to Resources -> JDBC -> JDBC Connection Pools

Click New and enter the following values:


  • Pool Name - test-pool
  • Resource Type - javax.sql.DataSource
  • Driver Vendor - MySql


Click Next and then click Finish on the next screen, accepting the default values.

Testing the connection

Click on the connection pool name (test-pool).
Click the Ping button at the top of the screen.

You should see a message stating Ping Succeeded.

Creating the connection pool - From the command line

You can also create a connection pool using the asadmin command line tool with the following command (substituting your password for the test one):

asadmin create-jdbc-connection-pool --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlDataSource --restype javax.sql.DataSource --property user=root:password=test:DatabaseName=test:ServerName=localhost:port=3306 test-pool

To test the connection from the command line run the following command:

asadmin ping-connection-pool test-pool
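
For an application to look the pool up via JNDI you would normally also create a JDBC resource that points at it, for example (using the hypothetical resource name jdbc/test):

asadmin create-jdbc-resource --connectionpoolid test-pool jdbc/test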

OK, so now we have created our connection pool, I'm going to look at a few best practices for configuring it.

Connection Pool Sizing

Connection pools should be sized to cater for the maximum number of concurrent connections.

The maximum size should be set in accordance with the maximum number of client requests your system can process concurrently. If your application receives 100 concurrent requests and each of those requires a database connection, then with a pool of anything less than 100 connections some of those requests will have to wait for a connection to either be created or become available.

The minimum size of the connection pool ensures that a number of connections to the database are always established - this means that if you have a pool with a minimum size of 10 and you receive 10 requests then all can retrieve a database connection without waiting for the pool to create a new connection.

There is always a trade-off in setting these values: the minimum value requires that those connections are maintained regardless of system load, and the maximum value could potentially require a large number of concurrent database connections.

These values will be different for everyone. There are no magic numbers so it's a case of understanding your application, what your expected load (both steady and worst case) will be, monitoring to see if this changes and setting values accordingly.

Setting min/max sizes - via the console

Click on the connection pool name and under Pool Settings you will find Initial and Minimum Pool Size and Maximum Pool Size. Set these to your required sizes.

Setting min/max sizes - via the command line

To set the initial & minimum pool size:

asadmin set resources.jdbc-connection-pool.test-pool.steady-pool-size=10

To set the maximum pool size:

asadmin set resources.jdbc-connection-pool.test-pool.max-pool-size=200

Connection Validation

Connection validation ensures that connections aren't assigned to your application after the connection has already gone stale.

Connection validation is always a trade-off between how sure you want to be that a connection is valid and the performance impact of that validation. There is also a negative performance impact when your application has to return an invalid connection and borrow a new one, so finding the right balance is key.

Before a connection from the pool is used, a simple query is sent to test it. If there is an issue with the connection it is removed from the pool and another one is used. The problem here is that if you have a failure such as the database being down and you have a large number of connections, then each of those connections will be tested and removed in turn.

To avoid this you can configure connection validation so that if one connection fails, all connections are closed.

Connection Validation - via the console

Click on the name of the pool
Select the advanced tab
Scroll down to Connection Validation and select the following settings:


  • Connection Validation required
  • Validation method - custom-validation
  • Validation class name - MySQLConnectionValidation


From the same screen you can also set whether to close all connections on failure.

Connection Validation - via the command line

To turn on connection validation :

asadmin set resources.jdbc-connection-pool.test-pool.connection-validation-method=custom-validation

asadmin set resources.jdbc-connection-pool.test-pool.validation-classname=org.glassfish.api.jdbc.validation.MySQLConnectionValidation

asadmin set resources.jdbc-connection-pool.test-pool.is-connection-validation-required=true

You can also set whether to close all connections on failure with the following command:

asadmin set resources.jdbc-connection-pool.test-pool.fail-all-connections=true

Statement and Connection Leak Detection

Statement and Connection Leak Detection allows you to set time-outs so that if Statements or Connections haven't been closed by an application they can be logged and/or closed.

In testing I would recommend setting it so that leaks are simply logged but not closed. However, in production I would recommend that leaks are closed. If you have tested thoroughly enough then there shouldn't be any, but if there are, you don't want to leave them open. Monitoring software should be configured to alert on detected leaks so that further investigation can take place and fixes can be put in place.

By default these values are set to 0 meaning detection is turned off.

Setting Statement and Connection Leak Detection - via the console

Click on the name of the pool
Select the advanced tab
Scroll down to Connection Settings
Set the Connection Leak Timeout and Statement Leak Timeout values

Setting Statement and Connection Leak Detection - via the command line

You can set the time-out values with the following commands:

asadmin set resources.jdbc-connection-pool.test-pool.statement-leak-timeout-in-seconds=5
asadmin set resources.jdbc-connection-pool.test-pool.connection-leak-timeout-in-seconds=5

Once these values are set, if connection or statement leaks are detected you will see messages similar to the ones below in the application log.

WARNING: A potential connection leak detected for connection pool test-pool. The stack trace of the thread is provided below :
WARNING: A potential statement leak detected for connection pool test-pool. The stack trace of the thread is provided below :
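
For context, a connection leak is usually nothing more exotic than a code path that borrows a connection and never returns it. A hypothetical example that would eventually trip the detection above:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class LeakyDao {

    private final DataSource dataSource;

    public LeakyDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Anti-pattern: nothing here ever calls close(), so the connection and
    // statement are never returned to the pool and will be flagged as leaks
    public void leakyQuery() throws SQLException {
        Connection conn = dataSource.getConnection();
        PreparedStatement ps = conn.prepareStatement("SELECT 1");
        ps.executeQuery();
        // method returns without closing ps or conn
    }
}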

At this point you can go back to your development team and get them to investigate the root cause, or smack them round the head, depending on your management style.  ;o)

Conclusion

Well, that's it for this blog. We've taken a brief look at connection pools and at how to create and configure them in Glassfish, along with a few best-practice settings to consider.

As with all server configuration settings you should always take a close look at your application's needs before making changes. You should always performance test and load test your application to ascertain the best settings, particularly before making changes in production. One size does not fit all! Once you have decided upon the optimal settings you should then monitor and re-evaluate regularly to ensure you are always running with the best settings.




Apache httpd 2.2 and not so sticky sessions

While I was troubleshooting a misbehaving application for one of our customers, I found out that session stickiness was not working as expected on the Apache server acting as a load balancer. I consider Apache to be a pretty solid product so I was quite surprised by the finding.

Scenario

Let's assume I have a web application deployed on two application servers: backend1 & backend2. Both servers are listening on port 8080. I want to access the application under a single URL, so I use Apache to balance load between the servers. The backend servers don't support session replication, therefore I would like to use sticky sessions.

The simplistic configuration could look like this:

ProxyPass / balancer://mycluster stickysession=lbcookie
<Proxy balancer://mycluster>
  BalancerMember http://<backend1>:8080 route=01
  BalancerMember http://<backend2>:8080 route=02
</Proxy>

It looks trivial: I define locations of the backend servers, enable the sticky sessions and specify the name of the cookie to be used to maintain the stickiness.

On the first request the cookie "lbcookie" is not present, therefore Apache will choose the server itself. The application will process the request and include its nodeId (01 or 02) in the cookie sent with the response. All subsequent requests will contain the cookie, and Apache should pick backend1 or backend2 depending on the cookie value, as specified by the route parameter. Except it doesn't!

Problem analysis

To quote the documentation: "The balancer extracts the value of the cookie and looks for a member worker with route equal to that value."

The documentation also says: "Some back-ends use a slightly different form of stickyness cookie, for instance Apache Tomcat. Tomcat adds the name of the Tomcat instance to the end of its session id cookie, separated with a dot (.) from the session id. Thus if the Apache web server finds a dot in the value of the stickyness cookie, it only uses the part behind the dot to search for the route."

It says it's looking for a member worker with a route equal to the cookie value unless the cookie contains a dot. Unfortunately, it will not consider the content of the cookie at all if there is no dot in the cookie value! I believe it's caused by this line:


https://github.com/apache/httpd/blob/2.2.x/modules/proxy/mod_proxy_balancer.c#L287

It searches for a dot, and if no dot is found then *route is set to NULL. Therefore the condition on line 289 is false and the content of the cookie is not taken into consideration! This effectively disables the session stickiness.

How to work around it?

It should be clear by now: changing the backend server configuration to return cookie values of node.01 and node.02 (anything with the route after a dot) will do the trick!
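
As a concrete illustration, if the backends were Tomcat instances using the standard JSESSIONID cookie for stickiness, the equivalent fix is to set jvmRoute in each backend's server.xml; Tomcat then suffixes the session id with the dot-separated route that Apache expects:

<!-- server.xml on backend1; backend2 would use jvmRoute="02" -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="01">

The Apache side would then use stickysession=JSESSIONID in the ProxyPass line.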

PS: The bug has been reported to the httpd development team.


Hazelcast & Websockets

Over the past few years we have written many blogs demonstrating how Data Grid eventing frameworks can be hooked up to Websocket clients to deliver updates in real-time.  One Data Grid product we haven't looked at previously is Hazelcast, which recently looks to be gaining lots of traction in the marketplace and also interest from our customers.  Hazelcast has a developer-friendly API and can be set up with minimal effort; it also boasts an almost identical set of features when running in client-server mode compared to the traditional embedded cache architecture.  As we have discussed before, the advantages in terms of de-coupling, independent tuning, scalability and so on offered by client-server architectures are among the more attractive features offered by Data Grids.  We are going to set up and run a very basic Hazelcast Data Grid in client-server mode.  A background thread will generate some updates to fictitious Stock Market data held in the grid and we'll use Hazelcast's event listener framework to publish these events to our clients.


JSR-356 is available as part of the EE7 release and as a result vendors are incorporating it into their compliant Application and Web Server distributions.  Glassfish, Wildfly and Tomcat all come with implementations, and we are going to use Tomcat's Web Socket capabilities to host the Web Application detailed in this post.  JSR-356 Web Sockets have been available in Tomcat since the later releases of 7.0.x and are also present as standard in the new 8.0.x Web Container.  We'll write a simple Web Application exposing a server-side Web Socket endpoint.  The Web Application will register an event listener on our Hazelcast Data Grid and listen for events.  Any Web Socket clients connecting to the application will receive push notifications of these events and, most importantly, display them in an aesthetically pleasing graph.


Lastly, JSR-356 also mandates the capability to define Web Socket client endpoints as well as server endpoints.  This allows "any" client application to define and create a Web Socket connection to a server-side endpoint.  We'll take a look at this too and show how we can connect a legacy Swing application to our Web Socket application running on Tomcat to receive the same push events from the Data Grid.

All the code for this prototype can be found here, use the following command to clone the repository:

 git clone https://github.com/mark-addy/hazelcast-websocket-demo.git  

Build everything by executing the following command from the parent hazelcast-websocket-demo project root directory:

 mvn clean install -DskipTests  

Now we'll step through everything.

hazelcast-shared

The hazelcast-shared project contains a Java class for the cached Stock records held in the data grid and a class to hold the results of a call to get all the Stock records currently held in the grid.  All the other projects depend on this one.

hazelcast-cluster

I've cheated just a little bit in setting up our Hazelcast cluster.  Rather than running multiple JVMs, I'm creating the cluster in a single Java process, populating the cluster with some made-up Stock Market data and then starting a background thread to randomly update the prices of those stocks in the cluster.  It's all set up as a JUnit test and this will run forever unless you send some input to the console!

To programmatically create a Hazelcast cluster node you just need to use the following syntax.

Config config = new Config();
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);

By default the first cluster node will attempt to bind to all interfaces and listen on port 5701.  Subsequent nodes search starting at 5701 for the next available port.
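
If you need deterministic ports (to suit firewall rules, for example), the bind behaviour can be pinned through the same programmatic Config; a minimal sketch:

Config config = new Config();
// Listen on a fixed port only
config.getNetworkConfig().setPort(5701);
// Fail rather than silently moving to the next free port
config.getNetworkConfig().setPortAutoIncrement(false);
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);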

The Hazelcast cluster needs to be running for the Web Application to start up, so once you've installed this project, run the class StockMapEntryListenerTest as a JUnit test.

hazelcast-web

The Web Application contains a ServletContextListener which instantiates a singleton holding an instance of a "HazelcastClient", which we will use to connect to the cluster created in the previous step.  I'm doing this here so that the relatively expensive operation of creating a client connection to the server-side cluster happens as part of the Web Application start-up life-cycle.

@WebListener
public class StockServletContextListener implements ServletContextListener {

    private static final Logger LOG = Logger.getLogger(StockServletContextListener.class.getName());

    @Override
    public void contextDestroyed(ServletContextEvent arg0) {
        LOG.info("Servlet Context Destroyed - Shutting down Hazelcast Client");
        ClientInstance.getInstance().getClient().shutdown();
    }

    @Override
    public void contextInitialized(ServletContextEvent servletContextEvent) {
        LOG.info("Servlet Context Initialized - Creating Hazelcast Client");
        ClientInstance.getInstance();
    }
}

In the ClientInstance class you can see that the code to create a Hazelcast client using the vanilla settings is very straightforward.  All we need to pass into the client configuration is the IP address of at least one active member of the server-side cluster.  Provided we have supplied a valid address, the client will connect and will also fail over to another cluster member should the initial connection fail.

public class ClientInstance {

    private final HazelcastInstance client;
    private final IMap<String, StockRecord> stockMap;

    private ClientInstance() {
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.addAddress("127.0.0.1:5701");
        client = HazelcastClient.newHazelcastClient(clientConfig);
        stockMap = client.getMap("stock-map");
    }

    private static class ClientInstanceHolder {
        private static final ClientInstance INSTANCE = new ClientInstance();
    }

    public static ClientInstance getInstance() {
        return ClientInstanceHolder.INSTANCE;
    }

    public HazelcastInstance getClient() {
        return client;
    }

    public IMap<String, StockRecord> getMap() {
        return stockMap;
    }
}

Note that the server-side Hazelcast cluster must be up and running before attempting to deploy / start the Web Application; if not, initialization of the singleton will fail and the Web Application won't start.  This is because the default Hazelcast client implementation will try three times to connect to the cluster before failing.  You obviously have control over the number of attempts (see the sketch below) and you should look here if you want to make your client more resilient to the absence of any available server-side nodes to connect to.
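
For instance, raising the retry limit might look something like the following sketch; the method names are assumptions for the Hazelcast 3.x client API of the time, so check them against your version:

ClientConfig clientConfig = new ClientConfig();
clientConfig.addAddress("127.0.0.1:5701");
// Assumed 3.x API: retry up to 10 times, pausing 5 seconds between attempts
clientConfig.setConnectionAttemptLimit(10);
clientConfig.setConnectionAttemptPeriod(5000);
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);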

Lastly, we have the Web Socket server-side endpoint.  Tomcat has now put its proprietary Web Socket implementation into maintenance mode and replaced it with a JSR-356 implementation; here is our JSR-356 code:

@ServerEndpoint(value = "/websocket/stock")
public class StockWebSocket implements EntryListener<String, StockRecord> {

    private static final Log LOG = LogFactory.getLog(StockWebSocket.class);
    private static final Set<StockWebSocket> connections = new CopyOnWriteArraySet<StockWebSocket>();

    private Session session;
    private String listenerId = null;

    public StockWebSocket() {
        listenerId = ClientInstance.getInstance().getMap().addEntryListener(this, true);
    }

    @OnOpen
    public void start(Session session) {
        this.session = session;
        connections.add(this);
    }

    @OnClose
    public void end() {
        if (listenerId != null) {
            ClientInstance.getInstance().getMap().removeEntryListener(listenerId);
        }
        connections.remove(this);
    }

    @OnMessage
    public void incoming(String message) {
        if (message.equals("open")) {
            Set<String> keys = ClientInstance.getInstance().getMap().keySet();
            StockResponse response = new StockResponse();
            response.setStocks(keys);
            send(this, ResponseSerializer.getInstance().serialize(response));
        }
    }

    @OnError
    public void onError(Throwable t) throws Throwable {
    }

    private static void send(StockWebSocket client, String message) {
        try {
            synchronized (client) {
                client.session.getBasicRemote().sendText(message);
            }
        } catch (IOException e) {
            LOG.debug("Failed to send message to client", e);
            connections.remove(client);
            try {
                client.session.close();
            } catch (IOException ioException) {
            }
        }
    }

    @Override
    public void entryAdded(EntryEvent<String, StockRecord> event) {
    }

    @Override
    public void entryRemoved(EntryEvent<String, StockRecord> event) {
    }

    @Override
    public void entryUpdated(EntryEvent<String, StockRecord> event) {
        send(this, ResponseSerializer.getInstance().serialize(event.getValue()));
    }

    @Override
    public void entryEvicted(EntryEvent<String, StockRecord> event) {
    }
}

The Web Socket implements com.hazelcast.core.EntryListener and therefore has access to the associated event callbacks.
  • When a client connects to the Web Socket we retrieve the singleton Hazelcast client instance and register ourself as a listener for all events in the "stock-map" cache.  
  • Clients send a simple "open" text message when they connect.  The Web Socket responds with an instance of StockResponse containing a Collection of all the Stock symbols currently in the Data Grid.  Ok, so in the real world we wouldn't really ask the Web Socket for a complete list of all the records in the Data Grid!
  • When the entryUpdated callback occurs, we serialize the updated cache value (StockRecord) to JSON and send this to the associated client.
  • When the client disconnects the Web Socket we retrieve the singleton Hazelcast client instance and remove ourself from the listeners.  
Hazelcast supports a number of strategies for EntryListeners; in this example we are simply listening for all events in the "stock-map".  Other options exist for subscribing only to events for a particular key, or to events for records filtered by a Predicate; see the docs for more information. The available overloads are shown below.

String addEntryListener(EntryListener<K, V> listener, boolean includeValue)
String addEntryListener(EntryListener<K, V> listener, K key, boolean includeValue)
String addEntryListener(EntryListener<K, V> listener, Predicate<K, V> predicate, boolean includeValue)
String addEntryListener(EntryListener<K, V> listener, Predicate<K, V> predicate, K key, boolean includeValue)
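
As an illustrative sketch, the Predicate variant could be used to subscribe only to larger price movements; SqlPredicate and the "price" attribute name are assumptions based on the StockRecord class, and listener is any EntryListener implementation such as the StockWebSocket above:

IMap<String, StockRecord> map = ClientInstance.getInstance().getMap();
// Only deliver events for entries whose (assumed) price attribute exceeds 100
String listenerId = map.addEntryListener(listener, new SqlPredicate("price > 100"), true);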

To run the Web Application you'll need an instance of Tomcat; I used Tomcat 8.0.1 and you can get a copy from here.  Once you've installed Tomcat, ensure that your Hazelcast cluster is up and running before attempting to deploy the Web Application.
  • Build the Web Application (you should have done this right at the beginning)
  • Copy the build artifact into the $TOMCAT_HOME/webapps directory.
  • Start Tomcat (bin/startup.sh)
You should be able to access the Web Application at this URL:

http://localhost:8080/hazelcast-web/stockticker.jsp

Your browser will make a Web Socket connection to the Tomcat instance and send an "open" text message to retrieve all the Stock keys in the Data Grid.  The JavaScript in the browser then creates a chart "series" for each Stock and waits for updates to be pushed.  The screen should hopefully look similar to this after a few minutes:


We're using HighCharts for the cool graphs.

hazelcast-client

Java's Swing framework might not be as cool as HTML5 & Web Sockets, but there are still a vast number of "legacy" Swing applications out in the real world and we come across them regularly.

As already discussed, JSR-356 permits client Web Socket endpoints and Tomcat provides support for standalone clients to make Web Socket connections to server-side endpoints.  This last project demonstrates this capability and allows us to write a Swing client that connects to the Tomcat Web Socket we have just created and renders Stock price changes as they occur in a JFreeChart.

Traditionally Swing clients might use RMI or JMS to retrieve data from the server side but these methods can be problematic in environments with firewall restrictions so using Web Sockets might have some potential in certain use-cases.

Firstly we need to set up some dependencies for our client to make a Web Socket connection.  Below are the required libraries, taken from this project's POM:

<dependency>
    <groupId>org.apache.tomcat</groupId>
    <artifactId>tomcat-websocket-api</artifactId>
    <version>8.0.1</version>
</dependency>
<dependency>
    <groupId>org.apache.tomcat</groupId>
    <artifactId>tomcat-websocket</artifactId>
    <version>8.0.1</version>
</dependency>
<dependency>
    <groupId>org.apache.tomcat</groupId>
    <artifactId>tomcat-juli</artifactId>
    <version>8.0.1</version>
</dependency>
<dependency>
    <groupId>org.apache.tomcat</groupId>
    <artifactId>tomcat-util</artifactId>
    <version>8.0.1</version>
</dependency>
<dependency>
    <groupId>org.apache.tomcat</groupId>
    <artifactId>tomcat-coyote</artifactId>
    <version>8.0.1</version>
</dependency>

The JSR-356 Client Web Socket endpoint looks remarkably similar to the server side version, and it is, so I'm not going to go into too much detail here.  The standalone client uses exactly the same steps as the browser to connect, retrieve the current Stock list and then receive the pushed update events.

@ClientEndpoint
public class StockWebSocketClient {

    private StockClient demo;
    private Session session;

    public StockWebSocketClient(StockClient demo) throws DeploymentException, IOException {
        this.demo = demo;
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        String uri = "ws://localhost:8080/hazelcast-web/websocket/stock";
        container.connectToServer(this, URI.create(uri));
    }

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("opened");
        this.session = session;
        send("open");
    }

    @OnError
    public void onError(Throwable t) {
        System.out.println("error " + t.getMessage());
        t.printStackTrace();
    }

    @OnClose
    public void onClose() {
        System.out.println("close");
    }

    @OnMessage
    public void onMessage(String message) {
        System.out.println("message : " + message);
        Object messageObject = ResponseSerializer.getInstance().deserialize(message);
        if (messageObject instanceof StockRecord) {
            demo.updatePrice((StockRecord) messageObject);
        } else if (messageObject instanceof StockResponse) {
            System.out.println("Stock Response received : " + Arrays.deepToString(((StockResponse) messageObject).getStocks().toArray()));
            demo.renderGraph((StockResponse) messageObject);
        }
    }

    private void send(String message) {
        try {
            synchronized (this) {
                session.getBasicRemote().sendText(message);
            }
        } catch (IOException e) {
            try {
                session.close();
            } catch (IOException ioException) {
            }
        }
    }
}

Ensure that Tomcat is up and running, then start the standalone client by running the main method found in the Controller class.  You should get a screen like the one shown below, with updates reflected in the plotted series:


Conclusion

Overall we have created the architecture shown in the image below: events from the Hazelcast grid are captured by each Web Socket connection established inside Tomcat, allowing us to push updates to browsers and standalone clients.





Wildfly 8.0.0.Final is Released



On Tuesday JBoss announced the release of WildFly 8.0.0.Final with full certification for the Java EE7 Web and Full profiles, which makes JBoss the first of the "big" vendors to distribute a certified EE7 release.  This certainly represents a faster turnaround than their corresponding EE6 delivery cycle, which also incorporated a complete overhaul of the traditional JBoss Application Server architecture.

Let's take a quick look at some of the features.

Listening Ports


JBoss 4.3 Port Listing
For those old and grey enough to remember earlier JBoss releases, each service provided by the container would typically command its own listening port.  For those of you who are not old enough, or chose not to remember, I have captured the full port listing from JBoss 4.3 for posterity.  Either way you'll be pleased to hear that WildFly 8 takes advantage of the HTTP Upgrade feature, in the same way that Web Sockets work, to multiplex all application traffic over a single port: 8080 by default.

For management console access and CLI scripting WildFly retains the domain administration port, 9990 by default.  This ensures that admin traffic remains separate from the application and can therefore be controlled appropriately with firewall rules.

Web Server - Undertow


Previous incarnations of the JBoss / WildFly Web Container relied on an implementation based on Apache Tomcat's DNA.  This latest release sees the Web implementation entirely replaced with Undertow, a high-performance web server with support for JSR-356 Web Sockets and both blocking and non-blocking handlers.  Public independent benchmarks are favourable and some can be found here, with claimed support for over a million connections.

Role Based Access Control and Audit


The concept of centralised domain management was introduced into JBoss Application Server back in the AS 7 releases, before the "WildFly" name was born.  However, up until now there has been no capability to assign role-based privileges to domain user accounts accessing the administration console or CLI tooling, so anyone accessing these tools could effectively do anything they wanted.  This latest release introduces Role Based Access Control and integration with LDAP to manage users' domain privileges.

Removed Services


Taking advantage of services whose support is no longer mandatory for EE7 certification, WildFly has removed JAX-RPC, CMP / EJB 2.1 Entities and JSR-88, the J2EE Application Deployment API.  No doubt this helped EE7 certification to be reached quickly, but it will also prevent potential users who are still running legacy Web Services and / or EJB Entity Beans from migrating directly to this release.

Ok, so that's just a short summary of the things you'll find or not find in WildFly 8.  For a full listing of all the changes take a look at the release notes here.
