
MIDDLEWARE INSIGHT - C2B2 Newsletter Issue 18



Featured News
What's New in Oracle SOA Suite 12c? - read more 
What's Happening with Java EE? - read more 


JAVA EE / OPEN SOURCE
What's Happening with Java EE? Short interview with David Delabassee, see here 
Java Magazine: The Java Virtual Machine, see more on the Oracle Blog 
It's time to begin JMS 2.1! Read more here 
Java EE Concurrency API Tutorial, read the article by Francesco Marchioni 
HornetQ and ActiveMQ: Messaging - the next generation, find out more on Jaxenter.com 
Spring Boot 1.1.4 supports the first stable Tomcat 8 release, read more on Jaxenter.com 
RxJava + Java8 + Java EE 7 + Arquillian = Bliss, read the article by Alex Soto 
The 5 Best New Features of the Raspberry Pi Model B+, read more on the Life Hacker website 
Spotlight on GlassFish 4.0.1: #2 Simplifying GF distributions, read more on the Aquarium blog 
Jersey SSE Capability in GlassFish 4.0.1, read the article by Abhishek Gupta 

ORACLE
SOA Suite 12c is available for download, find out more on the SOA Community Blog 
‘What's New in Oracle SOA Suite 12c?’ read the blog post by Andrew Pielage here  
‘What's New in Oracle SOA Suite 12c?’ Register for the C2B2 launch event in London on the 12th of September 
Oracle urges users to apply 113 patches pronto, read more on Jaxenter.com 
Docker, Java EE 7, and Maven with WebLogic 12.1.3, read the article by Bruno Borges
'Testing Java EE Applications on WebLogic Using Arquillian' with Reza Rahman, join the Oracle Webcast on the 29th of July

JBOSS & RED HAT
Red Hat JBoss Data Grid 6.3 is now available! Read more on Arun Gupta’s Blog 
JBoss-Docker shipping continues with launch of microsite, read more on Jaxenter.com 
Your tests assume that JBoss is up and running, read the article by Antonio Goncalves  
Rule the World - Practical Rules & BPM Development: join the London JBUG event on the 7th of August, find out more and register here 
Red Hat JBoss BRMS & JBoss BPM Suite 6.0.2.GA released into the wild, read more on Eric Schabell’s blog  
Hibernate Hidden Gem: The Pooled-Lo Optimizer, read the article by Vlad Mihalcea 
Camel on JBoss EAP with Custom Modules, read the article by Christian Posta 
Red Hat JBoss Fuse - Getting Started, Home Loan Demo Part, read the article by Christina Lin 

BIG DATA & DATA GRIDS
Processing on the Grid, read the article by Steve Millidge
James Governor: In-Memory Data Grid - Less Disruptive NoSQL, see more on the Hazelcast Blog 
Designing a Data Architecture to Support both Fast and Big Data, read more on Dzone 
Scaling Big Data fabrics, read the article by Mike Bushong 
Industry Analyst Insight on How Big Data is Bigger Than Data, read more on the Pivotal blog 


Getting the most out of WLDF Part 4: The Monitoring Dashboard

Read Part 1: "What is the WLDF?" here
Read Part 2: "Watches" here
Read Part 3: "Notifications" here

This is going to be a fairly short post, because there isn’t a huge amount to go into that we haven’t already covered!

The WLDF monitoring dashboard gives a visual representation of available metrics from WebLogic MBeans. If you know how to drag-and-drop, then you have all the technical ability you need.


In this blog post, I will refer to an annotated image with colour coded headings so you can see which part I’m talking about.


How do I find it?

On the console home page, look on the bottom right-hand corner for a link called “Monitoring Dashboard” and you’ll find yourself looking at something similar to the screen below (except without all the chart data!)

The annotations are explained below.


Creating and populating a view


We start at the top left with the “View List” tab and the “Metric Browser” tab. The chances are, when you first go to the Monitoring Dashboard, the View List will be shown. Each view can have many charts on it; the example above is a view called “Session” with a single chart called “Sessions”. Examples of views with more charts can be seen in the Built-in Views. To keep my example simple, I stayed with one chart in a fresh view.

Once your view and charts have been created, use the green play button to start recording metric values for the selected view. You will need to press the play button on each view you want to record metrics on, and use the red stop button to stop recording for the selected view. The octagonal red button will stop all collections.


Adding metrics to a chart

After having created a new view and added a chart, you will need to add some metrics. Because there are so many to choose from, you first need to narrow down the ones you’re concerned about.

If you're not sure how to find out which metrics are available, I covered that in part 2 of this series on watches. In that blog post, I found the open sessions count for a web application using the following string: (${com.bea:Name=AdminServer, Type=WebAppComponentRuntime//OpenSessionsCurrentCount} >= 1)

To find the same MBean in the Metric Browser, we need to make sure we’ve selected the right server first, as shown in the “com.bea:Name” segment of the string above. The “Type” corresponds to the “Types” section in the metric browser, so I’ve selected “WebAppComponent”.

That then gives me a list of instances, so I pick _/clusterjsp, since that’s the webapp I’m interested in, and then scroll to the OpenSessionsCurrentCount metric which I can drag and drop on the empty chart.
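Incidentally, everything the Metric Browser shows is just a WebLogic runtime MBean, so the same value can also be read programmatically if you want to feed it into your own tooling. Below is a minimal sketch of a JMX client, assuming the admin server listens on localhost:7001 and the WebLogic JMX client JAR (wljmxclient.jar) is on the classpath; the class name and credentials are placeholders of my own:

import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class OpenSessionsReader {
    public static void main(String[] args) throws Exception {
        // The Runtime MBean Server exposes the same MBeans the Metric Browser shows.
        JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
                "/jndi/weblogic.management.mbeanservers.runtime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");      // placeholder user
        env.put(Context.SECURITY_CREDENTIALS, "password1");   // placeholder password
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Same Type as in the watch rule string from part 2 of this series.
            ObjectName pattern = new ObjectName("com.bea:Type=WebAppComponentRuntime,*");
            for (ObjectName name : conn.queryNames(pattern, null)) {
                System.out.println(name.getKeyProperty("Name") + ": "
                        + conn.getAttribute(name, "OpenSessionsCurrentCount"));
            }
        }
    }
}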


Modifying the chart

 Once you have some metrics added to the chart, and you’ve clicked the green play button so that metrics are captured, you might find that you actually don’t want a line chart at all! Perhaps your data is much more suited to a bar chart.

To change the chart type, click the dropdown arrow next to the pencil icon. Clicking the pencil icon will allow you to change the name of the chart.


Adjusting the chart

Using the same dropdown arrow, you can choose to zoom in or out on the chart, or show earlier or later values.

Another, more intuitive way is to use the “mini-map” chart in the bottom right corner. The same zoom in/out and earlier/later arrows are there, but the miniaturised view of the chart is interactive. Click and drag to highlight peaks and troughs and the chart will update to show you that period of time in more detail.


Interpreting the data

Finally, it’s important to be able to interpret the data. On my example chart, I’ve circled in blue where an event clearly happens. Hovering over any of the data points will tell you exactly what the value is, as I’ve shown with the HeapFreePercent metric (the red triangles).

We can see that there is a garbage collection event there, because the red triangle data points, which represent how much of the total heap is free, dip below 10% and then leap up to about 33%.

What we can also see is that it was a young heap collection, thanks to the two half-moon shaped data points, in dark red and blue. The dark red line, which represents the total number of young heap collections since the server started, jumps up by 1; whereas the dark blue line, which represents old heap collections, stays at 0.

That’s really all there is to it! The key to making the dashboard really useful is spending the time to create meaningful charts, like a chart to monitor garbage collection and a chart to monitor a specific app.

There is a lot of power in being able to visualise data in this way. Consider this scenario: you set up a view for 10 of your critical apps, where apps 1–5 are on server A and apps 6–10 are on server B. If you can see that all of apps 1–5 start to respond very slowly, you can look in a view of server A to see if there are any stop-the-world pauses during that time, or if the heap is running low on memory.





Alternative Logging Frameworks for Application Servers: GlassFish


Introduction

Sometimes the default logger just isn't enough...
Welcome to the first instalment in what will be a four-part series on configuring application servers to use alternative logging frameworks. This first part covers GlassFish, and how to configure it to make use of Log4j, and SLF4J with Logback.


This blog was written using a 64-bit Linux Distro, GlassFish 4.0, Logback 1.1.2, SLF4J 1.7.7, Log4j 2.0, and web-app version 3.1, with all coding for the tests done in NetBeans 8.0.

Log4j

Assuming you have downloaded both GlassFish and Log4j, begin by copying the log4j-api-2.0.jar and log4j-core-2.0.jar JARs to your GlassFish lib directory: $GF_Home/glassfish/lib

Next we need to provide GlassFish with a config file for it to get the logging configuration from. We will take advantage of Log4j automatic configuration, and place the configuration file on the GlassFish domain’s classpath. To this end, create a file called log4j2.xml and place it in the domain’s classpath: $GF_Home/glassfish/domains/domain1/lib/classes

As a quick note, I am not just calling the config file log4j2.xml for clarity’s sake; it must be named this for automatic configuration to work. If Log4j cannot find an XML or JSON file with the name log4j2 on the classpath, it will fall back to a default configuration where log messages of level error are output to the console.
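If you're ever unsure whether Log4j found your file or silently fell back to the defaults, you can ask the runtime where its configuration came from. A small sketch using the log4j-core API (the class name here is hypothetical; run it anywhere the domain classpath is visible):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;

public class Log4jConfigCheck {
    public static void main(String[] args) {
        // The cast is safe whenever the log4j-core implementation is in use.
        LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
        // A null location means no log4j2 XML/JSON file was found and the
        // default error-to-console configuration applies.
        System.out.println("Configuration location: " + ctx.getConfigLocation());
        System.out.println("Configuration name: " + ctx.getConfiguration().getName());
    }
}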

Inside the config file we just created, copy and paste the below to provide a simple test logging configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="trace">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>
This will print out log messages of any level called from the application to the server.log log file of the domain or instance that the application is running on, only printing the hour and minute that the message was logged at (probably bad practice, but it helps to prove that it's working).

And that’s it! This configuration will apply itself to all instances created under this domain, allowing you to use Log4j with all of your applications.

Testing it Out

If you need something to test this with: create a servlet; import the log4j-core-2.0.jar and log4j-api-2.0.jar JARs into the project; and import Log4j into the servlet:
 import org.apache.logging.log4j.*;  
Then declare and initialise a logger:
 private static Logger logger = LogManager.getLogger(TestServlet.class.getName());
Finally, add some log statements inside the processRequest method:
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
{
    response.setContentType("text/html;charset=UTF-8");
    try (PrintWriter out = response.getWriter())
    {
        logger.trace("Tracing");
        logger.debug("Debugging!");
        logger.info("Info - something may be about to go wrong...");
        logger.warn("Warning! Upcoming error!");
        logger.error("Aaaaaaaahhhhhhhhhhhhhh!!!!!!!!!!!");
    }
}
Deploy and run the application (just create a simple web page that directs to it), and within the GlassFish server.log file you should see the following messages:
 [2014-07-29T10:58:10.511+0100] [glassfish 4.0] [INFO] [] [] [tid: _ThreadID=21 _ThreadName=Thread-3] [timeMillis: 1406627890511] [levelValue: 800] [[  
10:58 [http-listener-1(4)] TRACE testing.TestServlet - Tracing]]
[2014-07-29T10:58:10.511+0100] [glassfish 4.0] [INFO] [] [] [tid: _ThreadID=21 _ThreadName=Thread-3] [timeMillis: 1406627890511] [levelValue: 800] [[
10:58 [http-listener-1(4)] DEBUG testing.TestServlet - Debugging!]]
[2014-07-29T10:58:10.511+0100] [glassfish 4.0] [INFO] [] [] [tid: _ThreadID=21 _ThreadName=Thread-3] [timeMillis: 1406627890511] [levelValue: 800] [[
10:58 [http-listener-1(4)] INFO testing.TestServlet - Info - something may be about to go wrong...]]
[2014-07-29T10:58:10.511+0100] [glassfish 4.0] [INFO] [] [] [tid: _ThreadID=21 _ThreadName=Thread-3] [timeMillis: 1406627890511] [levelValue: 800] [[
10:58 [http-listener-1(4)] WARN testing.TestServlet - Warning! Upcoming error!]]
[2014-07-29T10:58:10.511+0100] [glassfish 4.0] [INFO] [] [] [tid: _ThreadID=21 _ThreadName=Thread-3] [timeMillis: 1406627890511] [levelValue: 800] [[
10:58 [http-listener-1(4)] ERROR testing.TestServlet - Aaaaaaaahhhhhhhhhhhhhh!!!!!!!!!!!]]
SLF4J with Logback

SLF4J is a façade, acting as the “middleman” between the actual logger and the application, and can be used with several logging frameworks (including the GlassFish default java.util.logging, and Log4j). The JAR files GlassFish will need for it to use SLF4J are (assuming you’re using the same versions):
  • logback-classic-1.1.2.jar
  • logback-core-1.1.2.jar
  • slf4j-api-1.7.7.jar
  • jul-to-slf4j-1.7.7.jar
The first two JARs are available from Logback, whereas the last two are included in the download for SLF4J. Place these JARs into the endorsed directory, which can be found under the lib directory: $GF_Home/glassfish/lib/endorsed

Create and place a configuration file named logback.xml in the domain’s config directory: $GF_Home/glassfish/domains/domain1/config

As we are not using the same method of automatic configuration as we were with Log4j, the config file does not need to match a prescribed name, or be placed on the classpath, so you can name and place it as you like (we specify the name and location later). Copy and paste the following into this newly created config file:
<configuration debug="true">
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>${com.sun.aas.instanceRoot}/logs/server.log</file>
        <append>true</append>
        <encoder>
            <Pattern>%d{HH:mm:ss} [%thread] %-5level %logger{52} - %msg%n</Pattern>
        </encoder>
    </appender>
    <root>
        <level value="TRACE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>
Next, we need to create a bridge between the default java.util.logging logger and SLF4J. In the logging.properties file (found under $GF_Home/glassfish/domains/domain1/config), under the “All attribute details” heading, do the following:
  • Change the handlers attribute to org.slf4j.bridge.SLF4JBridgeHandler
    • This is to bridge any java.util.logging log statements to SLF4J
  • Set com.sun.enterprise.server.logging.GFFileHandler.file to ${com.sun.aas.instanceRoot}/tmp/server.log
    • This is to stop any file conflicts arising from both GlassFish and SLF4J trying to write to the same file (only a few log lines will be written here when GlassFish is started, as it won't have loaded SLF4J yet).
  • Add the following attributes:
    • com.sun.enterprise.server.logging.GFFileHandler.formatter=com.sun.enterprise.server.logging.UniformLogFormatter
    • com.sun.enterprise.server.logging.GFFileHandler.alarms=false
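As an aside, the same java.util.logging-to-SLF4J bridge can also be installed programmatically, which is handy in standalone code or tests outside the server; a minimal sketch using the jul-to-slf4j API:

import org.slf4j.bridge.SLF4JBridgeHandler;

public class BridgeSetup {
    public static void main(String[] args) {
        // The code equivalent of the handlers change in logging.properties:
        // remove the default j.u.l. handlers, then route records through SLF4J.
        SLF4JBridgeHandler.removeHandlersForRootLogger();
        SLF4JBridgeHandler.install();
        java.util.logging.Logger.getLogger("demo").info("arrives via SLF4J now");
    }
}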
With our properties configured, we now need to instruct GlassFish to actually use them! Open up the admin console, navigate to the JVM Options tab of the JVM Settings page, under the server-config settings, and add the following options:
  • -Djava.util.logging.config.file=${com.sun.aas.instanceRoot}/config/logging.properties
  • -Dlogback.configurationFile=file:///${com.sun.aas.instanceRoot}/config/logback.xml
Restart GlassFish, and away you go! Unlike when using Log4j with GlassFish (directly), Logback will be used for almost all of the logging statements from GlassFish, not just those explicitly called from within the application. This convenience is not without cost, though; the java.util.logging to SLF4J bridge has to translate the LogRecord objects used in java.util.logging to the SLF4J equivalent. This causes a 20% performance overhead for enabled log statements, and a 6000% (!) overhead for disabled log statements. A solution exists for the massive cost of disabled log statements, however: add the LevelChangePropagator line shown below to the logback configuration file (logback.xml). This propagates changes made to the level of Logback loggers to the java.util.logging framework, meaning that only enabled log statements are sent over the SLF4J bridge (with the 20% overhead):
<configuration debug="true">
    <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator"/>
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
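Independently of the propagator, SLF4J's parameterized logging style also helps keep disabled statements cheap in application code, since the message is only assembled once the level check has passed. A small illustrative sketch (buildStateDump() is a hypothetical expensive call):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ParameterizedLogging {
    private static final Logger logger = LoggerFactory.getLogger(ParameterizedLogging.class);

    private static String buildStateDump() { return "..."; } // hypothetical expensive call

    public static void main(String[] args) {
        // The message is only formatted if DEBUG is actually enabled.
        logger.debug("Open sessions: {}", 42);
        // Guard explicitly when computing the argument itself is expensive.
        if (logger.isDebugEnabled()) {
            logger.debug("State: {}", buildStateDump());
        }
    }
}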
Note that, since we are specifying the location of the configuration and properties files as being under the instance root, the configuration detailed in this blog can currently only be used for the Admin Server; GlassFish will not properly propagate the settings to instances created under this domain. To use SLF4J for each of the instances created under this domain (using this current configuration), you must copy both the logging.properties and logback.xml files to the config directory of each instance (or place them in a location accessible by all instances that does not rely on the instanceRoot variable), as well as add the JVM Options that we added to the configuration settings of each of the instances (or add them to the default-config so any new instances have them set automatically upon creation).

Testing it Out

As before, if you need something to test this out with: create a servlet; import the logback-classic-1.1.2.jar, logback-core-1.1.2.jar, and slf4j-api-1.7.7.jar JARs into the project; and import SLF4J into the servlet:
 import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
Declare and initialise a logger:
 private static Logger logger = LoggerFactory.getLogger(TestServlet.class.getName()); 
And add the logging messages to the processRequest method:
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
{
    response.setContentType("text/html;charset=UTF-8");
    try (PrintWriter out = response.getWriter())
    {
        logger.trace("Tracing");
        logger.debug("Debugging!");
        logger.info("Info - something may be about to go wrong...");
        logger.warn("Warning! Upcoming error!");
        logger.error("Aaaaaaaahhhhhhhhhhhhhh!!!!!!!!!!!");
    }
}
Within the GlassFish server.log file, you should see not only our log messages, but the other debug messages as well:
11:24 [http-listener-1(1)] DEBUG org.glassfish.grizzly.websockets.WebSocketFilter - handleRead websocket: null content-size=0 headers=
HttpRequestPacket (
method=GET
url=/LogbackTest/TestServlet
query=testybutton=push+meh%21
protocol=HTTP/1.1
content-length=-1
headers=[
host=linux-njt2:8080
user-agent=Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0
accept=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
accept-language=en-gb,en;q=0.5
accept-encoding=gzip, deflate
referer=http://linux-njt2:8080/LogbackTest/
connection=keep-alive]
)
11:24 [http-listener-1(1)] DEBUG org.glassfish.grizzly.filterchain.DefaultFilterChain - after execute filter. filter=org.glassfish.grizzly.extras.addons.WebSocketAddOnProvider$GlassfishWebSocketFilter@28d6271 context=FilterChainContext [connection=TCPNIOConnection{localSocketAddress={/127.0.0.1:8080}, peerSocketAddress={/127.0.0.1:43194}}, operation=READ, message=org.glassfish.grizzly.http.HttpContent@40e01f50, address=/127.0.0.1:43194] nextAction=org.glassfish.grizzly.filterchain.InvokeAction@4f86bdf5
11:24 [http-listener-1(1)] DEBUG org.glassfish.grizzly.filterchain.DefaultFilterChain - Execute filter. filter=org.glassfish.grizzly.http.server.HttpServerFilter@1039ea06 context=FilterChainContext [connection=TCPNIOConnection{localSocketAddress={/127.0.0.1:8080}, peerSocketAddress={/127.0.0.1:43194}}, operation=READ, message=org.glassfish.grizzly.http.HttpContent@40e01f50, address=/127.0.0.1:43194]
11:24 [http-listener-1(1)] TRACE testing.TestServlet - tracing
11:24 [http-listener-1(1)] DEBUG testing.TestServlet - Debugggggggggggggging!
11:24 [http-listener-1(1)] INFO testing.TestServlet - infoooooooo
11:24 [http-listener-1(1)] WARN testing.TestServlet - warnings!
11:24 [http-listener-1(1)] ERROR testing.TestServlet - aaaaaaaahhhhhhhhhhhhhh!!!!!!!!!!!
11:24 [http-listener-1(1)] TRACE org.glassfish.grizzly.ProcessorExecutor - executing connection (TCPNIOConnection{localSocketAddress={/127.0.0.1:8080}, peerSocketAddress={/127.0.0.1:43194}}). IOEvent=NONE processor=org.glassfish.grizzly.filterchain.DefaultFilterChain@73f5289
Wrapping Up 
That’s it for the first instalment in this logging series, showing you that, albeit with a bit of tinkering, GlassFish can be configured to use other logging frameworks, potentially affording you greater logging flexibility. While you wait for the next in the series (covering WebLogic), check out some of our other blogs on GlassFish, such as How to install GlassFish on Ubuntu Touch, or our Introduction to Connection Pools in GlassFish.

Securing JBoss EAP 6 - Implementing SSL

Security is one of the most important concerns when running a JBoss server in a production environment. Implementing SSL and securing communications is a must if you want to avoid malicious use.

This blog details the steps you can take to secure JBoss EAP 6 running in Domain mode. These are documented by Red Hat, but the documentation seems a bit scattered; the idea behind this blog is to put everything together in one place.

In order to enhance security in JBoss EAP 6, SSL/encryption can be implemented for the following:
  • Admin console access – enable HTTPS access for the admin console
  • Domain Controller – Host Controller communication – communication between the main domain controller and all the other host controllers should be secured
  • JBoss CLI – enable SSL for the command line interface

The example below uses a single keystore acting as both the keystore and truststore, and uses CA-signed certificates.

You could use self-signed certificates and/or separated keystores and truststores if required.
  1. Create the keystores (a key pair for each of the servers):
     keytool -genkeypair -alias testServer.prd -keyalg RSA -keysize 2048 -validity 730 -keystore testServer.prd.jks
  2. Generate a certificate signing request (CSR) for the Java keystore:
     keytool -certreq -alias testServer.prd -keystore testServer.prd.jks -file testServer.prd.csr
  3. Get the CSR signed by the Certificate Authority.
  4. Import the root or intermediate CA certificate into the existing Java keystore:
     keytool -import -trustcacerts -alias root -file rootCA.crt -keystore testServer.prd.jks
  5. Import the signed primary certificate into the existing Java keystore:
     keytool -importcert -keystore testServer.prd.jks -trustcacerts -alias testServer.prd -file testServer.prd.crt
  6. Repeat steps 1–5 for each of the servers.
  7. To establish trust between the master and slave hosts:
     • Import the signed certificate of each slave server that the Domain Controller must trust into the Domain Controller's keystore (repeat for all slave hosts):
       keytool -importcert -keystore testServer.prd.jks -trustcacerts -alias slaveServer.prd -file slaveServer.prd.crt
     • Import the signed certificate of the Domain Controller into each slave host's keystore (repeat for all slave hosts):
       keytool -importcert -keystore slaveServer.prd.jks -trustcacerts -alias testServer.prd -file testServer.prd.crt

This has to be done because (as per Red Hat’s documentation):

"There is a problem with this methodology when trying to configure one way SSL between the servers, because the HCs and the DC (depending on what action is being performed) switch roles (client, server). Because of this, one way SSL configuration will not work, and it is recommended that if you need SSL between these two endpoints you configure two way SSL."

Once this is done, we now have signed certificates loaded into the Java keystore.
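Before wiring the keystore into JBoss, it is worth listing its contents to confirm that the signed certificate and every trusted certificate landed under the expected aliases:

keytool -list -v -keystore testServer.prd.jks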

In JBoss EAP 6, the http-interface, which provides access to the admin console, uses the ManagementRealm by default to provide file-based authentication (mgmt-users.properties). The next step is to modify the configuration in host.xml to make the ManagementRealm use the certificates we created above.

host.xml should be modified to look like this:

<management>
    <security-realms>
        <security-realm name="ManagementRealm">
            <server-identities>
                <ssl protocol="TLSv1">
                    <keystore path="testServer.prd.jks" relative-to="jboss.domain.config.dir" keystore-password="xxxx" alias="testServer.prd"/>
                </ssl>
            </server-identities>
            <authentication>
                <truststore path="testServer.prd.jks" relative-to="jboss.domain.config.dir" keystore-password="xxxx"/>
                <local default-user="$local"/>
                <properties path="mgmt-users.properties" relative-to="jboss.domain.config.dir"/>
            </authentication>
        </security-realm>
    </security-realms>

    <management-interfaces>
        <native-interface security-realm="ManagementRealm">
            <socket interface="management" port="${jboss.management.native.port:9999}"/>
        </native-interface>
        <http-interface security-realm="ManagementRealm">
            <socket interface="management" secure-port="9443"/>
        </http-interface>
    </management-interfaces>
</management>

On the slave hosts, in addition to the above configuration, the following needs to be changed:

<domain-controller>
    <remote host="testServer" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>

Once you make the above changes and restart the servers, you should be able to access the admin console via HTTPS:

https://testServer.prd:9443/console

Finally, in order to secure CLI authentication, modify /opt/jboss/jboss-eap-6.1/bin/jboss-cli.xml for each server and add:

<ssl>
    <alias>testServer.prd</alias>
    <key-store>/opt/jboss/jboss-eap-6.1/domain/configuration/testServer.prd.jks</key-store>
    <key-store-password>xxxx</key-store-password>
    <trust-store>/opt/jboss/jboss-eap-6.1/domain/configuration/testServer.prd.jks</trust-store>
    <trust-store-password>xxxx</trust-store-password>
    <modify-trust-store>true</modify-trust-store>
</ssl>


New Features and Changes in BPM 12c


Last week at the Oracle BPM 12c summer camp in Lisbon, I had a chance to deep-dive into the world of Oracle BPM Suite 12c, which went GA at the end of June. In this blog, I will discuss what I believe are the most notable changes in the BPM 12c product, some of which also impact SOA Suite 12c, since the BPM suite shares some components with the SOA suite, including the human workflow and business rules engines among others, as we can see from the diagram below. Furthermore, both the BPEL and BPMN service engines share fundamentally the same codebase.

(Diagram: components shared between the SOA and BPM suites)
There have been a wide variety of changes in this new major release which will affect a number of different BPM project stakeholders, including the BPM architect, process analyst and BPM developers. The key new features and changes in 12c which we will discuss are:

• The new BAM server runtime architecture
• New developer features


New BAM Server Architecture

(Diagram: BAM 12c server architecture)
There have been a number of notable changes to the architecture of the BAM server and its associated components in 12c in comparison to 11g. In 11g, we had an Active Data Cache component which acted as a cache for BAM data objects used by BAM dashboards.
In 12c, the ADC component has been replaced with Oracle Coherence, and the 11g event engine has been developed further into the Continuous Query Service. Once data objects have been updated in the persistence engine, the event data is passed to the Continuous Query Service (CQS), a query engine with the ability to listen to a data stream. Every time a change occurs, the CQS works out which queries are affected by the change and, in turn, which dashboards need to be updated, and pushes the information to the report caching engine, which pushes the result to the relevant views displayed in the associated dashboards. In 12c, the BAM composer and viewer now support multiple browser types, since the BAM front-end components now use ADF rather than Microsoft VML, which tied these BAM web components to Internet Explorer in 11g. There have been further improvements to the BAM server, which include the following:

• Ability to display business data in over 30 different business view types, including treemap, bubble, scatter and geo-map (preview only) view types.
• Due to the underlying architectural changes noted above, the BAM server now supports active-active cluster mode.
• Finer-grained security is enabled: query-, view-, dashboard- and row-level security.
• There are numerous preassembled BPM process analytics dashboards which come out of the box when the BPM suite is deployed. Note that you need to enable process metrics collection by setting the MBean property DisableProcessMetrics to false in the Fusion Middleware Control console for the BAM server.



New Features for Developers

There have been a number of new features introduced in BPM 12c which will aid those involved in the technical development of BPM projects and those attempting to diagnose BAM runtime issues, including:

• BPM Development Installer
• JDeveloper Debugger Utilities
• Detailed Diagnostics Tools for BAM

The 12c release provides users with a quickstart installer which allows one to install BPM 12c via a simplified installer. The installer contains an embedded Java DB to minimise the memory utilised by the BPM runtime and also JDeveloper. JDeveloper also now includes an integrated debugger utility which allows one to debug BPM projects and their associated graphical process components at runtime. The standard debugger features, such as being able to step in, step over, step out and resume, are part of the debugger utility.

To allow BPM project stakeholders to diagnose project issues on the BAM server, 12c provides a comprehensive BAM diagnostics framework which allows one to diagnose different parts of the BAM server, including diagnostics in the report cache, data control, composer and continuous query engine among others. Diagnostics can be enabled at a chosen level for specific components by setting the MBean properties DiagnosticEnabled, DiagnosticLevel and DiagnosticComponents to appropriate values. One can also monitor viewsets and the performance of the continuous query service using the BAM composer.

In this blog, we have discussed some of the new features and changes introduced as part of BPM 12c; however, there are many other changes featured in this release, including the introduction of user-friendly business rules (verbal rules), integration of Excel with the business rules editor, and the integration of some business architecture modelling features within BPM Composer, among others. For further details on BPM 12c, please visit http://www.oracle.com/technetwork/middleware/bpm/documentation/documentation-154306.html and https://blogs.oracle.com/bpm/entry/oracle_bpm_12c_now_ga


Alternative Logging Frameworks for Application Servers: WebLogic

Introduction
Welcome to the second in our blog series on using alternative logging frameworks with application servers. This entry will focus on WebLogic, specifically 12c, and configuring it to use Log4j and SLF4J with Logback.

If you missed the first part of this series, find it here: Part 1 - GlassFish

As ever, a small disclaimer with the environment I used: 64-bit Linux Distro, WebLogic 12.1.3, Logback 1.1.2, SLF4J 1.7.7, Log4j 2.0.2, and web-app version 3.1, with all coding for the tests done in NetBeans 8.0.

I'll be assuming you have created a basic WebLogic domain, so let's get started with Log4j.

Log4j
As when configuring GlassFish to use Log4j, we must copy the log4j-api-2.0.2.jar and log4j-core-2.0.2.jar files to the domain; download Log4j and copy these JARs to $MW_HOME/user_projects/domains/$domain_name/lib. From the $MW_HOME/wlserver/server/lib/ directory, copy the wllog4j.jar file over to the same domain lib directory as before. WebLogic automatically appends any JARs located in the domain's lib directory to the domain's classpath.

With that done, create a Log4j2 properties file and place it in the domain's root directory ($MW_HOME/user_projects/domains/$domain_name). I named mine log4j2.xml, and you can find it below (it's the same one that I used in the last blog); it's a simple configuration that outputs log messages of any level to the console:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="trace">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>
Configuration done! Simply tell WebLogic to use your configuration file by starting it with the following option: -Dlog4j.configurationFile=log4j2.xml

Like this:
./startWebLogic.sh -Dlog4j.configurationFile=log4j2.xml
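If you would rather not type the flag on every start, the standard start scripts typically pick up extra JVM arguments from the JAVA_OPTIONS environment variable, so something along these lines should also work:

export JAVA_OPTIONS="${JAVA_OPTIONS} -Dlog4j.configurationFile=log4j2.xml"
./startWebLogic.sh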
Testing
To check that it's working, let's use the same test as in my previous blog (if it ain't broke...), namely deploying a simple servlet that makes a call at each log level. Create a simple Java web application in NetBeans (or the IDE of your choice), add a servlet to the project, and do the following:
• Import the two Log4j JARs, log4j-api-2.0.2.jar and log4j-core-2.0.2.jar, into the project, and import the package into the servlet:
import org.apache.logging.log4j.*;
• Declare and initialise a logger:
private static Logger logger = LogManager.getLogger(TestServlet.class.getName());
• And alter the processRequest method to look like this:
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
{
    response.setContentType("text/html;charset=UTF-8");
    try (PrintWriter out = response.getWriter())
    {
        logger.trace("Tracing a possible error");
        logger.debug("Debugging a possible error");
        logger.info("Gathering info on the possible error");
        logger.warn("Warning you about the possible error");
        logger.error("The error is indeed an error");
    }
}
This will print out log statements when a call is made to the servlet. Add a button to the home page of your web application, index.html, to afford you a simple means of calling the servlet:
<html>
    <head>
        <title>Testing</title>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
    </head>
    <body>
        <form name="testForm" action="TestServlet">
            <input type="submit" value="push me!" name="testybutton" />
        </form>
    </body>
</html>
Build and deploy the application to WebLogic, and you should find the messages in the out log file of the server the application was run on, e.g. Server-0.out. For those of you new to WebLogic, these log files can be found at: $MW_HOME/user_projects/domains/$domain_name/servers/$server_name/logs

SLF4J with Logback
WebLogic comes bundled with a version of SLF4J, which can make things slightly difficult when attempting to use your own (likely more up to date) version of SLF4J with Logback. To do so, you essentially have to tell WebLogic to ignore its version of SLF4J, and use a version bundled with your application.

To this end, let's use the same application that we used for Log4j, though with the following modifications:
• Download SLF4J and Logback, and import the following three JARs into the project:
  • slf4j-api
  • logback-core
  • logback-classic
• Import the following two packages into the servlet:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
• And alter the logger initialisation to this:
private static Logger logger = LoggerFactory.getLogger(TestServlet.class.getName());
With the main application configured for Logback, we next need to create a weblogic.xml file under the WEB-INF directory of the project, and populate it with this:
<?xml version="1.0" encoding="UTF-8"?>
<wls:weblogic-web-app
    xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/ejb-jar_3_0.xsd
    http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd">

    <wls:container-descriptor>
        <wls:prefer-web-inf-classes>true</wls:prefer-web-inf-classes>
    </wls:container-descriptor>
</wls:weblogic-web-app>
As noted earlier, we need to tell WebLogic to use the version of SLF4J bundled with the application and its binding to Logback. The prefer-web-inf-classes tag does just this, and you will later see evidence of this in the logs; we are not overwriting the pre-existing SLF4J binding, we are including another with the application and telling WebLogic to prioritise that one.
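A quick way to confirm which binding actually won is to print the ILoggerFactory implementation class from within the deployed application; with the deployment-scoped Logback in use, you should see ch.qos.logback.classic.LoggerContext. A one-line sketch you could drop into the test servlet:

// Prints the concrete logger factory of whichever SLF4J binding was loaded.
System.out.println("SLF4J binding in use: "
        + org.slf4j.LoggerFactory.getILoggerFactory().getClass().getName());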

Build the project, but do not start WebLogic and deploy it yet; we still need to create a configuration file for WebLogic to use.

Create a Logback properties file, logback.xml, and place it in the domain root ($MW_HOME/user_projects/domains/$domain_name). Here is a basic one that just prints to the console:
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="TRACE">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
And with that, besides a start-up parameter, you're done! This will cause all log messages of any level to be printed to the server's out log file. Start WebLogic as below, and you're good to go!
./startWebLogic.sh -Dlogback.configurationFile=logback.xml
Wrapping Up
And so concludes the second part in this logging series, hopefully giving you a starting point for using Log4j or Logback with WebLogic. Next up on the list is WildFly, so stay tuned for that. Feel free to check out some of our other blogs on WebLogic in the meantime:
WebLogic - Dynamic Clustering in practice
WebLogic 12c Does WebSockets - Getting Started
Common WebLogic Problems



Alternative Logging Frameworks for Application Servers: WildFly

Introduction
Source: https://www.jboss.org/dms/wildfly_splash/splash_wildflylogo_small.png
Welcome to the third part of our blog series on configuring application servers to use alternative logging frameworks. This time we turn our gaze to WildFly, the open source application server from JBoss. Again, we'll be configuring it to use Log4j2 and SLF4J with Logback.

For those of you who are looking for parts 1 and 2 of this series, find them here:
Part 1 - GlassFish
Part 2 - WebLogic

For reference, the environment used for this blog was: a 64-bit Linux VM, JDK 8u20, Log4j 2.0.2, Logback 1.1.2, SLF4J 1.7.7, WildFly 8.1, and building/programming done using NetBeans 8.0.

As with the previous entries in this series, I'll be assuming you have set up WildFly before attempting to follow this blog. For a simple guide on this, check out JBoss' guide on clustering; it's pretty easy to follow and will get you set up with a basic environment if you don't have one already.

Following the norm of this series, let's begin with Log4j2.

Log4j2
Log4j2 is pretty easy to set up with WildFly on a per-deployment basis; WildFly does not officially support using your own loggers on a domain-wide basis, meaning that the configuration detailed in this blog will only apply to this application in particular (just as when using SLF4J and Logback with WebLogic, as described in my previous blog).

As we are configuring this on a "per deployment" basis (or per application if you prefer), we do not need to import any JARs into WildFly; we package them with the application. With this in mind, let's create a test application (if you've read the previous two blogs in this series, you'll recognise the program):
• Create a Server in NetBeans that maps to your WildFly installation (you'll need the WildFly plugin: Tools, Plugins, Available Plugins, WildFly Application Server)
• Create a Java Web application
• Create a Servlet in this project.
• Download Log4j and add the following two JARs into the project:
  • log4j-api-2.0.2.jar
  • log4j-core-2.0.2.jar
• Add the following import statement into your servlet:
import org.apache.logging.log4j.*;
• Declare and initialise the logger:
private static Logger logger = LogManager.getLogger(TestServlet.class.getName());
• Edit the processRequest method to this:
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
{
    response.setContentType("text/html;charset=UTF-8");
    try (PrintWriter out = response.getWriter())
    {
        logger.trace("Tracing!");
        logger.debug("Debugging!");
        logger.info("Information!");
        logger.warn("Warnings!");
        logger.error("Oh noes!");
    }
}
With the log messages done, edit the index.html page that NetBeans created for you to provide a button for us to call the servlet with:
<html>
    <head>
        <title>Testing</title>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
    </head>
    <body>
        <form name="testForm" action="TestServlet">
            <input type="submit" value="push me!" name="testybutton" />
        </form>
    </body>
</html>
With the programming done, we need to create a configuration file for Log4j to use. Shaking it up a little from the previous blogs in the series, we'll create a configuration file, log4j2.xml, that prints log messages of any level to a file in my home directory (replace the /home/andrew directory with wherever you want the log file to be stored):
• In the WEB-INF folder of your NetBeans project, create a folder called classes
• Create an XML file in here named log4j2.xml, and populate it with this:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <File name="FileLogger" fileName="/home/andrew/wildfly.log">
            <PatternLayout pattern="%d{HH:mm} [%t] %-5level %logger{36} - %msg%n"/>
        </File>
    </Appenders>
    <Loggers>
        <Root level="trace">
            <AppenderRef ref="FileLogger"/>
        </Root>
    </Loggers>
</Configuration>
Done! Build your application and deploy it to WildFly. Open the application in your browser, click on the button, and you should find a file created in your home directory with your log messages in it.

SLF4J and Logback
Just as before, to use our own SLF4J and Logback binding and configuration, we'll configure on a per-deployment basis. First things first, let's get our application to use Logback:
• Download SLF4J and Logback, and add the following JARs to your web application:
  • slf4j-api
  • logback-core
  • logback-classic
• Remove the Log4j package and instead import:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
• Alter the logger initialisation to:
private static Logger logger = LoggerFactory.getLogger(TestServlet.class.getName());
The processRequest method does not require any additional changes on top of how we've already changed it, though feel free to alter the messages for fun; I changed the error level message to "oh dears" for clarity, since my configuration had both applications writing to the same file (and if you're following this blog, so soon shall yours).

Create an XML file in the WEB-INF/classes directory called logback.xml, and fill it with this configuration (it performs the same function as the Log4j2 one):
<configuration>
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>/home/andrew/wildfly.log</file>
        <append>true</append>
        <encoder>
            <Pattern>%d{HH:mm} [%thread] %-5level %logger{52} - %msg%n</Pattern>
        </encoder>
    </appender>
    <root>
        <level value="TRACE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>
Unlike configuring WildFly with Log4j2, however, we are not yet done. We need to disable the add-logging-api-dependencies setting to stop WildFly from adding its implicit logging dependencies to our deployment. To do this, we need to use the JBoss CLI.

Navigate to the bin directory of WildFly, and run the following command (replacing $ip and $port with your own specific values):
jboss-cli.sh -c controller=$ip:$port --gui
This will bring up a window through which you can modify global WildFly settings. If you hadn't realised, WildFly will need to be running for you to run this command successfully. The setting we are after is in the logging subsystem. If you're in domain mode (like the guide I linked to at the top leads you to be in), you'll need to make the changes to the server group that you're deploying the application to.

Right click on the add-logging-api-dependencies setting, select write-attribute, and untick the value box. The cmd box at the top may catch your eye when you click OK, as it fills with a command; this is the command that you would have to fill in if you were using the command line. Click submit in the top right hand corner, and the output should display "success".
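For reference, the non-GUI equivalent is a single write-attribute operation. In standalone mode it would be the command below; in domain mode the same command is prefixed with the profile your server group uses (the profile name is yours to substitute):

/subsystem=logging:write-attribute(name=add-logging-api-dependencies, value=false)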

Restart WildFly and lo! You're ready to deploy!

Wrapping Up
You should now have a grounding in how to use alternative logging with WildFly, affording you more logging flexibility in your work and endeavours. The final entry in this series will cover Apache Tomcat, so check back for it soon. In the meantime, read some of our other blogs on WildFly and JBoss Server:

WildFly 8.0.0.Final is Released
Securing JBoss EAP 6 - Implementing SSL
How to controll *all* JTA properties in JBoss AS 7 / EAP 6


Configuring RBAC in JBoss EAP and WildFly - Part One

In this blog post I will look into the basics of configuring Role Based Access Control (RBAC) in EAP and WildFly.

RBAC was introduced in EAP 6.2 and WildFly 8, so you will need either of those if you wish to use RBAC.

For the purposes of this blog I will be using the following:

OS - Ubuntu 14
Java - 1.7.0_67
JBoss - EAP 6.3

Although I'm using EAP, these instructions should work just the same on WildFly.

What is RBAC?

Role Based Access Control is designed to restrict system access by specifying permissions for management users. Each user with management access is given a role, and that role defines what they can and cannot access.

In EAP 6.2+ and WildFly 8+ there are seven predefined roles, each of which has different permissions. Details on each of the roles can be found here:

In order to authenticate users, one of the three standard authentication providers must be used. These are:

Local User - The local user is automatically added as a SuperUser, so a user on the server machine has full access. This user should be removed in a production system and access locked down to named users.
Username/Password - using either the mgmt-users.properties file, or an LDAP server.
Client Certificate - using a trust store

For the purposes of this blog, and to keep things simple, we will use username/passwords and the mgmt-users.properties file.

Why do we need RBAC?

The easiest way to show this is through a practical demo.

Configuration can be done either via the Management Console or via the Command Line Interface (CLI). However, only a limited set of tasks can be done via the management console whereas all tasks are available via the CLI. Therefore, for the purposes of this blog I will be doing all configuration via the CLI.

In our test scenario we have 4 users:

Andy - This user is the main sys-admin and therefore we want him to be able to access everything.
Bob - This user is a lead developer and therefore will need to be able to deploy apps and make changes to certain application resources.
Clare & Dave - These users are standard developers and will need to be able to view application resources but should not be able to make changes.

First of all we will set up a number of users.

In order to do so we will use the add-user.sh script, which can be found in:

<JBOSS_INSTALL_DIR>/bin

Create the following users in the stated groups, entering No for the final question for all users (a scripted alternative is shown after the list):

Andy - no group
Bob - lead-developers
Clare - developers
Dave - developers
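If you would rather script the user creation than answer the prompts, add-user.sh also accepts the username, password and group as arguments; for example (the password here is a placeholder):

./add-user.sh -u Bob -p 'Password1!' -g lead-developers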

In <JBOSS_INSTALL_DIR>/domain/configuration you will find a file called mgmt-users.properties.

At the bottom of this file you will see a list of the users we've created, similar to this:

Andy=82153e0297590cceb14e7620ccd3b6ed
Bob=06a61e836d9d2d5be98517b468ab72cc
Clare=63a8ff615a122c56b1d47fc098ff5124
Dave=2df8d1e02e7f3d13dcea7f4b022d0165

In the same directory you will find a file called mgmt-groups.properties; at the bottom of this file you will see a list of users and the groups they are in, like so:

Andy=
Bob=lead-developers
Clare=developers
Dave=developers

Now point a browser at http://localhost:9990 and log in as the user Dave. Navigate around and you will see you have full access to everything.

This is precisely why RBAC is needed! Allowing all users not only to access the management console but also to access and alter anything is a recipe for disaster, and guaranteed to cause issues further down the line. Often users don't understand the implications of the changes they have made; it may just be a quick fix to resolve an immediate issue, but it may have long-term consequences that are not noticed until much further down the line, when the changes that were made have been forgotten about or were never documented. As someone who works in support, we see these kinds of issues on a regular basis, and they can be difficult to track down with no audit trail and users not realising that the minor change they made to one part of the system is now causing a major issue in some other part of the system.

OK, so we now have our users set up, but at the moment they have full access to everything. Next up we will configure these users and assign them to roles.

First of all, start up the CLI.

Run the following command:

<JBOSS_INSTALL_DIR>/bin/jboss-cli.sh -c

Change directory to the authorisation node:

cd /core-service=management/access=authorization

Running the following command lists the current role names and the standard role names, along with two other attributes:

ls -l

The two we are interested in here are permission-combination-policy and provider.

The permission-combination-policy defines how permissions are determined if a user is assigned more than one role. The default setting is permissive. This means that if a user is assigned to any role that allows a particular action then the user can perform that action.
The opposite of this is rejecting. This means that if a user is assigned to multiple roles then all those roles must permit an action for the user to be able to perform that action.
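If you want the stricter behaviour, the policy can be changed from the same CLI node, in the same way as any other attribute:

:write-attribute(name=permission-combination-policy, value=rejecting)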

The other attribute of interest here is provider. This can be set to either simple (which is the default) or rbac. In simple mode all management users can access everything and make changes, as we have seen. In rbac mode users are assigned roles, and each of those roles has different privileges.

Switching on RBAC

OK, let's turn on RBAC...

Run the following commands to turn on RBAC:

cd /core-service=management/access=authorization

:write-attribute(name=provider, value=rbac)

Restart JBoss.

Now point a browser at http://localhost:9990 and try to log in as the user Andy (who should be able to access everything).

You should see the following message:

Insufficient privileges to access this interface.

This is because at the moment the user Andy isn't mapped to any role. Let's fix that now:

                  If you look in domain.xml in the management element you will see the following:

                  <access-control provider="rbac">
                      <role-mapping>
                          <role name="SuperUser">
                              <include>
                                  <user name="$local"/>
                              </include>
                          </role>
                      </role-mapping>
                  </access-control>

                  This shows that at the moment only the local user is mapped to the SuperUser role.

                  Mapping users and groups to roles

                  We need to map our users to the relevant roles to allow them access.

                  In order to do this we need the following command:

                  role-mapping=ROLENAME/include=ALIAS:add(name=USERNAME, type=USER)

Where ROLENAME is one of the pre-configured roles, ALIAS is a unique name for the mapping, and USERNAME is the name of the user to map.

So, let's map the user Andy to the SuperUser role.

                  ./role-mapping=SuperUser/include=user-Andy:add(name=Andy, type=USER)

                  In domain.xml you will see that our user has been added to the SuperUser role:

                  <access-control provider="rbac">
                      <role-mapping>
                          <role name="SuperUser">
                              <include>
                                  <user name="$local"/>
                                  <user name="Andy"/>
                              </include>
                          </role>
                      </role-mapping>
                  </access-control>

Now point a browser at http://localhost:9990; you should be able to log in as the user Andy and have full access to everything.

                  Next we need to add mappings for the other roles we want to use.

                  ./role-mapping=Deployer:add
                  ./role-mapping=Monitor:add

                  Now we need to give role mappings to all our other users. As we have them in groups we can assign the groups to roles, rather than mapping by user.

The command is basically the same as for a user, but the type is GROUP rather than USER.

                  Here we are mapping lead developers to the Deployer role and standard developers to the Monitor role.

                  ./role-mapping=Deployer/include=group-lead-devs:add(name=lead-developers, type=GROUP)
                  ./role-mapping=Monitor/include=group-standard-devs:add(name=developers, type=GROUP)

                  If you look in domain.xml you should now see the following showing that the user Andy is mapped to the SuperUser role and the two groups are mapped to the Deployer and Monitor roles.

                  <management>
                      <access-control provider="rbac">
                          <role-mapping>
                              <role name="SuperUser">
                                  <include>
                                      <user name="$local"/>
                                      <user name="Andy"/>
                                  </include>
                              </role>
                              <role name="Deployer">
                                  <include>
                                      <group alias="group-lead-devs" name="lead-developers"/>
                                  </include>
                              </role>
                              <role name="Monitor">
                                  <include>
                                      <group alias="group-standard-devs" name="developers"/>
                                  </include>
                              </role>
                          </role-mapping>
                      </access-control>
                  </management>

                  You can also view the role mappings in the admin console.

• Click on the Administration tab.
• Expand the Access Control item on the left and select Role Assignment.
• Select the Users tab - this shows users that are mapped to roles.
• Select the Groups tab and you will see the mapping between groups and roles.

                  Log in as the different users and see the differences between what you can and can't access.

                  Conclusion

                  So, that's it for Part One. We have switched on RBAC, set up a number of users and groups and mapped those users and groups to particular roles to give them different levels of access.

In Part Two of this blog I will look at constraints, which allow more fine-grained permission setting; scoped roles, which allow you to set permissions on individual servers; and audit logging, which allows you to see who is accessing the management console and what changes they are making.



                  C2B2 Tech Days

                  C2B2 Labs is a scheme in which the middleware expert consultants at C2B2 Consulting test, develop and broaden their industry knowledge in order to provide customers with the most relevant, up-to-date expertise available on the market.

                  The latest Tech Day took place recently and the consultants gathered at C2B2’s head office in Malvern to get their hands dirty with their latest technical project. 

Led by Arvind Anandam, a Senior Consultant at C2B2, the team's agenda was to work out what would be involved in getting RHQ to monitor WebLogic - over a batch of bacon sandwiches.

                  We caught up with Arvind to get a few more details...


                  What were you trying to achieve at the Tech Day?


                  The agenda for the Tech Day was to work out what would be involved in getting RHQ to monitor WebLogic. We were hoping this would then lead to some tasks being defined so that we can write our own RHQ plugin for monitoring WebLogic.


                  Did you get the outcome you were looking for?


We only had a limited number of people due to illness, so we probably couldn't achieve as much as we had hoped. Here are some of the activities we did manage to complete:
• Set up RHQ on Amazon EC2 instances
• Set up WebLogic on Amazon EC2 instances
• Got RHQ monitoring WebLogic with the default JMX plugin
• Tried adding some custom plugins – faced a few issues with this; needs further investigation
• Looked at the possibility of writing our own Java plugin
Some work still needs to be done, but we can definitely use the groundwork we did as a starting point.


                  What do you think the benefit of C2B2 tech days are?


Tech Days give us the opportunity to meet face to face and share our skills and experiences. With different people we get different perspectives. They provide the opportunity to learn from each other, and give us a chance to play around with technologies that we might not necessarily get to use at a customer’s site.


                  So what do you do in your day to day role at C2B2?


At C2B2 we focus on a wide range of middleware products, ranging from application servers (WebLogic, JBoss, GlassFish, Tomcat) to in-memory data grids, messaging systems, cloud platforms etc., from different vendors including Oracle, Red Hat & JBoss, VMware & Pivotal, Terracotta, Apache and Amazon WS. Our expertise involves dealing with all the mentioned products from project inception to production operation.

                  If you would like to read more about Arvind’s expertise then you can look out for his blog posts on http://www.c2b2.co.uk/blog or check out any of the links below to view Arvind's most popular posts! 



                  For any questions about C2B2 Consulting or if you would like some more information, please contact us at marketing@c2b2.co.uk. We look forward to hearing from you!
                   Lauren Whiles

                  GlassFish is here to stay. We heard it from the Oracle.



                  We are at JavaOne 2014 and one of the key reasons for me to attend was to catch up on the future of GlassFish. So on Sunday I went along to the GlassFish community update at the Moscone Center to consult with the Oracle on the future of GlassFish.
                  The reason I go to JavaOne is to hear the definitive view on GlassFish and JavaEE futures from the people that make the decisions. There's no other conference you can say that about.

On the stand were the four Oracle guys who make the decisions on GlassFish: John Clingan, Product Manager for JavaEE and GlassFish; Mike Lehman, Product Manager for Cloud Application Framework; Cameron Purdy, VP Development; and Reza Rahman, Evangelist for JavaEE and GlassFish.

What I saw was that there is a roadmap for GlassFish out until JavaOne 2016 as JavaEE 8 develops, with GlassFish 5 being the reference implementation for JavaEE 8. GlassFish 5 will aim to be released as the final draft of JavaEE 8 hits the JCP.

Cameron spoke about GlassFish being a key research and development platform: much of the technology created in GlassFish to support the JavaEE specifications finds its way into WebLogic, and GlassFish has a key role in the evolution of JavaEE far into the future. Many of the key JavaEE specification developers are working on GlassFish as part of their JSR work, and that is a huge investment.

John reiterated that quality, stability and security are still important. The team continue to work to ensure that GlassFish passes all the JSR Compatibility Test Suites, and any issues will be fixed. In fact, the key priorities for the recent 4.1 release were Java 8 support, stability and quality. Also, much of the work invested into GlassFish for JavaEE 8 support will be shareable with WebLogic.

Mike spoke about how collaboration and community are core to GlassFish and JavaEE development. Much of the learning and innovation brought into GlassFish as part of JavaEE 8 is key to Oracle bringing JavaEE 8 compliance to WebLogic faster, with shared componentry envisaged between the two application servers - e.g. EclipseLink, Tyrus and Jersey all being shared components.

On the topic of community participation in GlassFish and JavaEE, Cameron emphasised the success of the Adopt-a-JSR programme and encouraged everybody to get involved in the JavaEE 8 JSRs, which will start to kick off now. Mike reiterated that the results of the JavaEE survey fed directly into the priority list for JavaEE 8, so community involvement is key to JavaEE 8. John added that GlassFish development is very much open for contributions - just sign the Oracle Contributor Agreement and away you go, with no barriers. FishCAT has also been a core quality project: show-stopper bugs were identified and fixed in GlassFish 4.1 via the community FishCAT programme.

                  All in all, in my opinion, the outlook from these top Oracle executives was very positive and the future of GlassFish looks to be a platform for rapid innovation in the latest JavaEE 8 goodies combined with providing developers with a quality open source platform for JavaEE.

                  I came away not only reassured that GlassFish has a future but also that it may have an exciting future.

                  MIDDLEWARE INSIGHT - C2B2 Newsletter Issue 19


                   

                  Featured News

                   

                  GlassFish is here to stay. We heard it from the Oracle - read more
                  JavaOne Keynotes Available for Streaming - read more  

                   


                    JAVAONE 2014

                   

                  JavaOne Keynotes Available for Streaming - see more at Java.net 
                  Final Keynotes Reflect Back, Move Forward - read more on the Oracle blog 
                  Oracle Shares Key Updates on Java Platform, Enterprise Edition, Introduces GlassFish Server Open Source Edition 4.1 at JavaOne 2014 - read more on MarketWatch 
                  JavaOne 2014 in Depth - read the article by Peter Pilgrim 
                  JavaOne 2014 Observations by Proxy - read more by Dustin Marx
                  The highlights of Java EE 8 - read more on the ServerSide 
                  Life around the Java Hub - read more on the Oracle Blog
                  User Group Sunday Kicks Things Off - read more on the Oracle Blog 

                   JAVA EE / OPEN SOURCE

                  GlassFish

                  GlassFish Server Open Source Edition 4.1 Released! – find out more 
                  GlassFish is here to stay. We heard it from the Oracle. – read more by Steve Millidge
                  Alternative Logging Frameworks for Application Servers: GlassFish - read more on the C2B2 Blog 
                  London GUG event: Testing Java EE Applications Using Arquillian & GlassFish by Reza Rahman – find out more and register 
                  Payara - The New Fish on the Block, 24/7 GlassFish support - find out more 
                  Other
                  2014 Duke's Choice Award Winners – read more on the Oracle Blog 
                  Java Magazine – read the latest issue here 
                  Apache Tomcat 7.0.56 has arrived - read more on Jaxenter 
                  5 JAX London sessions every developer should see - read more on Jaxenter 
                  Jsr107 come, code, cache, compute! -  see Steve Millidge’s JavaOne slides here ; see the code on GitHub 
                  Java EE 8 will not include Configuration JSR for Java EE applications – find out more here 
                  Apache Tomcat 8: What It Is, What You Need To Know - read more on the Pivotal Blog 
                  Apache Warns of Tomcat Remote Code Execution Vulnerability – read more on the Threat Post 
                  Devoxx 2014 Schedule is now available to view – find out more
                  Reducing the frequency of GC pauses - seemingly an easy task? – read more on The Server Side 
                  How to Get the Most Out of a Tech Conference - read more on the Oracle Blog 
                  TomEE Security Episode 1: Apache Tomcat and Apache TomEE Security Under the Covers –read more on the Tomitribe Blog 

                  ORACLE

                   

                  Welcome the New Oracle WebLogic Server 12.1.3 Release – read more by Emin Askerov 
                  Top 5 Oracle Mobile Application Foundation (MAF) Videos - read more on the Oracle Blog 
                  Is Oracle doing enough for Java 9? - read more on Jaxenter.com 
                  Early Access release of JDK 9 available - read more on Jaxenter.com 
                  Alternative Logging Frameworks for Application Servers: WebLogic – read more on the C2B2 Blog 
                  New Features and Changes in BPM 12c – read the article by David Winters 
                  Getting the most out of WLDF Part 4: The Monitoring Dashboard – read the article by Mike Croft 

                  JBOSS & RED HAT

                   

                  Continuous Delivery with Docker, Jenkins, JBoss Fuse and OpenShift PaaS – read more on the Java Code Geeks 
                  How to solve impossible business resource planning problems – read more on Eric Schabell’s blog 
                  Red Hat Launches Red Hat Cloud for Government to Help Agencies Build Strategies to Address Cloud Challenges - read more on the Business Wire 
                  Configuring RBAC in JBoss EAP and Wildfly, Part One - read more on the C2B2 Blog 
                  Alternative Logging Frameworks for Application Servers: WildFly - read the article by Andrew Pielage 
                  Securing JBoss EAP 6 - Implementing SSL - read the article by Arvind Anandam 
                  4 Foolproof Tips Get You Started With JBoss BRMS 6.0.3 – read more on Dzone
                  Use Byteman in JBoss Fuse / Fabric8 / Karaf – read the article by Paolo Antinori 
                  OpenShift v3 Platform Combines Docker, Kubernetes, Atomic and More - read more on the OpenShift Blog 

                    DATA GRIDS

                   

                  Hazelcast raises $11 million in funds for further NoSQL expansion - read more on Jaxenter.com  
                  Beginner’s Guide To Hazelcast Part 1- read the article by Daryl Mathison 
                  JSR-000107 JCACHE - Java Temporary Caching API Compatible Implementations – find out more on the JCP website 
                  Running Tabular Reports with Coherence 12.1.3 Reporter - read more on the Oracle blog 
                  In-Memory Data Grids: Why market needs them? - listen to the podcast with VPs here 
                  New eBook: In-Memory Data Grids for Dummies – find out more on the Oracle Blog 

                  Administering GlassFish with the CLI

It’s always a good time to brush up on the basics, and with the recent release of GlassFish 4.1 and the announcement of Payara Server, it’s an even better time. With that in mind, I'm going to go through a few of the things you can do with the GlassFish CLI. The Command Line Interface (CLI) is a means of controlling GlassFish from the command line (or terminal, if you prefer), allowing you to start, stop, or edit the server in a number of ways. Whilst for some the administration console is the go-to for any administration that needs doing, the CLI can be a quicker and easier way of performing administration tasks, particularly when dealing with headless servers (a server without a GUI).


                  This blog was written for GlassFish 4.1, so some of the commands may not be available in older versions.

                  Asadmin

                  The GlassFish CLI is used with the asadmin utility (as in "as-admin", it isn’t just gibberish). This can be found in the bin directory of your GlassFish install, as well as in the bin directory of any of your GlassFish domains. All GlassFish administration commands are sub-commands of this utility, and as such, each command must be prepended with this function (e.g. ./asadmin do-a-little-dance).

                  A handy tip to avoid prepending asadmin to each command is to call the asadmin function on its own (e.g. ./asadmin). This will run asadmin in an interactive mode, saving you a few keystrokes if you intend to execute several commands in sequence.

                  You can specify additional options for most commands in the CLI, some of which are mandatory for certain commands to execute. The list of options that a command takes, as well as a description of the command and examples, can be found by passing the --help parameter to the command (e.g. ./asadmin start-domain --help). Each of these options must be prepended with "--", though some options have a shortened version for which a single dash is sufficient. For example:

                   ./asadmin start-domain --debug true
                  is equivalent to:
                   ./asadmin start-domain -d

                  Starting and Stopping…
                  Domains
                  As you will likely know if you’ve already installed GlassFish, you can use the CLI to start and stop domains, clusters and instances; it's likely the first thing you did after installing GlassFish was to use the CLI to start the default domain:
                   ./asadmin start-domain  
                  The start-domain command assumes that there is only one domain, and will fail if multiple domains exist. In such a case, you must specify the domain to start as an additional parameter like this:
                   ./asadmin start-domain myDomain  
                  As previously noted, you can use --help to list the available parameters that each command can take. Doing so for the start-domain command reveals the following parameters: dry-run, domaindir, upgrade, debug, verbose, and watchdog. Of particular note are the debug and verbose options, which are very useful for debugging.

Stopping a domain is just as easy as starting it; the command is just stop-domain.
                   ./asadmin stop-domain  
                  As with when starting domains, if more than one domain exists, you must specify which domain to stop:
                   ./asadmin stop-domain domain1
                  Notable options for this command are force and kill. These options are useful to specify the shutdown behaviour of the domain. The force option determines whether the domain stops immediately (true), or waits for all threads to exit before stopping the domain (false). The kill option determines how the domain process is terminated. This defaults to false, meaning that the Java platform will terminate the process; specifying this option as true will cause the operating system to kill the domain process.
                   ./asadmin stop-domain --kill true domain1

                  Instances
There are two asadmin commands for starting GlassFish instances: start-instance and start-local-instance. The biggest difference between the two is that the start-local-instance command is designed solely for use with instances residing on the machine from which the command is being run; start-instance can be used to start both local and remote instances.
                   ./asadmin start-instance instance1
                   ./asadmin start-local-instance instance2  
Debugging is available for these commands with the --debug option, which starts the instance with debugging enabled. An additional debugging option is available to the start-local-instance command: --verbose. This allows you to have both start-up and log messages printed to the console when starting the instance, helping you to quickly see what is happening.

                  Another notable and useful option is --sync. This option takes one of three parameters: none, normal, and full. If using this option, you must specify one of the parameters; it is mandatory. The sync option specifies how the instance synchronises itself with the DAS (Domain Administration Server):
                  • The none parameter means that the instance will not synchronise itself with the DAS, so will not reflect any changes made to it since it was last synchronised. This can be useful if you know that there have been no changes made, and want the instance to start as quickly as possible.
• The normal parameter has the instance synchronise with any changes since it was last synchronised (which is typically the last time it was online). If you do not specify the --sync option, this is the behaviour that the start-instance or start-local-instance command will default to. As the command's help states, this parameter causes the instance to synchronise the config directory, and the changed subdirectories in the applications and docroot directories. An exception to the synchronisation of the subdirectories exists, however: only the subdirectories that have had a change at their top level will be synchronised; if a change is made lower down the file system hierarchy, but not at the top level, then the changes will not be synchronised.
• The final option, full, is not subject to this caveat. This synchronisation parameter will synchronise all files, even if they have not been changed. This is the most thorough way of ensuring that an instance is correctly synchronised, but comes with its own downsides: it is slower than the normal and none options, and requires the DAS to be running; otherwise the start-instance command will fail, and the instance will not be able to start until it contacts the DAS again.
                   ./asadmin start-local-instance --sync full --verbose true --debug true instance1  
Just like with stopping domains, the commands to stop an instance are well named: stop-instance and stop-local-instance. The difference between the two commands is the same as the difference between the start commands; the former can be used for local and remote instances, whereas the latter can only be used for local instances.
                  ./asadmin stop-instance instance1 
                  ./asadmin stop-local-instance instance2 
                   As with the stop-domain command, you can specify the force and kill options, which act in the same way as they do for the stop-domain command.

                  Clusters
                  The command to start a cluster is start-cluster. This command will start all of the instances in the cluster. If some of the instances in the cluster are already running, this command will start the remaining instances of the cluster.
                  ./asadmin start-cluster cluster1
                  This command does not have much in the way of options, only one of which will likely see common use: the --verbose option. This option acts as you might expect, printing information relating to the starting of each instance in the cluster to the console.

                  To stop a cluster, use the stop-cluster command:
                  ./asadmin stop-cluster cluster1
                  This command comes with a verbose option, which will display progress messages about the instances as they shut down. You can also specify that the shutdown behaviour of the cluster be to kill the process with the kill option. Unlike with the stop-domain, stop-instance, and stop-local-instance commands however, this command does not have the option to force the shutdown of its instances.

                  Creating...
                  Domains
                  Now things can get a little complicated. If you just want to create a new domain with default values, run this:
                  ./asadmin create-domain 
It will ask you to specify a name for the domain, and an administrator username and password. It will then list the default ports that it will use, and generate a self-signed certificate.

If you want to customise the domain a little, then there is a wealth of options available. Below are some notable ones; a worked example follows the list:
• adminport - This option specifies which port will be used for administration (defaults to 4848).
                  • instanceport - This option specifies which port will be used for the web application context roots (defaults to 8080).
• portbase - This option is used to determine port assignments, and cannot be used with the adminport, instanceport, or domainproperties options. The value specified for this option will be a base value, upon which the other required ports (e.g. the administration port) are calculated by a port-specific offset. So if the port base was specified as 500, the administration port would be 548 (500 + the administration port offset of 48). Each of the port offsets can be seen under the create-domain help.
                  • template - Useful if you already have a domain template that you wish to copy from machine to machine, this option allows you to specify a template that the domain will be created to match.
                  • usemasterpassword - For added security, you can protect the GlassFish keystore with a password. If not specified, then the password changeit is used; this default is fairly well known, so it is advised that you change it (get it?) when securing GlassFish.
                  • domainproperties - This option allows you to specify each port that will be used individually, rather than by relying on portbase. It is worth noting however that the adminport and instanceport options will override the admin and instance port specified by this command. The full list of ports that can be set are available in the command's help, though I'll list a few here: domain.adminPort, domain.instancePort, http.ssl.port, java.debugger.port. When specifying multiple values, you separate them with a colon e.g. domain.adminPort=3456:domain.instanceport=4567

                  Nodes
There is a command to create each kind of GlassFish node: create-node-config, create-node-dcom, and create-node-ssh. If creating a Config node with default settings, you only need to provide a name for the node. If creating an SSH or DCOM node, then you will need to provide both a name for the node and the host name of the machine that the node will be located on.
                  ./asadmin create-node-config configNode
                  ./asadmin create-node-dcom --nodehost example.domain.co.uk dcomNode
                  ./asadmin create-node-ssh --nodehost localhost sshNode
If the default values when creating a config node are not suitable, you can specify the node host, install directory, and node directory with the nodehost, installdir, and nodedir options respectively.
                  ./asadmin create-node-config --nodehost localhost --installdir /home/andrew/glassfish --nodedir /home/andrew/nodes configNode
                  These options refer to the machine that will host the node, the installation directory of GlassFish on that machine, and the directory under which the nodes are stored (this particular option can probably be omitted for most configurations). The default install directory is taken as the install location of the DAS; the default settings assume that GlassFish is installed in the same location on all hosts.

                  Due to the relative complexity of an SSH or DCOM node in comparison to that of a Config node, they respectively have more options that can be used to create and configure them. The full list of options can be seen by passing the help option (./asadmin create-node-ssh --help), but I'll cover a few for both here.

                  For SSH nodes:
                  • force - This will force the creation of the node in the DAS configuration, even if the parameters passed in for the creation of the node fail validation. This can be useful if you know the target node will not be able to be validated against, such as if the target node host is not currently available. This option is also available to the create-node-dcom command.
• sshport - If you are using a non-default SSH port (the default is 22), then use this option to specify which port is being used instead.
                  • sshuser - This specifies the user to use when connecting to the host through SSH, which by default is the user that is running the DAS.
                  For DCOM nodes:
                  •  windowsuser - This sets which Windows user to use to connect through DCOM. If not set, this defaults to the user running the DAS process.
                  • windowsdomain - This specifies the domain that the Windows user will be found under. This defaults to the host that the subcommand is run under.
                  •  install - This option specifies whether or not GlassFish will be installed on the node host specified. This option is also available to the create-node-ssh command.

                  Clusters
                  Clusters are created on their own in the CLI; instances have to be created and added to a cluster separately (see the next section). To create a cluster with default values, enter the create-cluster command and specify a name:
                  ./asadmin create-cluster cluster1
                  There are quite a few ways that the cluster can be configured when creating it, though a few of these are compatibility options, and so shouldn't be used. Possibly the most commonly used and useful option available to this command is --systemproperties. This option is similar to the domainproperties option of the create-domain command, as you use it to individually specify each of the ports that the cluster will listen on. When specifying multiple ports, separate the different ports out with a colon, like this:
                  ./asadmin create-cluster --systemproperties ASADMIN_LISTENER_PORT=666:JAVA_DEBUGGER_PORT=777 cluster1


                  Instances 
When creating instances, there are again two commands for doing this: create-instance and create-local-instance. As you can likely tell from the name, the create-local-instance command can only be run on the host on which the instance will reside; it cannot be used to create instances on remote machines. These commands are used to create all types of instances: shared, standalone, and clustered.
                  ./asadmin create-instance instance1
                  ./asadmin create-local-instance instance2 
                  The type of instance created is determined by the parameters passed when creating the instance: a shared instance is created by specifying an instance configuration with the --config option; a clustered instance is created by specifying a cluster for the instance to belong to with the --cluster option. 
                  ./asadmin create-instance --config instance-config-1 instance3
                  ./asadmin create-instance --cluster cluster12 instance4 
                  Just like when creating a cluster, you can specify the listen ports that the instance will use with the --systemproperties option.

                  To specify the node that an instance will belong to, use the --node option, like this:
./asadmin create-instance --node node1 instance5

                  Setting and Changing Passwords
                  You can use the CLI to manage the admin and master passwords of GlassFish. The two passwords are managed with the following commands:
                  ./asadmin change-admin-password
                  ./asadmin change-master-password
                  The change-admin-password command is specific to a domain, so has the following two options to allow you to specify the domain to use: domain_name, and domaindir. If multiple admin users exist, you can specify which user to change the password of like this:
                  ./asadmin --user admin change-admin-password
The change-master-password command has the following options: nodedir, which specifies the directory containing the node for which the password will be changed; domaindir, which specifies the domain directory used; and savemasterpassword, which dictates whether or not the master password will be saved to disk (which is pretty bad practice, if you weren't aware).

                  Deploying an Application
Deploying an application can be quite a complicated affair depending on your environment set-up. That said, deploying an application with the CLI can be significantly quicker than deploying it via the admin console if you know what you're doing!

                  To deploy an application to the default instance (server, where the admin console is installed), simply enter the following:
                  ./asadmin deploy $path_to_war
                  Once you have your GlassFish environment set up, you're not likely to be deploying to the default server instance, so the target option will likely be of use to you, as it specifies where your deployment will be deployed to. With this option, you can specify that the component be deployed to a standalone instance, a cluster, or a domain.
                  ./asadmin deploy --target cluster1 testyWar
                  ./asadmin deploy --target instance1 testyWar 

                  Wrapping Up
                  That's it for this introduction to the GlassFish CLI. You will have taken your first steps to navigating the technical wilderness that is the CLI, though the path stretches on for quite a way yet (for almost as long as I'm stretching this metaphor). Hopefully you can also see that the CLI can be a viable alternative to using the admin console to administer your GlassFish, rather than some needlessly complex thing used by people to show off!

                  Disabling SSLv3 in GlassFish 4.1

With the hubbub caused by the Poodle security flaw still hanging around, and its potential seriousness to several systems, now is a good time to show everyone how to disable SSLv3 and plug a security hole. As you can see from the title, I’ll be covering how to do this in GlassFish 4.1; specifically, I will show you how to do this through the admin console, using the CLI, and by changing the domain configuration file.
                  As the title says, I am writing this blog for GlassFish 4.1, so I cannot guarantee that any of the methods below are applicable to earlier versions. I'll be using a clean install of GlassFish, and describing how to disable SSL3 on the default listeners; I'm sadly not omniscient so I won't know the particular set up of every person on the planet.

                  Admin Console
Disabling SSL3 via the admin console is how I imagine many of you will do this if you have pre-existing domains. SSL3 is disabled individually on the HTTP listeners of your domain, which are found in the Configurations tree, so I hope you don't have too many configurations and listeners!

                  For each of your configurations:
                  • Navigate to the Protocols menu, which can be found under Network Config.
                  • For each of the listeners listed, click on the listener's name:
                    • Check if the Security box is ticked; you will not be able to disable SSL3 via the Admin console unless Security is already enabled.
                    • Assuming the Security box is checked, Navigate to the SSL tab, and uncheck the SSL3 box.
                    • Click on the Save button, and move on to the next listener.
                  Once you have disabled SSL3 for each of your listeners, you must restart your domain for the changes to take effect. Before you do so however, you'll also want to disable SSL3 for your IIOP listeners, so for each of your configurations:
                  • Navigate to your IIOP Listeners, which can be found under ORB.
                  • For each of your listeners, click on the name of the listener, then:
                    • Check if the Security box is ticked; you will not be able to disable SSL3 via the Admin console unless Security is already enabled.
                    • Assuming the Security box is checked, Navigate to the SSL tab, and uncheck the SSL3 box.
                    • Click on the Save button, and move on to the next listener.
                  • Restart your domain.
                  And that's it! This is probably the easiest way to disable SSL3, particularly if you're configuring a fresh GlassFish install; on a fresh GlassFish install, you will only have two configurations to edit, and any new instances created can copy their configuration from these.

                  Editing domain.xml
                  Disabling SSL3 via the domain configuration file is a bit more complicated, but can be quicker than doing it via the admin console once you know what you're doing, particularly if you have many HTTPS connectors or configuration groups.

                  The file that you'll be looking for is the domain.xml file, which can be found under $GF_INSTALL/glassfish/domains/$DOMAIN/config, with $GF_INSTALL being the directory that you installed GlassFish in, and $DOMAIN being the name of your domain (the default is domain1). Before beginning to edit the file however, it is best to stop your domain; whilst the domain is still active it is possible for it to make changes to the domain.xml file, which is not something you want happening whilst you are making changes yourself!

For those of you uninitiated in the way of the domain configuration file, the wall of text may seem quite daunting. Fear not, though: the name of the default HTTPS listener is http-listener-2, so you can make a beeline for it with your editor's find command. Inside this protocol's tags, you'll likely see something like this:
<protocol name="http-listener-2" security-enabled="true">
  <http max-connections="250" default-virtual-server="server">
    <file-cache></file-cache>
  </http>
  <ssl classname="com.sun.enterprise.security.ssl.GlassfishSSLImpl" cert-nickname="s1as"></ssl>
</protocol>
To disable SSL3, add the ssl3-enabled="false" attribute to the <ssl> element, so that it looks like this:
<protocol name="http-listener-2" security-enabled="true">
  <http max-connections="250" default-virtual-server="server">
    <file-cache></file-cache>
  </http>
  <ssl ssl3-enabled="false" classname="com.sun.enterprise.security.ssl.GlassfishSSLImpl" cert-nickname="s1as"></ssl>
</protocol>
                  If you have secure admin enabled, then the protocol that you'll want to do this for is the sec-admin-listener protocol.

                  As before, you'll also want to secure your IIOP ports. These will look slightly different, but SSL3 is disabled in exactly the same way. The default IIOP listeners with SSL enabled are called SSL and SSL_MUTUALAUTH, so use that find function again unless you fancy combing through the entire file. As I noted before, the method to disable SSL3 is the same as disabling it for HTTP listeners, so change this:
<iiop-listener address="0.0.0.0" port="3820" id="SSL" security-enabled="true">
  <ssl classname="com.sun.enterprise.security.ssl.GlassfishSSLImpl" cert-nickname="s1as"></ssl>
</iiop-listener>
<iiop-listener address="0.0.0.0" port="3920" id="SSL_MUTUALAUTH" security-enabled="true">
  <ssl classname="com.sun.enterprise.security.ssl.GlassfishSSLImpl" cert-nickname="s1as" client-auth-enabled="true"></ssl>
</iiop-listener>
                  To this:
<iiop-listener address="0.0.0.0" port="3820" id="SSL" security-enabled="true">
  <ssl ssl3-enabled="false" classname="com.sun.enterprise.security.ssl.GlassfishSSLImpl" cert-nickname="s1as"></ssl>
</iiop-listener>
<iiop-listener address="0.0.0.0" port="3920" id="SSL_MUTUALAUTH" security-enabled="true">
  <ssl ssl3-enabled="false" classname="com.sun.enterprise.security.ssl.GlassfishSSLImpl" cert-nickname="s1as" client-auth-enabled="true"></ssl>
</iiop-listener>
                  And you'll have disabled SSL3 from both listeners.

                  Be sure to disable SSL3 on the connectors for both configuration groups; if using the find function of your editor, then you should notice that the listeners are configured twice, once in the default-config, and again in the server-config.

                  CLI
As I noted in my previous blog (Administering GlassFish with the CLI), administering GlassFish by the CLI can be faster than using the admin console. In this case, unless you really know what you're doing (or are reading this guide!), it can be quite slow.

                  The command that you want is the set command. The problem comes in that to use this command, you must know the "dotted name" of the component you're trying to configure. As you can see below, these can be quite complex, but luckily you have this blog to tell you the dotted names!

                  For the HTTP listeners, the command is:
                   ./asadmin set server.network-config.protocols.protocol.http-listener-2.ssl.ssl3-enabled=false
                  And if you have secure admin enabled, be sure to disable SSL3 for the secure admin listener:
                   ./asadmin set server.network-config.protocols.protocol.sec-admin-listener.ssl.ssl3-enabled=false
                  For the IIOP listeners, the commands are:
                   ./asadmin set server.iiop-service.iiop-listener.SSL.ssl.ssl3-enabled=false
                   ./asadmin set server.iiop-service.iiop-listener.SSL_MUTUALAUTH.ssl.ssl3-enabled=false
                  Done! Told you it can be quicker!

                  Wrapping Up
                  The last thing you should do is test to make sure that your changes have successfully been applied. To do so, enter the following command at a terminal (assuming you're on Linux and have openssl), substituting the host and port for your own:
                   openssl s_client -connect localhost:4848 -ssl3
                   If your settings have correctly been applied, you should get a message like this:
                   CONNECTED(00000003)                                                                                            
                  139973006218896:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake failure:s3_pkt.c:598:
                  ---
                  no peer certificate available
                  ---
                  No client certificate CA names sent
                  ---
                  SSL handshake has read 0 bytes and written 0 bytes
                  And that's that! Hopefully this blog can help you secure your GlassFish against the flaws of SSL3 in whichever way you find easiest, and blunt the teeth of Poodle.

                  A GlassFish Alternative with Teeth - Get Voxxing

At last week's Devoxx, a brand new knowledge-sharing platform was unveiled, going by the name of Voxxed!

                  With links to Parleys and Devoxx, Voxxed plays host to the latest tech news, webinars, opinion pieces, interviews with the hottest names in tech and much more!

The site is packed with interesting information, from articles such as 'JavaEE and the EWoks' by Murat Yener to 'The Internet of Things Magic Show!' with Stephen Chin.

Steve Millidge, Founder and Director of C2B2 Consulting and Payara, did an interview with Voxxed in order to give a broader explanation of Payara:

                  'A GlassFish Alternative with Teeth'

In recent months, Middleware specialist C2B2 has turned its attention to the Java EE application server, developing a new solution named Payara, which is designed to push GlassFish “upstream.” Here, C2B2 Founder and Director Steve Millidge introduces this badass new solution.

                  Voxxed: Why was it so important to you to create a replacement for GlassFish when there are a number of Java EE solutions out there?

                  Steve: GlassFish is a great application server so by no means are we trying to replace it! When Oracle announced last year that they would no longer provide commercial support for GlassFish, we sat back and watched as numerous industry bloggers labelled GlassFish as ‘dead’ and ‘reduced to a toy’.

                  In our opinion this couldn’t be further from the truth and what most bloggers failed to mention was that Oracle announced that they will continue to release updates of GlassFish and it will remain the Reference Implementation for JavaEE. To us, this doesn’t sound like Oracle are looking to ‘kill’ GlassFish at all...

                  Click here to continue reading the full article

                  Remember, keep an eye out for more Payara and C2B2 news and Get Voxxing!

                  JavaOne 2014 - Impressions

It is a while since I have been to JavaOne; in fact, I have not been since the Oracle acquisition of Sun and the move of JavaOne from the Moscone Centre to the Hilton. My first observation is that it was a lot smaller than the old JavaOne, with fewer "tracks", and of those, fewer were focussed on the deep-down technical aspects and more were focussed on development frameworks. However, of the sessions that I attended, there were definitely some useful bits of information.


The most interesting session I attended (other than my own, which I will talk about later) was on the SAP JVM, which is a fork of HotSpot but with added features to improve supportability and diagnostics. One of the most interesting changes was that they have added a lot of information to the "core" exceptions that are often seen. So, for example, an IOException gives more details about the file that was being accessed, a ClassNotFoundException will tell you which classloader failed to find the class, etc. They have also added a lot of information to thread dumps, so each thread can have additional "annotation" information associated with it, and they use this to record the user that the thread is working on behalf of, the amount of IO that has been done, and any URLs the thread is currently working with. All of this adds up to a really nice-looking JVM which is unfortunately (due to licensing restrictions) only available for SAP applications. If SAP can find a way to overcome these restrictions (perhaps forking OpenJDK rather than HotSpot), I would have no hesitation in recommending that our customers consider this as their default JVM choice.

One of the other interesting sessions I attended was on Shenandoah, the garbage collector being written by Red Hat (don't worry too much, it's being written by ex-Sun employees who used to work on HotSpot). Shenandoah is a low-pause garbage collector that is looking to give lower and more consistent pauses than those offered by G1, although the motivating factor for Red Hat seems to be "because we can", and they didn't have a good set of acceptance criteria for when they would consider it ready. It is still in the early stages at the moment, and while it does deliver low pauses, it has a significant performance overhead: to allow compaction of live memory they have introduced a "read barrier" check on all reads, which significantly slows down the JVM. It is certainly an interesting project, but without a set of real performance figures, or customers that want to use Shenandoah (they were asking for customers to help them test it at the talk, as so far they have no real users testing it), I think it will struggle to achieve much significance. There were a number of Azul engineers at the talk who seemed to have tried all this before and decided it wasn't the most effective approach.

                  The most interesting talk was obviously mine ;) I was running a lab on JMX, which was "sold out" to 60 attendees, and I thoroughly enjoyed it. While I have given talks before I have never done so in front of an audience that was so engaged, and so receptive to the message I was giving. It was great to see developers that wanted to add monitoring to their application before things went wrong, and were getting excited about what to me is one of the best APIs in Java (but to most developers is of no interest whatsoever).

The exhibition area was smaller than it used to be, but there were a surprising number of monitoring products around. Most of them, though, seem to be coming at things from an end-to-end transaction monitoring approach - adding a request ID to the incoming request and propagating it all the way down into the database, then giving you timings of how long each process took. The odd thing was that none of these tools incorporated the existing JMX metrics exposed by containers into this approach, and talking to the people on the stands, none of them understood why that would be a useful thing to do. There were a couple of other interesting monitoring tools, but the one that really caught my eye was HeapStats http://icedtea.classpath.org/wiki/HeapStats (developed by NTT), which is more of a problem identification tool than monitoring. HeapStats provides real-time monitoring of the heap contents to look for problems. It seems an interesting approach, and while the tool is far from polished, it certainly shows promise.
                  Unfortunately this still leaves me in the position that none of the monitoring tools around fit what I feel a good Java monitoring tool should do - the ones that are trendy at the moment are gathering good information if you want to know why one particular transaction is slow in the database, but if the problem is that your AJAX requests are exhausting your mod_jk thread pool then you aren't going to see that, and you really need to.

                  I'd like to return to JavaOne next year, when hopefully there will be some deeper dive technical sessions, and with any luck, some of the monitoring tools will do both transaction tracing and JMX monitoring.


                  'Through the JMX Window' - Hands-on-Lab by Matt Brasier




                  DOAG 2014 Impressions

                  I am writing this blog from a coffee shop in Nuremberg, having just attended DOAG 2014 at the Nuremberg Conference Center. This was my second time talking at the DOAG conference and it seemed even busier than last time. DOAG is the German Oracle User Group, and the crowd at its annual conference is a wide mix of software developers, product managers and sales from a wide range of companies using the whole spectrum of Oracle products.

As with last year, the sessions were mostly in German, and while my German is good enough to have a simple conversation with someone when we are both trying to keep things simple and easy to understand, it isn't up to the level required to follow a complex technical session (I did try, attending an interesting-looking session on Identity Propagation in Oracle Fusion Middleware, but I only managed to follow about 10% of what was said). The number of English-language sessions in the middleware track was limited (just 12) and many of these clashed with each other, which was a shame. However, the sessions I did manage to attend were interesting.

                  There was a definite focus on Oracle SOA Suite, BPM and OSB talks this time around, with many of them being roadmap outlines presented by the Oracle product management teams (unfortunately heavily caveated with Oracle's safe harbour statement preventing me discussing the plans here). There were a few interesting how-to talks, one on how to use the user messaging service in SOA suite to notify users of items of interest, and another on integrating Oracle Event Processing and SOA suite. OEP is a product that I can see becoming more widely used as organisations seek to identify and act on patterns in the event streams generated by their SOA Suite applications.

                  My presentation was on the Thursday (luckily in the afternoon, giving people time to recover from the community party the night before) and was well attended, with two thirds of the seats taken. The topic was using WLST to create WebLogic domains (see slides below), and the audience was a mix of people who have already started down this road and wanted advice on best practices, through to people who had no experience of WLST. I had some interesting conversations with people after the presentation, and it was good to see so many people who want to take devops to its logical conclusion and include domain builds in their continuous build and integration suites.

I hope to be back for DOAG 2015, and already have some ideas for presentations that people may find interesting. If DOAG can continue to expand the English-language agenda (and schedule the English-language sessions in a track so that they don't clash), it has the potential to draw people from all around Europe.




                  Alternative Logging Frameworks for Application Servers: Tomcat

                  The final part of the blog series has finally arrived, this time covering Tomcat. Once more, we will be covering the basics on configuring it to use Log4j2 and SLF4J with Logback as the logger for a sample web application.

                  The previous entries in this series can be found by following these links:
The environment used for this blog was a 64-bit Linux VM, JDK HotSpot 8u25, Tomcat 8.0.15, Log4j 2.1, Logback 1.1.2, SLF4J 1.7.7, and NetBeans 8.0.1 as my IDE.

                  Under the assumption that you’ve already downloaded and installed all that you need, let’s begin…

                  Log4j2

We’ll configure Tomcat to use Log4j2 on a per-deployment basis, just like in the tutorial for WildFly. As with WildFly, we do not need to import any additional JARs into Tomcat itself; we can just package them with the application. NetBeans will actually do this for you, which makes the process even easier.
To keep things simple, let’s continue with the test application used in the previous blogs. For those coming in fresh, or for those who just want to start from scratch, here are the instructions:
                  • Create a Server in NetBeans that maps to your Tomcat installation. 
                  • Create a Java web application.
                  • Create a Servlet in this new project.
                  • Import the Log4j2 JARs to the project:
                    • Right click on the project, and select properties
  • Go to the Libraries page
                    • Select Add JAR/Folder, and import the following two files:
                      • log4j-api-2.1.jar
                      • log4j-core-2.1.jar
                    • Click OK
                  • Import the log4j package into your servlet by adding this code snippet below:
                   import org.apache.logging.log4j.*;  
                  • Declare and initialise a logger:
                   private static Logger logger = LogManager.getLogger(TestServlet.class.getName());  
                  • Add some logger messages at each log level to the processRequest method, so that they get executed when the Servlet is called:
 protected void processRequest(HttpServletRequest request, HttpServletResponse response)
         throws ServletException, IOException
 {
     response.setContentType("text/html;charset=UTF-8");
     try (PrintWriter out = response.getWriter())
     {
         logger.trace("Tracing!");
         logger.debug("Debugging!");
         logger.info("Information!");
         logger.warn("Warnings!");
         logger.error("Oh noes!");
     }
 }
Having a catch or finally block for our try clause would normally be expected, though to keep things basic we'll omit it (just don't make that excuse in your other code!).

• Edit the index.html page that was automatically generated when you created the project so that you can click on a button to call the servlet:
<html>
    <head>
        <title>Testing</title>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
    </head>
    <body>
        <form name="testForm" action="TestServlet">
            <input type="submit" value="push me!" name="testybutton" />
        </form>
    </body>
</html>
                    With our test application created, let’s begin configuring Log4j2. As we are taking advantage of the Log4j configuration discovery process to find the configuration file on the classpath, the file must be called log4j2.xml. In NetBeans:
                    • Expand the Web Pages folder of your test application in the project navigator pane.
                    • Right click on the WEB-INF folder, and create a new folder inside it called classes.
  • Right click on this new classes folder, expand the New menu, and select XML Document.
                    • Call it log4j2, and just create it as a well-formed document.
                    • Fill it out as follows (replacing the fileName value with your own file path and file name):
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <File name="FileLogger" fileName="/home/andrew/tomcat.log">
            <PatternLayout pattern="%d{HH:mm} [%t] %-5level %logger{36} - %msg%n"/>
        </File>
        <Console name="ConsoleLogger" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="trace">
            <AppenderRef ref="FileLogger"/>
            <AppenderRef ref="ConsoleLogger"/>
        </Root>
    </Loggers>
</Configuration>
This will log the messages we specified in the servlet to a custom file, and to the default Tomcat output log, catalina.out. To help differentiate them, the messages logged to the console will log with the hour, minutes, and seconds, whereas the messages logged to our own file will only have the hour and minutes.
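Incidentally, if you would rather not rely on classpath discovery, Log4j2's log4j-web module lets a web application point at a configuration file explicitly via a context parameter in web.xml. A minimal sketch, assuming log4j-web is packaged with the application (the path below is illustrative):

<!-- Requires log4j-web on the classpath; the location is illustrative. -->
<context-param>
    <param-name>log4jConfiguration</param-name>
    <param-value>/WEB-INF/classes/log4j2.xml</param-value>
</context-param>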

                    Testing

                    To test that everything is working as it should be, let’s give it a test run:
• Clean and build the project; this will add the two JARs needed by the logger (log4j-api-2.1.jar and log4j-core-2.1.jar) to the $project_install_location/build/web/WEB-INF/lib directory, so that Tomcat can use them.
                    • Right click on the project, and deploy it to Tomcat.
                    • Click on the play button, and your browser should load with the application.
• Click the button, and then check your log file and the catalina.out log file (found in $tomcat_install_location/logs); you should see your logger messages.
                      Take note: if you started Tomcat through NetBeans, then the log messages will not be output to the catalina.out file, instead being directed to the console output in NetBeans.

                      Logback

Conveniently, Logback can be configured to work with Tomcat in almost exactly the same way as Log4j2, the only difference being the code syntax. I’ll go through the whole process again for those of you who are jumping straight to this bit:
                      • Create a Server in NetBeans that maps to your Tomcat installation.
                      • Create a Java Web application.
                      • Create a Servlet in this new project.
                      • Import the Logback and SLF4J JARs into the project:
                        • Right click on the project, and select properties
    • Go to the Libraries page
                        • Select Add JAR/Folder, and import the following files:
                          • slf4j-api-1.7.7.jar
                          • logback-classic-1.1.2.jar
      • logback-core-1.1.2.jar
                        • Click OK
                      • Import the following SLF4J packages into your servlet:
                       import org.slf4j.Logger;
                      import org.slf4j.LoggerFactory;
                      • Declare and initialise a logger:
                       private static Logger logger = LoggerFactory.getLogger(TestServlet.class.getName());
                      • Log some messages at various levels in the processRequest method, such that they get triggered when the Servlet is called:
 protected void processRequest(HttpServletRequest request, HttpServletResponse response)
         throws ServletException, IOException
 {
     response.setContentType("text/html;charset=UTF-8");
     try (PrintWriter out = response.getWriter())
     {
         logger.trace("Tracing!");
         logger.debug("Debugging!");
         logger.info("Information!");
         logger.warn("Warnings!");
         logger.error("Oh dears!");
     }
 }
                      Again, we should really put a catch or finally block for the auto-generated try, but given that we're keeping this as basic as possible we'll omit it.

                      As before, let’s edit the index.html page that was automatically generated for us so that you can click on a button to call the servlet:
<html>
    <head>
        <title>Testing</title>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
    </head>
    <body>
        <form name="testForm" action="TestServlet">
            <input type="submit" value="push me!" name="testybutton" />
        </form>
    </body>
</html>
                      Next, let’s create a logback configuration file that prints out our logger messages to a custom file, and to the catalina.out file. In NetBeans:
                      • Expand the Web Pages folder of your test application in the project navigator pane.
                      • Right click on the WEB-INF folder, and create a new folder inside it called classes.
• Right click on this new folder, expand the New menu, and select XML Document.
                      • Call it logback, and create it as a well-formed document; we don’t need any schema or DTD constraints for this.
                      • Populate it with this (replacing /home/andrew/tomcat.log with your own file path and file name):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>/home/andrew/tomcat.log</file>
        <append>true</append>
        <encoder>
            <pattern>%d{HH:mm} [%thread] %-5level %logger{52} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="TRACE">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>
Done! The only thing left to do is to build, deploy, and test the application.

                      Testing

                      Follow these instructions to build, deploy, and test the application:
• Clean and build the project; this will add the JARs needed by the logger (logback-classic-1.1.2.jar, logback-core-1.1.2.jar, and slf4j-api-1.7.7.jar) to the $project_install_location/build/web/WEB-INF/lib directory, so that they deploy to Tomcat with the application.
                      • Right click on the project, and deploy it to Tomcat.
                      • Click on the play button, and your browser should load with the application.
• Click the button, and then check the log file you specified in the Logback configuration and the catalina.out log file (found in $tomcat_install_location/logs); you should see your logger messages.
                      As I noted before, if you started Tomcat through NetBeans, then the log messages destined for the catalina.out file will instead be directed to the console output in NetBeans.

                        Final Thoughts

This is the last in this series, having covered GlassFish, WebLogic, WildFly, and now Tomcat: the “big four” Java EE application servers. I hope you've found these tutorials helpful in getting started with alternative logging frameworks on application servers; they do provide some benefits, after all!

                        Good luck from here!

                          Clustering WebSockets on Wildfly


                          Brief

This is the first of a two-part blog about how to assemble an application that uses WebSockets on WildFly in a clustered environment on Amazon EC2. There are five main sections, which break down as follows:


                          • Architecture set up
                          • WildFly clustering configuration on EC2
                          • WebSocket application on WildFly
                          • Storing and ensuring high availability of data using Infinispan
                          • Apache load balancer configuration

                          This first part will be focusing on the first three points listed above. In the second part I will detail how to set up Infinispan and configure the Apache load balancer.

                          Introduction


To quote the lovely people at Mozilla, 'WebSockets is an advanced technology that makes it possible to open an interactive communication session between the user's browser and a server'. In more technical terms, it allows two-way communication between a client and a server over a single TCP connection. It is primarily designed to be used in a web application, i.e. with a browser-based front-end; however, it is possible to use it in any client-server application as well.

                          WebSockets are one of the new features in Java EE 7 (JSR-356), so we thought it would be a nice idea to build a simple web application that made use of WebSockets in a clustered environment.
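As an aside, the client does not have to be a browser: JSR-356 also defines a client API. Here is a minimal sketch of a Java client (the class name and URL are hypothetical, and running it standalone would need a JSR-356 client implementation such as Tyrus on the classpath):

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

// Hypothetical JSR-356 Java client; the endpoint URL is illustrative.
@ClientEndpoint
public class EchoClient {

    @OnMessage
    public void onMessage(String message) {
        System.out.println("Server said: " + message);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        Session session = container.connectToServer(EchoClient.class,
                URI.create("ws://localhost:8080/myapp/store"));
        session.getBasicRemote().sendText("{\"key\": \"a\", \"value\": \"b\"}");
        Thread.sleep(1000); // crude wait for the async reply in this sketch
        session.close();
    }
}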

                          For this example, we will use the WildFly Application Server, which is the free open source Java EE 7 platform available from JBoss. We will be running this entire demo on Amazon EC2.

                          Architecture set-up


The following is the set-up we intend to use for the purposes of this application.


Both of the WildFly instances and the httpd one are set up on their own individual virtual machines on EC2. There are three main reasons for this set-up:
                          1. We only need to expose the public IP address of the httpd server, thus we don't need to get a public IP for any of the WildFly instances.
                          2. We can add or remove more WildFly servers to the cluster if needed and not have to worry about the public facing IP address of the application.
                          3. Since there is one application running, we don't need to run the server in domain mode, and standalone is sufficient.


                          WildFly clustering configuration


Below is an example of how to configure JGroups for clustering on WildFly; you would add this to either standalone-ha.xml or standalone-full-ha.xml.


<subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="tcp">
    <stack name="udp">
        <transport type="UDP" socket-binding="jgroups-udp"/>
        <protocol type="PING"/>
        <protocol type="MERGE3"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
        <protocol type="FD_ALL"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="pbcast.NAKACK2"/>
        <protocol type="UNICAST3"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="UFC"/>
        <protocol type="MFC"/>
        <protocol type="FRAG2"/>
        <protocol type="RSVP"/>
    </stack>
    <stack name="tcp">
        <transport type="TCP" socket-binding="jgroups-tcp"/>
        <protocol type="TCPPING">
            <property name="initial_hosts">$YOUR_FIRST_IP_ADDRESS[7600],$YOUR_SECOND_IP_ADDRESS[7600]</property>
            <property name="port_range">0</property>
            <property name="timeout">3000</property>
            <property name="num_initial_members">2</property>
        </protocol>
        <protocol type="MERGE2"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
        <protocol type="FD"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="pbcast.NAKACK2"/>
        <protocol type="UNICAST3"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="MFC"/>
        <protocol type="FRAG2"/>
        <protocol type="RSVP"/>
    </stack>
</subsystem>

                          There are a few points to note here as compared to the default configuration that ships with the WildFly zip:

                          • On EC2, and other similar cloud providers, UDP multicast is disabled. Enabling multicast on a public cloud would mean a significantly higher number of messages being sent across the cloud infrastructure, resulting in a large performance hit to the service being provided. As a result, for the JGroups configuration on WildFly, we will use the TCP stack by default, and configure TCPPING.
• To configure TCPPING, we need to add the IP addresses of the WildFly instances that will be present on start-up; this is how members join the cluster. We also specify the port to be 7600.
• Finally, set the number of initial members to 2 (the matching socket-binding entries are sketched below).
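For reference, here is a sketch of the socket-binding entries these stacks refer to. The values below match the WildFly defaults in standalone-ha.xml, but verify them against your own configuration:

<!-- In the socket-binding-group; jgroups-tcp must line up with the
     [7600] port used in the TCPPING initial_hosts list above. -->
<socket-binding name="jgroups-tcp" port="7600"/>
<socket-binding name="jgroups-tcp-fd" port="57600"/>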

                          WebSocket application


For the actual application, we will start by looking at the set-up for the front-end. All this app does is store a key/value pair (both Strings); using that same key, we can then pull the value back out.

                          Below is a snippet of the main index.html page, which will produce two forms. These forms are used to either input a key/value pair or to simply get a value using a key.


<!-- Input form -->
<form>
    <fieldset>
        <label>Store:</label>
        <input id="putKey" type="text">
        <input id="putValue" type="text">
        <input type="submit" id="store" value="StoreMe">
    </fieldset>
</form>

<h4>Result from the submit operation:</h4>
<span id="result"></span>


<br/>
<br/>
<!-- Get form -->
<form>
    <fieldset>
        <label>Get:</label>
        <input id="getKey" type="text">
        <input type="submit" id="getValue" value="GetMe">
    </fieldset>
</form>

<h4>Result from the get operation:</h4>
<span id="getResult"></span>

                          The inputs from these forms will be processed by some JavaScript code, which does the following:
                          • Create two different WebSocket objects; one for storing and one for getting. These will have different endpoints associated with them - i.e. one has the resource 'store' and the other has 'getter'.
                          • Take the input from the forms, and then send that input using the WebSockets to the backend.
                          • Take the output from the server, and attach that message to the HTML. (See ws.onmessage and wsGet.onmessage)


<script>
    var port = "";
    var storerUrl = 'ws://' + window.location.host + port + window.location.pathname + 'store';
    var ws = new WebSocket(storerUrl);

    var getterUrl = 'ws://' + window.location.host + port + window.location.pathname + 'getter';
    var wsGet = new WebSocket(getterUrl);

    // Note: the standard WebSocket event for a successful connection is
    // 'onopen' (an 'onconnect' handler would never fire).
    ws.onopen = function(e) {
        console.log("Connected up!");
    };

    ws.onerror = function(e) {
        console.log("Error somewhere: " + e);
    };

    ws.onclose = function(e) {
        console.log("Host has closed the connection");
        console.log(e);
    };

    ws.onmessage = function(e) {
        document.getElementById("result").innerHTML = e.data;
    };

    wsGet.onopen = function(e) {
        console.log("Connected up!");
    };

    wsGet.onerror = function(e) {
        console.log("Error somewhere: " + e);
    };

    wsGet.onclose = function(e) {
        console.log("Host has closed the connection");
        console.log(e);
    };

    wsGet.onmessage = function(e) {
        console.log("Received message from WebSocket backend");
        document.getElementById("getResult").innerHTML = e.data;
    };

    document.getElementById("store").onclick = function(event) {
        event.preventDefault();
        var key = document.getElementById("putKey").value;
        var value = document.getElementById("putValue").value;
        // JSON.stringify quotes the values, keeping the payload valid JSON.
        ws.send(JSON.stringify({key: key, value: value}));
    };

    document.getElementById("getValue").onclick = function(event) {
        event.preventDefault();
        var key = document.getElementById("getKey").value;
        wsGet.send(JSON.stringify({key: key}));
    };

</script>

That's all we require in order to set up the front-end for this application. Again, it's important to note that this is a simple app demonstrating how to integrate WebSockets into your Java EE application. Next, let's look at how we can use WebSockets in our Java backend to handle this input. First, the Storer class.




import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Map;
import java.util.logging.Logger;

import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// JsonParserFactory and JSONParser come from the third-party JSON parser
// linked in the notes below; adjust these imports to match your parser.

@ServerEndpoint("/store")
public class Storer {

    private static Logger storerLogger = Logger.getLogger(Storer.class.getName());

    /**
     * Method that will store some basic data using the application platform's
     * clustered caching mechanism.
     *
     * @param data - The information, as a JSON from the js/html front-end.
     * @param client - the client session
     */
    @OnMessage
    public void store(String data, Session client) {
        String returnText = null;

        // Let's get the Json object first.
        JsonParserFactory factory = JsonParserFactory.getInstance();
        JSONParser parser = factory.newJsonParser();
        Map jsonMap = parser.parseJson(data);

        if (jsonMap.containsKey("key") && jsonMap.containsKey("value")) {
            // Store the data in the cache now that we have validated it.
            String key = (String) jsonMap.get("key");
            String value = (String) jsonMap.get("value");
            // TODO: Not actually storing anything yet!

            // Creating some string returns to go back to the front-end along
            // with some logging statements.
            StringBuilder sb = new StringBuilder();
            sb.append("Well done. We now have your data!");
            storerLogger.info("Going to store key " + key + " and value " + value + ".");
            sb.append(" Key: ").append(key).append(" Value: ").append(value).append(". ");
            sb.append(" Stored at date: ").append(buildDate()).append(".");

            returnText = sb.toString();
        } else {
            // Failed the validation check. So we are now going to log some
            // information to the server and return some information to the
            // front-end as well.
            storerLogger.info("Seems to be a problem with the input. Cannot find" +
                    " the appropriate \'key\' and \'value\' string keys.");
            returnText = "Problem with the input that you sent in. Are they just" +
                    " the default values?";
        }

        // Send to client.
        client.getAsyncRemote().sendText(returnText);
    }

    private String buildDate() {
        return new SimpleDateFormat("HH:mm dd/MM/yy").format(Calendar.getInstance().getTime());
    }

}

                          There are some important points to note over here:


                          • The @ServerEndpoint class level annotation tells the application container, Wildfly in this case, that this class will be dealing with WebSocket messages to the endpoint '/store'. Going back a little bit, in our HTML form, when we submit the key and value that we want to store, that input will be taken by this Java class.
• A method with the @OnMessage annotation will be called when a WebSocket sends a message to this endpoint. Any class which has a @ServerEndpoint annotation can only have one @OnMessage annotation. This method will take in a String parameter (our input) and a javax.websocket.Session object.
                          • In this case, we are using a simple JSON parser available here. It will parse our input into a Map object and then we can take out the keys and values which we require (as long as they exist!).
                          • After storing the key/value pair, we can then return back to the front-end.
                          • We use the Session object to send a message back to the front-end using the same WebSocket connection.
                          • WE ARE NOT ACTUALLY STORING ANYTHING IN THIS CLASS AS YET.

                          As we can see, we would need a separate class for our Getter object. This would follow a similar path except it would use a different endpoint on the @ServerEndpoint annotation, and would expect a slightly different format of input. Below is how the Getter class has been set up.


import java.util.logging.Level;
import java.util.logging.Logger;

import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/getter")
public class Getter {

    static Logger getterLogger = Logger.getLogger(Getter.class.getName());

    /**
     * Method that will attempt to get a value for a given key.
     *
     * @param data - A String that is the key
     * @param client - the client session
     */
    @OnMessage
    public void getValue(String data, Session client) {
        getterLogger.info("Getting key: " + data);

        String returnText = "{No value found for key " + data + "}.";
        try {
            returnText = getValueFromCache(data);
        } catch (Exception e) {
            returnText = e.getMessage();
            getterLogger.log(Level.WARNING, e.getMessage());
            e.printStackTrace();
        }

        client.getAsyncRemote().sendText(returnText);
    }

    // Stub for now -- see the note below; part two wires this up to Infinispan.
    private String getValueFromCache(String key) throws Exception {
        return "No value found for key " + key + " (cache not wired up yet).";
    }

}



                          • The getValueFromCache() method is a dud here and doesn't do anything yet.

                          And that's it! 

That's all for the first part of this blog looking at how to set up a clustered WebSocket application on WildFly running on Amazon EC2. Going forward, we will look at how to use Infinispan to keep our data available on fail-over, and at how to put a load balancer in front of our two WildFly instances.

                          Thanks for reading! 

                          Navin Surtani
                          C2B2 Support Engineer

                          Guerilla JMX Monitoring

Very often in the course of work at C2B2, we find that customers are running their middleware with no monitoring in place. Often there is some monitoring, but it is at the OS level rather than the JVM level, which is incredibly limiting: you will know how much RAM the JVM is taking up, but you have no idea whether the heap is 20%, 50% or 90% full.

                          This lack of insight always takes me by surprise, since monitoring is one of the first things we recommend to set up for any business. Why wouldn't you want to know how your business-critical infrastructure is doing in advance of that 3am phone call? Why wouldn't you want to have a week of data to pore over telling you exactly what went wrong at 2:48am leading to the outage, and how to put it right and prevent it happening again before your morning coffee?

                          The advantages of having a good monitoring solution in place are huge, and the benefits just get compounded. Black Friday is now, officially, a "thing" in the UK. Thanks, Amazon. This year saw a lot of businesses underestimating Black Friday and getting it very wrong on the most lucrative day of the year!

                          Having good monitoring in place enables such wonderful cloud-based goodies as auto-scaling, meaning you can react to these kinds of situations and increase your capacity to cope with all the extra transactions.

                          But what if things go wrong and you have no monitoring in place? That's exactly what happened to one customer of ours recently.

The scenario was fairly simple. Hundreds of ActiveMQ brokers were in place in remote offices, where applications would queue messages to local queues. Those messages were then bridged to a central ActiveMQ broker, in a hub-and-spoke topology.

                          The problem was that remote queues would get backed up and messages wouldn't be bridged across to the central broker.

To find out what was going on with the ActiveMQ broker, we used the Java Management Extensions (JMX) API. Below is the Java class I put together for that purpose, to read data from MBeans exposed over JMX (written for Java 6):


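The original listing isn't reproduced here, so below is a minimal sketch of the kind of class described. The names (JmxCsvSampler, the attribute list) are hypothetical, and the broker ObjectName should be adjusted to match your ActiveMQ version:

import java.io.FileWriter;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Hypothetical reconstruction: polls a handful of broker attributes over JMX
// and appends them as one CSV row per run.
public class JmxCsvSampler {

    private static final String BROKER_MBEAN =
            "org.apache.activemq:type=Broker,brokerName=localhost";
    private static final String[] brokerAttributes =
            {"TotalMessageCount", "TotalEnqueueCount", "TotalDequeueCount"};

    public static void main(String[] args) throws Exception {
        String url = "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi";
        JMXConnector connector =
                JMXConnectorFactory.connect(new JMXServiceURL(url), null);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            ObjectName broker = new ObjectName(BROKER_MBEAN);
            FileWriter out = new FileWriter("broker.csv", true); // append mode
            try {
                StringBuilder row = new StringBuilder();
                row.append(System.currentTimeMillis());
                for (String attr : brokerAttributes) {
                    row.append(',').append(conn.getAttribute(broker, attr));
                }
                out.write(row.append('\n').toString());
            } finally {
                out.close();
            }
        } finally {
            connector.close();
        }
    }
}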
                          The script is very basic and was written to serve a specific purpose. There's very little to actually tie it in to ActiveMQ, so if you want to modify it to make a generic version, it shouldn't be too difficult.

                          The output will be a CSV file with attributes for the broker (add or remove MBean attribute names from the brokerAttributes array as needed) and a separate CSV file for each queue on the broker.

                          The important lines in the above are (modified below):

    String host = "localhost";
    String port = "1099";
    String url = "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi";

    JMXServiceURL serviceUrl = new JMXServiceURL(url);
    JMXConnector jmxConnector = JMXConnectorFactory.connect(serviceUrl, null);

    try {

       MBeanServerConnection mbeanConn =
          jmxConnector.getMBeanServerConnection();
       ObjectName mBean = new
          ObjectName("mbean name");
       String attr = "mbean attribute name";
       mbeanConn.getAttribute(mBean, attr);

    } catch (Exception e) {
       e.printStackTrace();
    } finally {
       jmxConnector.close();
    }

The above code connects to a JMX MBean server using the remote URL built at the top. An ObjectName is then created with the name of the MBean I'm interested in, followed by a String naming the attribute I want. I then call getAttribute with the MBean name and the attribute.

                          In a similar vein, I wrote a script using Groovy to monitor Tomcat for a different customer, when we were just interested in the number of HTTP connections.



                          I include both here because both are useful, but the difference between the Groovy and Java versions is stark. Both are doing essentially the same thing but, for a use case like this, Groovy lets you get straight to code which actually solves the problem, though perhaps Java 8 will go some way to improving the situation.

                          The important lines in Groovy to get an MBean attribute are:

    def port = 8090
    def host = '192.168.150.154'
    def fileName = 'HTTPMbean.txt'
    def connection = new JmxBuilder().client(port: port, host: host)

    connection.connect()

    // Get the MBeanServer.
    def mbeanConn = connection.MBeanServerConnection

    // Create GroovyMBean.
    def mbean = new GroovyMBean(mbeanConn, "mbean name")

    println "$mbean.attributeName"

In the above, the three main pieces are still there: the MBean connection, the MBean name and, finally, the attribute name. Groovy lets you access the attribute directly, so the name of the attribute here is "attributeName".

                          You can find names of new MBeans to monitor, and their attributes, with JConsole as shown in the screenshot below:


                          This is certainly not the sort of monitoring we would recommend long-term. These solutions were one-offs, designed to help in the solving of specific problems. If you ever find yourself in the position of needing to use scripts like these to get some insight into your system, ask yourself why you don't already have that insight and the pile of historical data to go with it!



Mike Croft

                          Configuring RBAC in JBoss EAP and Wildfly - Part Two

                          This is a follow up to part one of my blog regarding setting up Role Based Access Control (RBAC) in JBoss EAP 6 and Wildfly.

                          Part one can be found HERE.

                          In Part One of this blog we first looked at what RBAC is and why it's needed. We then set up a number of users and assigned those users to groups with each group having different permissions.

In this follow-up we will look at constraints, which allow more fine-grained permission setting; scoped roles, which allow you to set permissions on individual servers; and audit logging, which allows you to see who has accessed the management console and what changes they have made.

                          For the purposes of this blog I will be using the following:

                          OS - Ubuntu 14
                          Java - 1.7.0_67
                          JBoss - EAP 6.3

Although I'm using EAP, these instructions should work just the same on WildFly.

                          In order to do the practical examples you must have gone through the examples in Part One as the changes made here will follow on from that.

                          Constraints

Constraints are named sets of access-control configuration for a specified list of resources. RBAC uses a combination of constraints and role permissions to decide if a user is allowed to perform a specific action.

There are two types of constraint: Application constraints and Sensitivity constraints.

                          Application constraints

                          Application constraints define which resources, attributes and operations can be accessed by users with the Deployer role.

                          By default the only constraint that is enabled is core which gives the Deployer role the ability to manage deployments.

                          Application constraint configuration is in the Management API at:

                          /core-service=management/access=authorization/constraint=application-classification.
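You can explore the available classifications from the JBoss CLI; for example, to list what sits under the datasources type (a read-only check):

ls /core-service=management/access=authorization/constraint=application-classification/type=datasources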

                          Sensitivity constraints

Sensitivity constraints define who can access resources that are considered to be sensitive. Only users with the Administrator role or the SuperUser role have the ability to make changes to sensitive resources.

                          Sensitivity constraint configuration is in the Management API at:

                          /core-service=management/access=authorization/constraint=sensitivity-classification.
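These are toggled in the same way as application constraints. As an illustrative sketch (using the security-realm classification under type=core as one example), you could require explicit read permission with:

/core-service=management/access=authorization/constraint=sensitivity-classification/type=core/classification=security-realm:write-attribute(name=configured-requires-read, value=true)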

OK, so let's look at a practical example of setting up a new constraint.

                          In our original scenario we had a user Bob who was given the deployer role. Now we'd like Bob to be able to alter datasources as it's useful for him to be able to make changes when he's deployed a new app and there are issues.

                          Start up your JBoss server from Part One.

                          Log in to the management console as the user Bob.

                          In the management console, go to Configuration - Connector - Datasources

At the moment, Bob in his current Deployer role can only read information regarding datasources; he can't edit the current settings or add or remove datasources.

                          Now, we could change Bob's role to give him more control but we only really want him to be able to alter datasources, not give him access to everything. This is where we can instead alter the constraints.

First of all, in the JBoss CLI, cd to the datasources constraint:

                          cd /core-service=management/access=authorization/constraint=application-classification/type=datasources/

                          Now, we need to set the configured-application attribute to true.

                          NOTE - In order to alter the datasource configuration you need to set both the data-source and xa-data-source attributes. Setting just one will not work.

                          ./classification=data-source:write-attribute(name=configured-application,value=true)
                          ./classification=xa-data-source:write-attribute(name=configured-application,value=true) 
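To confirm the change took effect, you can read the attribute straight back in the same CLI session:

./classification=data-source:read-attribute(name=configured-application)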

                          Now if you go back to the management console you should find that Bob can now edit datasources but does not have access to other resources that an Administrator or SuperUser has.

So, we have seen how we can set up users in groups, give particular users the ability to make changes based on their role, and extend what that role can do based on individual constraints. But what if you have hundreds of servers, want even more fine-grained control, and want to allow users to make changes only to particular servers?

                          Scoped Roles

                          Scoped Roles are roles that you define yourself that allow you to grant the same permissions as the standard roles but apply that to a set of servers rather than all servers. This is done by specifying which server groups or individual hosts that role can make changes to. Once you have defined a scoped role it can be applied to users (or groups of users) the same as any other role.

                          NOTE - Only users in the SuperUser or Administrator roles can perform this configuration.

Now, the situation we have in our test scenario is that we have a new user, Elvis, whom we want to be an administrator, but only for a certain set of servers.

First of all, let's create a new server and server group, and add the server to the group.

                          In the admin console, go to Domain - Server - Server Groups.

                          Add a new server group as follows:

                          Name - Test-Group
                          Profile - full
                          Socket Binding - full-sockets

                          Next, go to Domain - Server - Server Configurations.

                          Add a new server config as follows:

                          Name - Test-Server
                          Server Group - Test-Group
                          Port Offset - 100
                          Auto Start - True

                          Restart the server.

                          Now, we want to create a new user and give him access only to the configuration of the servers in Test-Group.

                          Firstly, we need to create our new scoped role:

                          /core-service=management/access=authorization/server-group-scoped-role=Test-Role:add(base-role=administrator, server-groups=[Test-Group])
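As an aside, if you need to scope a role to particular hosts rather than server groups, there is an analogous host-scoped-role resource. A sketch, assuming a host named master:

/core-service=management/access=authorization/host-scoped-role=Test-Host-Role:add(base-role=administrator, hosts=[master])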

                          Next we need to add our new user. For this use the same add-user.sh script we used in Part One to create the user Elvis.

                          Now we need to map our user to our new scoped role:

                          /core-service=management/access=authorization/role-mapping=Test-Role:add
                          /core-service=management/access=authorization/role-mapping=Test-Role/include=test-admin:add(name=Elvis, type=user)
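You can verify the mapping with a recursive read of the role-mapping resource:

/core-service=management/access=authorization/role-mapping=Test-Role:read-resource(recursive=true)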

                          Now, if you log in to the admin console as the user Elvis you will see you only have access to the configuration of the Test-Group servers (in this case just the one server Test-Server). Hopefully from this you will see how easy it is to set up servers into groups and then to assign users roles that allow them to configure just the servers in certain groups.

                          Audit Logging

Finally, we will set up audit logging to capture administrative actions. Although not strictly a part of RBAC, it is very useful to be able to see what changes have been made to a system, when those changes were made, and by whom, if only so you know who to blame when something goes wrong.  ;)

                          Audit logging is switched off by default so the first thing we will do is to switch it on.

                          Use the following command to switch on audit logging:

                          /host=master/core-service=management/access=audit/logger=audit-log:write-attribute(name=enabled,value=true)

Now, if you look in JBOSS_HOME/domain/data, you should see a file called audit-log.log. In it you will see one record detailing the change we just made.

                          Log records can be output to a file, forwarded to a Syslog server or both. By default they are output to file.

                          This will only switch on audit logging for the domain controller. If you wish to switch on audit logging for all servers run the following:

                          /host=master/core-service=management/access=audit/server-logger=audit-log:write-attribute(name=enabled,value=true)

                          These log files can be found in JBOSS_HOME/domain/servers/<server-name>/data

By default all log entries are stored in JSON format. Before being output, all log entries are passed through a formatter and a handler: the formatter specifies what the log entries look like, while the handler outputs the record to file or to Syslog.
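If you want to see how the formatter and handlers are configured on your installation, you can list the audit resources from the CLI, for example:

ls /host=master/core-service=management/access=audit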

                          Well, that's all for now. Hopefully this has been a useful introduction to Role Based Access Control for JBoss. It's a great new addition and something you should definitely consider when setting up JBoss servers.



