Channel: C2B2 Blog

JavaOne 2013 - Part 2

Content is king, and when it comes to session content JavaOne is the greatest conference I've ever attended. I have no doubt about it. I aimed to visit as many sessions as possible, because one doesn't get the chance to meet JVM engineers, industry legends, or just the "smart pals from a mailing list" every day. In this post I'd like to highlight the sessions that impressed me the most. That doesn't necessarily mean they were the best sessions I attended.




Monday

How Not to Measure Latency - Gil Tene

Gil is a very bright guy and an experienced speaker; I'd probably enjoy listening to him speak on any topic, and this presentation on measuring latency is his masterpiece. He demonstrated mistakes people commonly make when measuring latency, such as using average values and standard deviations to characterise a system (silently assuming a normal distribution of values) or assuming occasional glitches have little impact.

The real eye-opener was the part where he described a problem he calls "coordinated omission". In a nutshell, it's a situation affecting a lot of "performance tests": the system measuring performance unintentionally coordinates with the system being measured, producing distorted results.

Let's say we have an application with two modes of operation: it replies within 1 ms for a period of five minutes (the good mode), but then takes a full minute to process a single request (the bad mode), then the good mode follows again, then the bad mode, and so on. Now, if we try to measure the latency, we'll start our favourite tool and write a test hitting the system. How will the test be executed? It depends on the tool, but take, say, Apache JMeter or HP LoadRunner and let them run for 10 minutes. They will record something like this:
1. During the good mode the system was responding just fine, so we have a lot of samples with latency up to 1 ms
2. After 5 minutes the bad mode kicked in -> the measuring thread waited 60 s for a reply -> we have a single(!) sample with a latency of 1 minute
3. In the last 4 minutes of the test the response time was 1 ms again

--> Now we have a lot of requests with very low latency (1 ms) and only a single(!) request with high latency. Depending on our test configuration we might end up with a rather good average response time. If we compute percentiles, we will see that the vast majority (in fact all but one!) of requests have a response time of 1 ms, so we might say something like "99.99% of requests were processed within 1 ms". Not a bad result at all. Unfortunately it completely hides the fact that our system was not responding at all for a full minute. That's 10% of the total test time! According to Gil, most of the tools used to measure performance are affected by this issue, as they are designed as synchronous, where one thread represents a single user.
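Gil's example can be reproduced in a few lines of plain Java. The sketch below (my own illustration with hypothetical names, not any real tool's code) simulates what a synchronous single-threaded load generator would record over the 10 minutes:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Simulates the 10-minute test from the example above: a synchronous
// load generator issues one request at a time, so a 60 s stall produces
// exactly one bad sample instead of the ~60,000 requests that *should*
// have been issued (one per millisecond) during the stall.
public class CoordinatedOmission {

    // Latencies (in ms) as a naive single-threaded tool records them.
    static List<Long> naiveSamples() {
        List<Long> samples = new ArrayList<>();
        for (int i = 0; i < 5 * 60 * 1000; i++) samples.add(1L);  // 5 min good mode
        samples.add(60_000L);                                     // one 60 s stall
        for (int i = 0; i < 4 * 60 * 1000; i++) samples.add(1L);  // 4 min good mode
        return samples;
    }

    static long percentile(List<Long> samples, double p) {
        List<Long> sorted = new ArrayList<>(samples);
        Collections.sort(sorted);
        int idx = (int) Math.ceil(p / 100.0 * sorted.size()) - 1;
        return sorted.get(Math.max(idx, 0));
    }

    public static void main(String[] args) {
        List<Long> samples = naiveSamples();
        // All but one of ~540,001 samples are 1 ms, so even the 99.99th
        // percentile looks perfect - the 1-minute outage is invisible.
        System.out.println("99.99th percentile: " + percentile(samples, 99.99) + " ms");
    }
}
```

Run it and the 99.99th percentile comes out as 1 ms, even though the system was unresponsive for 10% of the test.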

Then he presented some open source tools he wrote to help measure latency, such as HdrHistogram and jHiccup. Gil has his agenda: he uses the talk to show how the JVM he works on (Zing) is superior to HotSpot when it comes to latency, but he is very open about it and I have no problem with that. He has presented this talk at many events and I really recommend watching it! https://www.google.co.uk/search?q=How+Not+to+Measure+Latency
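For the record, HdrHistogram tackles coordinated omission with a recordValueWithExpectedInterval method that back-fills the samples a stalled load generator failed to issue. Below is a minimal plain-Java sketch of that idea (my own illustration, not the library's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;

// A plain-Java sketch of the correction idea behind HdrHistogram's
// recordValueWithExpectedInterval(): if a recorded latency exceeds the
// expected interval between requests, the requests that *should* have
// been issued during the stall are back-filled with their estimated
// latencies (value - interval, value - 2*interval, ...).
public class OmissionCorrection {

    static List<Long> recordCorrected(List<Long> corrected, long valueMs, long expectedIntervalMs) {
        corrected.add(valueMs);
        // Back-fill the samples the stalled load generator failed to issue.
        for (long missed = valueMs - expectedIntervalMs; missed >= expectedIntervalMs;
                missed -= expectedIntervalMs) {
            corrected.add(missed);
        }
        return corrected;
    }

    public static void main(String[] args) {
        List<Long> corrected = new ArrayList<>();
        recordCorrected(corrected, 1L, 1L);       // a normal 1 ms sample: recorded as-is
        recordCorrected(corrected, 60_000L, 1L);  // the 60 s stall: expands into 60,000 samples
        System.out.println("samples after correction: " + corrected.size());
    }
}
```

With the stall expanded back into the ~60,000 samples it swallowed, the percentiles finally reflect the one-minute outage.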


HotSpot Internals: How to Explore and Debug the OpenJDK VM at the OS Level

For me this was one of the most hard-core sessions of this JavaOne. Goetz Lindenmaier and Volker Simonis (both from SAP) demonstrated some really low-level stuff, such as debugging HotSpot inside gdb and placing a breakpoint on a particular part of the template interpreter. I'll write a separate blog post about how to do it (if I succeed in doing it!)


Tuesday


Wholly Graal: Accelerating GPU Offload for Java

I have to admit I don't follow the attempts to offload certain operations to the GPU as closely as I would like to. I attended this session mostly to fill this gap in my knowledge. I'm well aware that modern GPUs have enormous power for certain types of operations, especially when the operation can be parallelised. Given the latest improvements in Java 8 towards easier functional(-like) programming, offloading to the GPU sounds like a very attractive idea. Unfortunately my impression from this presentation is that there is still a long way to go before an average developer will be able to utilise the GPU from Java. It's still more of a research project, albeit a very interesting one. Some very basic questions are yet to be answered, such as the impact on garbage collection algorithms.
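To illustrate the kind of workload that makes the idea attractive: a side-effect-free, per-element computation like the one below is trivially parallelisable. Today parallel() fans it out over CPU cores via the fork/join pool; the research presented explores routing exactly this shape of code to the GPU instead (the snippet is my own illustration and runs on the CPU):

```java
import java.util.stream.IntStream;

// A data-parallel, side-effect-free computation - the shape of workload
// GPU offload targets. Each element is processed independently, so the
// runtime is free to execute the map step on any number of cores (or,
// in principle, GPU lanes).
public class ParallelSum {

    static long sumOfSquares(int n) {
        return IntStream.rangeClosed(1, n)
                        .parallel()              // independent per-element work
                        .mapToLong(i -> (long) i * i)
                        .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1000));
    }
}
```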


What and How Java Troubleshooters Think: Eight Years of Troubleshooting Java

I had mixed feelings about this session. Content-wise it was not that great: it covered mostly the basics of troubleshooting (trust nobody, be careful when interpreting data, etc.). Still, it was fun to watch, as the presenters, Hideyuki Katsomoto and Shin Tanimoto, had prepared the presentation really well. They speak rather limited English with a strong Japanese accent, but it was still fun to watch them. I consider this great motivation for me, as it's proof that even a speaker with imperfect English can be entertaining!

Wednesday

Where Next with OpenJDK Community Build and Test?

This was meant to be a discussion panel with Ben Evans, Martijn Verburg and Richard Warburton from jClarity, Steve Poole (IBM) and Stuart Marks (Oracle). Unfortunately most of the guys didn't arrive due to other commitments, so Steve Poole had to start on his own. It was still a pretty good session, mostly thanks to Steve, who really sounds like a great guy with some strong opinions. It was great to see that people outside Oracle are actually helping with the OpenJDK project. The London Java Community is probably the most active JUG in this area and I feel really proud to be part of it. I'll definitely attend one of the hacking sessions organised by the LJC.

OpenJDK is a great project, and one of its biggest weaknesses is the lack of publicly available regression test suites. Oracle and other companies have their own suites, but they have not been open-sourced for various reasons.



Bck2Brws: The Java You Don't Have to (Un)install

This is the talk I would normally not even consider attending. I don't really care about browsers, I have no problem admitting I'm a lousy web developer, I'm terrible at JavaScript, and I've never written a single line of CoffeeScript or any other JavaScript preprocessor/(trans-)compiler. I made an exception for a single reason only: the speaker, Jaroslav Tulach, the NetBeans architect, the author of one of the best books on design I've ever read (Practical API Design: Confessions of a Java Framework Architect) and a fellow Czech. My colleagues from marketing might appreciate his guerrilla-style promotion of his talk.


Bck2Brws is a pet project of Jaroslav's. It's a transcompiler from Java to JavaScript, allowing him to write Java and run it in a browser. That doesn't sound very exciting or new, as other projects such as GWT have been doing something similar for a while. What Bck2Brws does is slightly different: while GWT compiles Java to JavaScript as part of the build process, Bck2Brws actually runs JVM bytecode inside the browser, with no plugin whatsoever. Everything is interpreted as pure JavaScript! The other significant difference is that Jaroslav is using (a subset of) the JDK classes taken directly from the OpenJDK project, so it's actually the same code a normal HotSpot runs! Of course it's somewhat slow (mainly the initial bootstrap) and it's really not production-ready, but it's a true hacker project in the best sense of the word!

Permanent Generation Removal Overview

This was an interesting in-depth talk delivered by Coleen Phillimore (Oracle) covering the PermGen removal in HotSpot for JDK 8. The removal was motivated mostly by the attempt to simplify HotSpot's source code, as the notion of PermGen was adding unnecessary complexity. Part of the data originally placed in PermGen now lives on the regular heap; the rest goes into a new area of native memory called Metaspace. I can understand the motivation to remove it (the complexity of HotSpot), but I don't share the euphoria of some fellow developers. Sure, OutOfMemoryError: PermGen space is not nice, but it's usually just a symptom of a deeper problem. The new Metaspace will either be unconstrained (and a leaking application will eat all available memory) or the space will be constrained and we will have exactly the same problem as with PermGen. I won't repeat all the details, as there is a nice summary on JavaLobby: http://java.dzone.com/articles/java-8-permgen-metaspace
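For the curious, the JVM's standard management API shows where the class metadata now lives. The snippet below simply lists the memory pools and their usage; on a JDK 8+ HotSpot one of them is named "Metaspace" (pool names are VM-specific, so treat that as an assumption):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Lists the JVM's memory pools and their current usage. On a JDK 8+
// HotSpot this includes a native-memory pool named "Metaspace" where
// class metadata (formerly PermGen) now lives; pool names are
// VM-specific, so don't rely on them across implementations.
public class MetaspacePeek {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-25s used: %,d bytes%n",
                    pool.getName(), pool.getUsage().getUsed());
        }
    }
}
```

The new space is unlimited by default but can be capped with -XX:MaxMetaspaceSize, which is exactly the constrained/unconstrained trade-off described above.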



Thursday


Save Scarce Resources by Managing Terabytes of Objects Off-Heap or Even on Disk

This was a session where Harvey Raja (Oracle) and Chris Neal (Pegasus) presented how they use Coherence to cache data. Pegasus is a reservation system used by many booking agents, such as booking.com. In order to provide availability information they have to query a lot of third-party services, and they use Oracle Coherence to cache the responses. They described how they use overflow to disk to prevent evicting data before it expires: a 64 GB heap and a 1.6 TB disk journal per JVM. They had to write some custom code for Coherence to cope with such a large data set:

  • The eviction process was reading each value back from disk before evicting it, for the benefit of indexers and listeners. This was killing throughput, so they implemented BlindCompactSerializationCache to prevent it.
  • When a JVM was stopped, Coherence attempted to re-partition the data. Given the size of 1.6 TB per node, this had a significant impact. They wrote DropContentPartitionListener to prevent the re-partitioning; it's simply better to call the third party again when (and if) needed than to re-partition such a huge amount of data.
  • When a new JVM process was started, it had a similar effect as in the previous point.


Java EE and Beyond

This was the very last session I attended: a discussion panel with Cameron Purdy from Oracle, Emmanuel Bernard from Red Hat, Brian Martin (IBM), David Blevins (Tomitribe, TomEE), Scott Yara (Pivotal!) and Antonio Goncalves (individual). The discussion was quite interesting. It seems there is a demand to make the Java EE platform more DevOps-friendly, and Antonio mentioned the unfortunate situation in the logging space (I'm not sure whether this really belongs to EE, as it affects the SE platform as well). There seems to be a general consensus that EJB will be broken apart and replaced by CDI in the long term. Java EE 7 is a step in this direction with JTA 1.2, which allows declarative transactions for CDI beans.

It was interesting to see a guy from Pivotal (the company behind Spring!) attend a session on the future of Java EE. Does it mean the times are changing and we may see Spring become a Java EE container? Well, Antonio mentioned that one of the biggest obstacles is Spring not supporting CDI, and I have to agree with him. I'm convinced the Spring Framework WILL eventually implement CDI; the only question is when.

I was also surprised to see David Blevins (TomEE) sitting at one table with the "big guys" from Oracle, IBM and Red Hat!

