What is a Widget?

I’ve had widgets on my mind lately.

The following definition is from Wikipedia:

A web widget is a portable chunk of code that can be installed and executed within any separate HTML-based web page by an end user without requiring additional compilation. They are derived from the idea of code reuse.
Other terms used to describe web widgets include: gadget, badge,
module, capsule, snippet, mini and flake. Web widgets often but not
always use DHTML, JavaScript, or Adobe Flash.

Widgets are a big part of what we do here. We have our widget gallery where people can grab and customize widgets that display everything from their own Amazon wishlist to New York Times bestsellers, to the featured stocks from Wallstrip. People can also use our BlueOrganizer Firefox add-on to create custom widgets containing whatever books, music, movies, and more that they’d like.

What this leads me to think is that whether we call these things “widgets” or something else is not important. It does not matter what we call them; it matters what they do and how they do it. If people can easily showcase what they are interested in and find important, then you can call it whatever you want. Or, to paraphrase Shakespeare:

“What’s in a name? That which we call a widget, by any other name would still be installed and executed within any separate HTML-based web page and express the interests of that page’s author.”

Software Engineering Tips for Startups


Software is at the very heart of any modern startup.

The business ideas, new utilities for society, and the next big thing all boil down to code. If the code is good, the startup has a chance. If the code is bad, then no matter how brilliant the business people are, the startup is not going to get far.

1. Must have code

The working code proves that the system is possible; it also proves that the team can build it. The working code is the launchpad for the business. After it is ready, the business can happen.

In the old days, tech companies were funded based on an idea written on a piece of paper. Those days are long gone. Today a startup needs not only working code, but an assembled system and active users. Software engineering has transitioned from a post-funding exercise to the means of getting funded.

Software now needs to be built faster and more correctly. It needs to constantly change to address the changing nature of the market and meet customer demands. Fundamentally, software engineering in startups is now a different game.

The working system is what gets you in.

2. Must have a technical co-founder

Any startup starts with an idea and just a few people. A lot of startup co-founders these days are techies, passionate about technology and life. It was not always like that. Just a few years back, a purely technical founding team would have had a hard time fundraising, because there was a school of thought that you needed an MBA to run the company.

In fact, the reverse was often true. A few business people would get together, come up with an idea for a product, and then think: where can we get a techie to get this done?

It is a misguided notion that business and technology are somehow separate, with the first as king and the second as marginal. They are not separate, because technology is what makes the business possible to begin with.

So the first tip is to always have a strong technical co-founder. Someone who shares or invents the business along with the others, but also keeps technical feet on the ground. Someone who can make sure the business is mapped onto technology correctly.

3. Hire A+ engineers who love coding

The software industry survived close to 30 years of crisis. Until recently, building a large-scale system that worked was black magic. Most software projects suffered for years, had large engineering teams, and reached little consensus on what needed to be done and how to accomplish it. The resulting systems were buggy, unstable, and hard to maintain and extend.

The problem was that too many people who were not that good were working on it.

Startups cannot afford anything less than A+ engineers. In a larger company there is an opportunity to mentor and grow people; in a startup, every hour is precious. Not much time can be spent teaching, so you need people who know what they are doing.

Qualifications for A+ engineers are:

  • Focused on results
  • Loves coding and fluent at it
  • Writes elegant quick code
  • Smart and quick
  • Loves refactoring
  • Values testing
  • Solid in Computer Science

4. Keep the engineering team small and do not outsource

A team of 2-3 rock star engineers can build pretty much any system, because they are good, love building software, focus on the goal, and don’t get in each other’s way. A team of 20 so-so engineers will not get far.

The Mythical Man-Month debunked the notion of scaling a project by adding more programmers. The truth is that most successful software today is built by just a handful of good engineers. “Less is more” applies equally to code and to the number of people working on it.

Once you embrace the idea of just a few rock star people building the system, outsourcing development becomes a really bad idea. Tech is your bread and butter - why would you outsource it? Not many things are more important than your code. Trusting people you have never met to build the very foundation of your company does not make sense.

Again, it is a myth that you can scale faster with more programmers, and an even bigger myth that outsiders can get your work done for you. This is not the place to save money. Hire a few of the best guns you can find, pay them well, give them stock options, make them happy, and get them jazzed up about the company.

5. Ask tough questions during the interview

There is nothing worse than being soft during the interview and getting the wrong person into the company. This is bad for you but, more importantly, bad for the person. In the end you will part ways, so it is best not to make this mistake to begin with. Be tough and ask a lot of technical questions during the interview. What to ask depends on what you are looking for, but here are the basics:

  • Ask standard computer science questions: data structures and algorithms. (If the person does not know what a hashtable is, how it works, or how to write one - that’s a big red flag.)
  • Get a feel for knowledge of the language: It does not matter what language they claim fluency in - confirm it by asking specific questions
  • Senior people need to know threads, queuing, distributed systems, databases
  • Senior people need to know design patterns
  • Senior people need to know unit testing inside out
  • Most importantly, the candidates need to demonstrate love for simple and elegant code

Always ask for code samples - a lot can be revealed. Give written, timed tests, even if it’s over the web. And always check references before making an offer.
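To make the hashtable question concrete, here is the kind of whiteboard sketch a strong candidate should be able to produce: a tiny separate-chaining hash table. (Illustrative only; a real implementation would resize its bucket array and use generics throughout.)

```java
import java.util.ArrayList;
import java.util.List;

// A toy hash table: an array of buckets, each bucket a list of entries.
public class ToyHashTable {
    private static class Entry {
        final String key;
        int value;
        Entry(String key, int value) { this.key = key; this.value = value; }
    }

    private final List<Entry>[] buckets;

    @SuppressWarnings("unchecked")
    public ToyHashTable(int capacity) {
        buckets = new List[capacity];
    }

    // Mask off the sign bit so the index is never negative.
    private int slot(String key) {
        return (key.hashCode() & 0x7fffffff) % buckets.length;
    }

    public void put(String key, int value) {
        int i = slot(key);
        if (buckets[i] == null) buckets[i] = new ArrayList<Entry>();
        for (Entry e : buckets[i]) {
            if (e.key.equals(key)) { e.value = value; return; } // overwrite
        }
        buckets[i].add(new Entry(key, value));
    }

    public Integer get(String key) {
        int i = slot(key);
        if (buckets[i] == null) return null;
        for (Entry e : buckets[i]) {
            if (e.key.equals(key)) return e.value;
        }
        return null;
    }
}
```

A candidate who can write something like this and then discuss collisions, load factor, and resizing is in good shape.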

6. Avoid hiring managers

You do not need this type of person on a small team. If everyone is sharp, knows what they are doing, and executes on tasks, why do you need a manager? People who try to overlay complex processes on top of your objectives are going to slow you down and frustrate you.

If, during the interview, someone who has been a manager says “I miss coding and want to code again”, beware that they might soon want to go back to management. The point being: the best startup engineers are people who are young and hungry to write code. More experienced people who are looking to do more managing than coding will not be as passionate. And this is bad, because startups need passion and drive to build the impossible.

What you need are experienced technical people who love coding. These are going to be natural mentors for your younger engineers. Mentors and not managers.

7. Instill an agile culture


Modern startups need to move very quickly. There is no room to plan for 6 months and then execute, because someone else will get there first. The new approach is to evolve the system. Of course you still plan the release, but you iterate quickly, do frequent builds, and constantly make changes.

Coding becomes sculpting. Starting with a shapeless form, you continuously refine the code to satisfy the business requirements and make sure the system is designed and implemented correctly. This is the agile culture, which values:

  • Clean and elegant code
  • Continuous refactoring
  • Focus on defect-free software
  • Code ownership and pride
  • Team work and little ego
  • Most importantly: use common sense

8. Do not re-invent the wheel

A lot of startups go overboard with infrastructure. This comes in two flavors: rebuilding libraries and building your own world-class scaling. On the first point - there are so many fantastic open source libraries out there that it just does not make sense to write them in house. Whether you are using JavaScript, PHP, .NET, Python, or Ruby, there are likely major libraries that can help you. Rewriting existing libraries is a waste of your time, and you are not likely to do it better.

Building a large-scale system is a different matter entirely. First you need to get to scale, and only then worry about it. The guys from 37signals have written about this on many occasions, including in their book Getting Real. Why worry about having millions of users when you do not have any at all right now? Spending time making sure that you will scale big is a waste of your time. Focus on what your product does best instead.

And to that effect, we have been using Amazon Web Services and are now supporting > 1M BlueOrganizer downloads. The Simple Storage Service (S3) allowed us to build a truly distributed and scalable system. We have not started using EC2, Amazon’s compute cloud service, but are planning to re-evaluate it soon.

The point is that there are tools, solutions and services out there that can help you get to scale. It is better to use them than to spend huge amounts of time and energy and money on building these systems in house.

Wrap up

Software is critical to any modern business. The key to success for any startup is to have a rock star technical team that can quickly turn the vision into a piece of software, then evolve and iterate it until it turns into a real business.

15 Tools to Help You Develop Faster Web Pages

1. YSlow for Firebug


YSlow grades a website’s performance based on the best practices for high performance web sites on the Yahoo! Developer Network. Each rule is given a letter grade (A through F) stating how you rank on certain aspects of front-end performance. It’s a simple tool for finding things you can work on, such as reducing the number of HTTP requests a web page makes and compressing external JavaScript and CSS files. A worthwhile read is the Ajax performance analysis post on IBM developerWorks that outlines practical ways of using YSlow in your web applications.

2. Firebug


Firebug is an essential browser-based web development tool for debugging, testing, and analyzing web pages. It has a powerful set of utilities to help you understand and dissect what’s going on. One of the many notable features is the Net (network) tab, where you can inspect HTML, CSS, XHR, and JS components.

3. Fiddler 2


Fiddler 2 is an HTTP debugging proxy that helps you analyze incoming and outgoing traffic. It’s highly customizable and has countless reporting and debugging features. Be sure to read the “Fiddler PowerToy - Part 2: HTTP Performance” guide on the MSDN, which discusses functional uses of Fiddler, including how to improve “first-visit” performance (i.e., an unprimed cache), analyze HTTP response headers, create custom flags for potential performance problems, and more.

4. Cuzillion


Cuzillion is a cool tool for seeing how page components interact with each other. The goal is to help you quickly check, test, and modify web pages before you finalize the structure. It can give you clues about potential trouble spots or points of improvement. Cuzillion was created by Steve Souders, former Chief Performance Yahoo!, a leading engineer behind Yahoo’s performance best practices and the creator of YSlow.

5. mon.itor.us


mon.itor.us is a free web-based service that gives you a suite of tools for monitoring performance, availability, and traffic statistics. You can track your website’s response time and set up alerts for when a service becomes unavailable. You can also set up weekly automated benchmarks to see whether changes you’ve made impact speed and performance positively or negatively.

6. IBM Page Detailer


The IBM Page Detailer is a straightforward tool for letting you visualize web components as they’re being downloaded. It latches onto your browser, so all you have to do is navigate to the desired site with the IBM Page Detailer open. Clicking on a web page component opens a window with the relevant details associated with it. Whenever an event occurs (such as a script being executed), the tool opens a window with information about the processes.

7. Httperf

Httperf is an open-source tool for measuring HTTP server performance running on Linux. It’s an effective tool for benchmarking and creating workload simulations to see if you can handle high-level traffic and still maintain stability. You can also use it to figure out the maximum capacity of your server, gradually increasing the number of requests you make to test its threshold.

8. Pylot


Pylot is an open-source performance and scalability testing tool. It runs HTTP load tests so that you can plan, benchmark, analyze, and tweak performance. Pylot requires that you have Python installed - but you don’t need to know the language; you use XML to create your testing scenarios.

9. PushToTest TestMaker


PushToTest TestMaker is a free, open-source platform for testing scalability and performance of applications. It has an intuitive graphical user interface with visual reporting and analytical tools. It has a Resource Monitor feature to help you see CPU, memory, and network utilization during testing. The reporting features let you generate graphs or export data into a spreadsheet application for record-keeping or further statistics analysis.

10. Wbox HTTP testing tool


Wbox is a simple, free HTTP testing tool released under the GPL (v2). It supports Linux, Windows, and Mac OS X. It works by making sequential requests at desired intervals for stress testing. It has an HTTP compression command so that you can analyze data about your server’s file compression. If you’ve just set up a virtual domain, Wbox also comes with a command to test that everything’s in order before deployment.

11. WebLOAD


WebLOAD is an open-source, professional-grade stress/load testing suite for web applications. It lets testers write load-testing scripts in JavaScript. It can gather live data for monitoring, recording, and analysis, using client-side data to analyze performance. It’s not just a performance tool - it comes with authoring and debugging features built in.

12. DBMonster


DBMonster is an open-source application to help you tune database structures and table indexes, as well as conduct tests to determine performance under high database load. It’ll help you see how well your database(s) will scale by automatically generating test data. It supports many databases, such as MySQL, PostgreSQL, Oracle, MSSQL, and (probably) any database with a JDBC driver.

13. OctaGate SiteTimer


The OctaGate SiteTimer is a simple utility for determining the time it takes to download everything on a web page. It gives you a visualization of the duration of each state during the download process (initial request, connection, start of download, and end of download).

14. Web Page Analyzer


The Web Page Analyzer is an extremely simple web-based test to help you gather information on web page performance. It gives you data about the total number of HTTP requests, the total page weight, your objects’ sizes, and more. It tries to estimate the download time of your web page on different internet connections, and it enumerates each page object for you. At the end, it provides an analysis and recommendations for the web page tested – use your own judgment in interpreting the information.
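The download-time estimate such analyzers produce is, at its core, simple arithmetic: page weight in bits divided by the connection’s speed in bits per second. A rough sketch of the idea (the numbers are illustrative, not the tool’s actual formula, which may also account for latency and parallel connections):

```java
// Estimate download time: page weight in bits / connection speed in bps.
int pageWeightBytes = 350 * 1024;   // a 350 KB page
int lineSpeedBps = 1544000;         // a T1 line, roughly 1.544 Mbit/s
double seconds = (pageWeightBytes * 8) / (double) lineSpeedBps;
// roughly 1.86 seconds for this page over a T1
```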

15. Site-Perf.com


Site-Perf.com is a free web-based service that gives you information about your site’s loading speed. With Site-Perf.com’s tool, you get real-time capturing of data. It can help you spot bottlenecks, find page errors, gather server data, and more - all without having to install an application or register for an account.

If you have a favorite web performance tool that wasn’t on the list, share it in the comments. We’d also like to hear your experiences, tips, suggestions, and the resources you use.

Reference: Six Revisions

What is REST?

Representational State Transfer (REST) is a software architectural style for distributed hypermedia systems like the World Wide Web. The best way to explain it is with an example. A REST application might define the following resources:

  • http://example.com/users/
  • http://example.com/users/{user} (one for each user)
  • http://example.com/findUserForm
  • http://example.com/locations/
  • http://example.com/locations/{location} (one for each location)
  • http://example.com/findLocationForm

This is a very, very short explanation of only a small part of what REST does. More information, as always, can be found on Wikipedia.

When you use custom URLs, you effectively hide some of your internal structure behind more meaningful URLs. This means you can refactor more easily without breaking external links or bookmarks to a specific part of your site. This is also important for search engine optimization.
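The "{user}" and "{location}" parts above are placeholders: each concrete resource gets its own address by filling them in. A trivial sketch of the idea (a plain string substitution, not part of any particular REST framework):

```java
// Expand a REST-style URL template for one concrete resource.
String template = "http://example.com/users/{user}";
String aliceUrl = template.replace("{user}", "alice");
// aliceUrl is now "http://example.com/users/alice" -- a stable,
// meaningful address for that one user.
```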

Resources:

- Create RESTful URLs with Wicket
- Describe REST Web services with WSDL 2.0
- RESTful SOA using XML
- How I Explained REST to My Wife

CMMi, what is the real value-added?


In Vietnam, to date, approximately six companies, mostly foreign-owned, have achieved CMM/CMMi appraisals at maturity level 3 or above, including:
- FCGV (PSV): CMMi v1.1, maturity level 5
- FPT Software: CMMi v1.1, maturity level 5
- Global Cybersoft Vietnam (GCS): CMMi v1.1, maturity level 4
- ELCA: CMMi v1.1, maturity level 3
- TMA Solutions: CMMi v1.1, maturity level 3, currently in the last stage of appraisal for level 4.
- Harvey Nash (Silkroad): CMM, maturity level 3

In addition, about 15 organisations are implementing the model at maturity level 3 (if not level 4, such as Lac Viet Corp.) at this point, and plan to be officially appraised in the first half of 2008.

However, it is easy to observe that most organisations do not really emphasize the value that CMMi and other standards/best practices, as tools, could bring to business development objectives. Of the companies above, only a few have actually invested enough budget and effort to practically improve their processes so that product quality and productivity actually increase. Usually the target is just to get the certificate first, as a marketing/advertising strategy, which can backfire when the outcomes do not actually meet customers’ quality expectations in the end.

What I want to say is that what the software outsourcing companies are doing is just not right. Competing against rivals (in India, China, the Philippines, etc.) without any outstanding competencies, while acquiring customers in a very passive way that is unpredictable, uncontrollable, and quite risky, and trying to get the certificates at any cost, only puts us in a riskier position. Why don’t we find a way to genuinely improve our processes first? Why couldn’t we have had a good vision from the very early stage to identify our strengths over them? Why must we make quick fixes when we all know quick fixes always cause side effects?

Maybe you are the only one who can find the answer…

Resources: Dac Tin’s blog

Use your time wisely

Most people complain that they never have enough time to act on their goals. All they can think about after work is watching TV, relaxing, and sleeping, because they feel so tired. I was the same. I had never considered the importance of time. I would just laze around, watch TV, lie in bed, hang out with friends, chat, surf the web, and play computer games (or online games) whenever I had free time. Then one day, I thought about the things I had achieved. To my surprise, I hadn’t accomplished much, and I was moving too slowly.

Undoubtedly, time is the most valuable asset one can possess. Once you lose money, you can always earn it back later. It will never be the same with your time: once lost, it can never be reclaimed. Unfortunately, you can barely manage time, save time, or make time. You can only use your time wisely.

There is plenty of advice on using time effectively already. In this blog entry, I would like to summarize a few tips and add some that really work for me.

1. Choose your ultimate goals.
You may be wondering why goals come first - because it helps. Trust me, you will hardly work any faster or harder than you usually do, regardless of how hard you try. However, with an ultimate goal defined, you can optimize your approach and eliminate redundant activities, or those that bring no value to you. It’s never a waste to put some time into goal setting and planning.

2. Be organized.
Try to break down the activities needed to achieve your goal, with PRIORITIES. Review your plans every day or week to optimize the schedule. Put everything under control. I suggest scheduling each day’s tasks while riding to work, and reviewing what has been done on the way back home.

3. Learn to say ‘NO’. Cut your losses.
Chatting, surfing the internet, watching TV, and playing games are the topmost time-wasting activities. Believe me! They will not bring as much value as you might have thought.

It is a good idea to force yourself to chat, check personal e-mail, and surf the web for non-work purposes only at lunch time, for no longer than 45 minutes per day. Spending longer hours chatting and sending personal e-mails will not actually give you any more information. Also, don’t read every single piece of news; focus on what is really useful for your goals, your job, and your life. Try RSS feeds - they will help too.

Watching TV is another giant time-killer. Don’t waste your time on movies, as you cannot learn much compared with the time they take. Using that time to read self-improvement books is much better. In addition, Cinemax, HBO, StarMovies, etc. don’t really help you improve your English, because it’s hard NOT to focus on the Vietnamese subtitles; try Bloomberg, the Discovery Channel, and National Geographic instead. NEVER let the contemporary damned low-quality Vietnamese movies bother you, simply because they aren’t worth your time.

Don’t play computer games or any other games that bring no benefit. Play the real-life game instead; you will find it an even more interesting game.

4. Repeat the above steps.

Because most of us rarely find and take the time to organize the way we do things in everyday life, we’re constantly overwhelmed with tasks and things to keep track of. This gives us the impression of being so impossibly busy all the time that we can’t imagine doing anything else on top of our daily routine. We feel lucky if we get to enjoy a weekend with our family, yet we never stop worrying about all the things waiting for us back at work.

Every single one of us has exactly 24 hours per day, no more and no less. Find your own way to use time wisely and you will be surprised at the returns.

List of Free Java Decompilers

A list of free downloadable decompilers for Java. They convert class files back into readable Java source code.


DJ Java Decompiler

The aim of this project is to develop a platform-independent Java decompiler that also has options to obfuscate the class file. The project takes a class file as input, decompiles it, and produces the source file.

NavExpress DJ Java Decompiler

With NavExpress DJ Java Decompiler you can decompile Java CLASS files and save them in text or other formats. It's simple and easy.

Mocha, the Java Decompiler

The distribution archive (file "mocha-b1.zip") may be distributed freely, provided its contents ("mocha.zip" and this file, "readme.txt") are not tampered with in any way.

JCavaj Java Decompiler

JCavaj Java Decompiler is a free Java-based Java decompiler. It reconstructs the original source code from a compiled binary CLASS file. You can decompile Java applets, JAR, and ZIP files, producing accurate Java source code.

HomeBrew Java Decompiler

Have you ever lost the source code to a Java program and thought there was no way to get your code back? Well fret no longer, HomeBrew Decompiler to the rescue! It's still far from perfect, but hopefully it will be able to provide enough for you to reconstruct your lost source file.

Home Page of Jad - the fast Java decompiler

Jad is free for non-commercial use, but since version 1.5.6 it is no longer free for commercial use. This means Jad cannot be included in software products (especially decompilers) without the author's prior permission.

Dava: A tool-independent decompiler for Java

Dava is a decompiler for arbitrary Java bytecode. It can be used to decompile bytecode produced by Java compilers, compilers for other languages (AspectJ, SML, C) that generate Java bytecode and tools like Java bytecode obfuscators, instrumentors and optimizers.

Jdec: Java Decompiler

JReversePro is a Java decompiler/disassembler written entirely in Java. This reverse engineering utility is issued under the GNU GPL. The ultimate objective of the project is to provide a decompiler that generates a Java object-based structure that can be programmatically inspected using a specific API.

Java Optimize and Decompile Environment (JODE)

JODE is a Java package containing a decompiler and an optimizer for Java. The package is freely available under the GNU GPL. The bytecode package and the core decompiler are now under the GNU Lesser General Public License, so you can integrate them into your project.

What's new in Java 6.0 - called Mustang.

While there are no significant changes at the language level, Mustang comes with a bunch of enhancements in other areas, such as the core libraries, XML, and the desktop. Most of the features apply to both the J2SE and J2EE platforms.

This article highlights a few of the main enhancements that Java 6 brings to the platform and the developers who use it:

• Common Annotations
• Scripting in the Java Platform
• JDBC 4.0
• Monitoring and Management
• Managing the File System

Common Annotations:

The aim of having a Common Annotations API in the Java platform is to avoid applications defining their own annotations, which would result in a large number of duplicates. JSR-250 is targeted to cover annotations in both the standard and enterprise environments. The packages that contain the annotations are javax.annotation and javax.annotation.security.

Scripting in the Java Platform:

Java 6 provides a common scripting framework for integrating various scripting languages into the Java platform. Most popular scripting languages, such as JavaScript, PHP, BeanShell, and Pnuts, can be seamlessly integrated with the Java platform.

Intercommunication between scripting languages and Java programs is now possible because of this: scripting code can access the set of Java libraries, and Java programs can directly embed scripting code. Java applications also have the option of compiling and executing scripts, which leads to good performance, provided the scripting engine supports this feature.

There are two core components of the Scripting engine namely:
• Language Bindings
• The Scripting API

For example:
You obtain a new ScriptEngine object from a ScriptEngineManager, as shown here:
ScriptEngineManager manager = new ScriptEngineManager();
ScriptEngine engine = manager.getEngineByName("js");

Each scripting language has its own unique identifier. The "js" here means you're dealing with JavaScript.

Now you can start having some fun. Interacting with a script is easy and intuitive. You can assign scripting variables using the put() method and evaluate the script using the eval() method, which returns the most recently evaluated expression processed by the script. And that pretty much covers the essentials. Here's an example that puts it all together:

engine.put("cost", 1000);
String decision = (String) engine.eval(
    "if (cost >= 100) { " +
    "    decision = 'Ask the boss'; " +
    "} else { " +
    "    decision = 'Buy it'; " +
    "}");
assert ("Ask the boss".equals(decision));

You can do more than just pass variables to your scripts; you can also invoke Java classes from within them. The importPackage() function enables you to import Java packages, as shown here:

engine.eval("importPackage(java.util); " +
            "today = new Date(); " +
            "print('Today is ' + today);");

Another cool feature is the Invocable interface, which lets you invoke a function by name within a script. This lets you write libraries in scripting languages, which you can use by calling key functions from your Java application. You just pass the name of the function you want to call and an array of Objects for the parameters, and you're done! Here's an example:

engine.eval("function calculateInsurancePremium(age) {...}");
Invocable invocable = (Invocable) engine;
Object result = invocable.invokeFunction("calculateInsurancePremium",
                                         new Object[] {37});

You actually can do a fair bit more than what I've shown here. For example, you can pass a Reader object to the eval() method, which makes it easy to store scripts in external files, or bind several Java objects to JavaScript variables using a Map-like Bindings object. You can also compile some scripting languages to speed up processing. But you probably get the idea that the integration with Java is smooth and well thought-out.

JDBC 4.0:

Java Database Connectivity (JDBC) allows application programs to interact with a database to access relational data. JDBC provides a pluggable architecture in which any Java-compliant driver can be plugged in, even at run time. The JDBC API provides functionality to establish a connection to the back-end database and execute queries to get results. The version of JDBC that ships with Mustang is JDBC 4.0, and it is one of the areas most affected by the new set of features:
• No need for Class.forName("DriverName")
• Changes in Connection and Statement Interface
• Enhanced SQL Exception Handling
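As a small taste of the enhanced exception handling: in JDBC 4.0, SQLException implements Iterable, so a whole chain of exceptions can be walked with a plain for-each loop instead of the old getNextException() loop. A sketch using hand-built exceptions (no database needed):

```java
// Build a small chain of exceptions by hand, as a JDBC driver would.
java.sql.SQLException first =
        new java.sql.SQLException("connection refused", "08001");
first.setNextException(new java.sql.SQLException("retry failed", "08006"));

// JDBC 4.0: iterate the chain (and any causes) directly.
StringBuilder states = new StringBuilder();
for (Throwable t : first) {
    states.append(((java.sql.SQLException) t).getSQLState()).append(' ');
}
// states now contains "08001 08006 "
```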

Monitoring and Management:

Java 6 has made some good progress in the field of application monitoring, management, and debugging. The Java Monitoring and Management Console, or JConsole, is well known to system administrators who deploy Java applications.

Java 6 enhances JConsole in several ways, making it both easier to use and more powerful. The graphical look has been improved; you can now monitor several applications in the same JConsole instance, and the summary screen has been redesigned. It now displays a graphical dashboard of the key statistics. You also can now export any graph data in CSV form for further analysis in a spreadsheet.

One great thing about application monitoring in Java 6 is that you don't need to do anything special to your application to use it. In Java 5, you needed to start any application that you wanted to monitor with a special command-line option (-Dcom.sun.management.jmxremote). In Java 6, you can monitor any application that is running in a Java 6 VM.

Java 6 comes with sophisticated thread-management and monitoring features as well. The Java 6 VM can now monitor applications for deadlocked threads involving object monitors and java.util.concurrent ownable synchronizers. If your application seems to be hanging, JConsole lets you check for deadlocks by clicking on the Detect Deadlock button in the Threads tab.
At a lower level, Java 6 helps resolve a common hard-to-isolate problem: the dreaded java.lang.OutOfMemoryError. In Java 6, an OutOfMemoryError will not just leave you guessing; it will print out a full stack trace so that you can have some idea of what might have caused the problem.

Managing the File System:


Java 6 gives you much finer control over your local file system. For example, it is now easy to find out how much free space is left on your hard drive. The java.io.File class has the following three new methods to determine the amount of space available (in bytes) on a given disk partition:

File homeDir = new File("/home/john");
System.out.println("Total space = " + homeDir.getTotalSpace());
System.out.println("Free space = " + homeDir.getFreeSpace());
System.out.println("Usable space = " + homeDir.getUsableSpace());

As their names would indicate, these methods return the total amount of disk space on the partition (getTotalSpace()), the amount of currently unallocated space (getFreeSpace()), and the amount of available space (getUsableSpace()), after taking into consideration OS-specific factors such as write permissions or other operating-system constraints. According to the documentation, getUsableSpace() is more accurate than getFreeSpace().

When I ran this code on my machine, it produced the following output:

Total space = 117050585088
Free space = 100983394304
Usable space = 94941515776

File permissions are another area where Java 6 brings new enhancements. The java.io.File class now has a set of functions allowing you to set the readable, writable, and executable flags on files in your local file system, as you would with the Unix chmod command. For example, to set read-only access to a file, you could do the following:

File documentsDir = new File("document");
documentsDir.setReadable(true);
documentsDir.setWritable(false);
documentsDir.setExecutable(false);

This will set read-only access for the owner of the file. You can also modify the access rights for all users by setting the second parameter (ownerOnly) to false:

documentsDir.setReadable(true, false);
documentsDir.setWritable(false, false);
documentsDir.setExecutable(false, false);

Naturally, this will work only if the underlying operating system supports this level of file permissions.

Just a Few of Many

Java 6 offers many other new features that I haven't mentioned here, such as support for JAX-WS Web services and JAXB 2.0 XML binding, improvements in the Swing and AWT APIs, and collections enhancements such as sorted sets and maps with bidirectional navigation. Try it out!

Best Practices for Model-Driven Software Development

Model-driven software development no longer belongs to the fringes of the industry but is being applied in more and more software projects with great success. In this article, Sven Efftinge, Peter Friese, and Jan Köhnlein pass on their own contribution to MDD best practices, based on the experiences they have gathered over the past few years.


Best practices covered include:
  • Separate the generated and manual code from each other
  • Don't check-in generated code
  • Integrate the generator into the build process
  • Use the resources of the target platform
  • Generate clean code
  • Use the compiler
  • Talk in Metamodelese
  • Develop DSLs iteratively
  • Develop model-validation iteratively
  • Test the generator using a reference model
  • Select suitable technology
  • Use textual syntax correctly
  • Use Configuration By Exception
  • Teamwork loves textual DSLs
  • Use model-transformation to reduce complexity
  • Generate towards a comprehensive platform
The authors conclude the article by recommending that, despite these practices, the most important lesson is to be pragmatic:

DSLs and code generators can, when used appropriately, be an immensely useful tool. But the focus should always be the problem to be solved. In many cases, it makes sense to describe certain, but not all, aspects using DSLs. Projects which decide from the get-go to follow a model-driven approach are ignoring this last piece of advice. Are you doing MDD? What have been your experiences?

For more information: Best Practices for Model-Driven Software Development

Java Software Quality - Tools and Techniques

Software quality metrics are good. Automated software quality metrics are better. John Smart, author of the soon-to-be-released book "Java Power Tools", will discuss a number of open source tools that can automate code quality and test coverage reporting for your project, and how to integrate them into your development process.

More importantly, John will cover how these tools can be used to reduce bugs, speed up delivery, improve the quality of your project, and hone your team's skills. In this presentation, John will be discussing tools such as Checkstyle, PMD, FindBugs, Crap4j, Cobertura and Selenium, and looking at quality metrics in the real world - how they work best as a team learning tool, and poorly as a measure of individual performance.

Resources: Software Quality NZ

Simple Sprint Backlog Example

Screenshot of the simple Scrum product backlog


You can use this Excel sample as a template for your own backlogs. Feel free to copy, reuse, or even resell this example, though it would be very kind of you not to delete the link to this site from the sheet.

Resource: Simple sprint backlog example (XLS)

Simple Product Backlog Example

Screenshot of the simple Scrum product backlog
This is a simple product backlog example for Scrum. You can use this Excel example as a template for your own backlogs.

Resource: Minimal product backlog template(XLS)

Test Driven Development - Best practices

Here are my practices when applying TDD; your ideas are welcome:

  • Write a simple test first, then write the code. After that, write tests for more complex functionality
  • Keep test code as simple as possible
  • Run the tests immediately whenever you finish writing code
  • If you are spending too much time writing code, break the functionality into smaller pieces and apply TDD to each piece
  • Whenever test code or production code smells, refactor it immediately
  • After writing code, use a coverage tool to make sure the production code is 100% covered by test methods. If some piece of code is not covered, your implementation has an issue!
  • Use mock objects if needed
  • Automate the unit tests and integrate them into a CI server
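As a toy illustration of the first two bullets (no test framework here, and the Calculator class is invented for the example), the "tests" are plain assertions written before the code they exercise:

```java
public class CalculatorTest {
    static class Calculator {
        int add(int a, int b) {
            return a + b; // written only after the tests below demanded it
        }
    }

    public static void main(String[] args) {
        Calculator calc = new Calculator();
        // 1. The simplest test first
        check(calc.add(0, 0) == 0, "0 + 0 should be 0");
        // 2. Then a test for more functionality
        check(calc.add(2, 3) == 5, "2 + 3 should be 5");
        System.out.println("all tests passed");
    }

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }
}
```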

Pair review process - It should be better than pair programming

I am a supporter of the XP process with unit tests, TDD, and other practices, but without pair programming. I learned the XP practices last year, reading many articles with statistical data about the benefits of pair programming. I used to believe that pair programming was a good way to improve code quality and share knowledge, and that senior developers could teach junior developers through it, but I changed my mind recently. There are several reasons why I think it is hard to apply pair programming in practice (at least in my work and with the associates I have worked with):

- Developers work at different speeds. The clever one can interfere too much with the thinking of the other, so the developers are not peers but end up in a teacher-student relationship. With a teacher-student relationship, it takes time for the senior developer to share a workspace with a less mature one, and there are other types of training that serve better than pair programming.

- Pair programming should not be applied in a 'political' or unfriendly environment. People tend to protect themselves and try not to show their weaknesses to others.

- Some people just like to work alone. They do not want any involvement while they code until their work is finished.

- We do not need two people for simple tasks.

- Sometimes people need to relax, and the other member will force them to work while they are tired. Some people can work for four hours without a break while others cannot; the different levels of focus can tire people out and turn them against pair programming.

For the above reasons, I do not see how pair programming can be applied to many teams, especially large teams, because there are many styles of working, and any failure in one pair can impact the others. However, without closer cooperation among developers than we have now, there are also problems with quality and knowledge management. The result is clear: code is not reviewed well (code review sessions are not effective because people only select random source files to review in meetings), and people cannot share coding and domain knowledge. The simple improvement is full code review of all source code: two people who understand the same code base perform peer review regularly. Instead of dividing into pairs for coding, we let the pairs do review only; one person codes, and the other reviews that work at the end of the day.

Pair review should be performed every day, because when a pair leaves work unreviewed on a daily basis, they tend to select only random files for review when managers request it, and the effectiveness is low. Reviewing each other's work daily takes a lot of time at the beginning, but it gets faster over time as people become familiar with each other's source code and the business domain. We are also sure that each line of source code is 'done' by the pair (one codes and one inspects); if there is any conflict within the pair, the technical lead joins in to resolve it. One drawback of pair review compared with pair programming is the response time for fixing defects; pair programming is faster there, but I believe the benefits of pair review are worth it, and it overcomes the limitations of pair programming so that it can be applied in a much wider range of situations.

How To Optimize Your Site With HTTP Caching

What is Caching?

Caching is a great example of the ubiquitous time-space tradeoff in programming. You can save time by using space to store results.

In the case of websites, the browser can save a copy of images, stylesheets, javascript or the entire page. The next time the user needs that resource (such as a script or logo that appears on every page), the browser doesn’t have to download it again. Fewer downloads means a faster, happier site.

Here’s a quick refresher on how a web browser gets a page from the server:

HTTP_request.png

1. Browser: Yo! You got index.html?
2. Server: (Looking it up…)
3. Server: Totally, dude! It’s right here!
4. Browser: That’s rad, I’m downloading it now and showing the user.

(The actual HTTP protocol may have minor differences; see Live HTTP Headers for more details.)

Caching’s Ugly Secret: It Gets Stale

Caching seems fun and easy. The browser saves a copy of a file (like a logo image) and uses this cached (saved) copy on each page that needs the logo. This avoids having to download the image ever again and is perfect, right?

Wrongo. What happens when the company logo changes? Amazon.com becomes Nile.com? Google becomes Quadrillion?

We’ve got a problem. The shiny new logo needs to go with the shiny new site, caches be damned.

So even though the browser has the logo, it doesn’t know whether the image can be used. After all, the file may have changed on the server and there could be an updated version.

So why bother caching if we can’t be sure if the file is good? Luckily, there’s a few ways to fix this problem.

Caching Method 1: Last-Modified

One fix is for the server to tell the browser what version of the file it is sending. A server can return a Last-modified date along with the file (let’s call it logo.png), like this:

Last-modified: Fri, 16 Mar 2007 04:00:25 GMT
File Contents (could be an image, HTML, CSS, Javascript...)

Now the browser knows that the file it got (logo.png) was created on Mar 16 2007. The next time the browser needs logo.png, it can do a special check with the server:

HTTP-caching-last-modified_1.png

1. Browser: Hey, give me logo.png, but only if it’s been modified since Mar 16, 2007.
2. Server: (Checking the modification date)
3. Server: Hey, you’re in luck! It was not modified since that date. You have the latest version.
4. Browser: Great! I’ll show the user the cached version.

Sending the short “Not Modified” message is a lot faster than needing to download the file again, especially for giant javascript or image files. Caching saves the day (err… the bandwidth).

Caching Method 2: ETag

Comparing versions with the modification time generally works, but could lead to problems. What if the server’s clock was originally wrong and then got fixed? What if daylight savings time comes early and the server isn’t updated? The caches could be inaccurate.

ETags to the rescue. An ETag is a unique identifier given to every file. It’s like a hash or fingerprint: every file gets a unique fingerprint, and if you change the file (even by one byte), the fingerprint changes as well.
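One common way to build such a fingerprint is to hash the file contents. This is only a sketch (servers may instead derive ETags from things like inode, size, and modification time), but it shows how changing a single byte changes the tag:

```java
import java.security.MessageDigest;

public class EtagDemo {
    public static void main(String[] args) throws Exception {
        // Two versions of a file that differ by a single byte
        System.out.println(etag("logo-bytes-v1".getBytes()));
        System.out.println(etag("logo-bytes-v2".getBytes()));
    }

    // An ETag can be any string that uniquely identifies the content;
    // here we use the hex form of an MD5 digest of the bytes
    static String etag(byte[] content) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(content);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }
}
```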

Instead of sending back the modification time, the server can send back the ETag (fingerprint):

ETag: ead145f
File Contents (could be an image, HTML, CSS, Javascript...)

The ETag can be any string which uniquely identifies the file. The next time the browser needs logo.png, it can have a conversation like this:

HTTP_caching_if_none_match.png

1. Browser: Can I get logo.png, if nothing matches tag “ead145f”?
2. Server: (Checking fingerprint on logo.png)
3. Server: You’re in luck! The version here is “ead145f”. It was not modified.
4. Browser: Score! I’ll show the user my cached version.

Just like Last-Modified, ETags solve the problem of comparing file versions, except that “if-none-match” is a bit harder to work into a sentence than “if-modified-since”. But that’s my problem, not yours. ETags work great.

Caching Method 3: Expires

Caching a file and checking with the server is nice, except for one thing: we are still checking with the server. It’s like analyzing your milk every time you make cereal to see whether it’s safe to drink. Sure, it’s better than buying a new gallon each time, but it’s not exactly wonderful. And how do we handle this milk situation? With an expiration date!

If we know when the milk (logo.png) expires, we keep using it until that date (and maybe a few days longer, if you’re a college student). As soon as it expires, we contact the server for a fresh copy, with a new expiration date. The header looks like this:

Expires: Tue, 20 Mar 2007 04:00:25 GMT
File Contents (could be an image, HTML, CSS, Javascript...)

In the meantime, we avoid even talking to the server if we’re in the expiration period:

HTTP_caching_expires.png

There isn’t a conversation here; the browser has a monologue.

1. Browser: Self, is it before the expiration date of Mar 20, 2007? (Assume it is).
2. Browser: Verily, I will show the user the cached version.

And that’s that. The web server didn’t have to do anything. The user sees the file instantly.

Caching Method 4: Max-Age

Oh, we’re not done yet. Expires is great, but it has to be computed for every date. The max-age header lets us say “This file expires 1 week from today”, which is simpler than setting an explicit date.

Max-Age is measured in seconds. Here’s a few quick second conversions:

  • 1 day in seconds = 86400
  • 1 week in seconds = 604800
  • 1 month in seconds = 2629000
  • 1 year in seconds = 31536000 (effectively infinite on internet time)

Bonus Header: Public and Private

The cache headers never cease. Sometimes a server needs to control when certain resources are cached.

  • Cache-control: public means the cached version can be saved by proxies and other intermediate servers, where everyone can see it.
  • Cache-control: private means the file is different for different users (such as their personal homepage). The user’s private browser can cache it, but not public proxies.
  • Cache-control: no-cache means the file should not be cached. This is useful for things like search results where the URL appears the same but the content may change.

However, be wary that some cache directives only work on newer HTTP 1.1 browsers. If you are doing special caching of authenticated pages then read more about caching.

Ok, I’m Sold: Enable Caching

We’ve seen the following headers that really help our caching:

  • Last-modified:
  • ETag:
  • Expires:
  • Cache-control: max-age=86400

Now let’s put it all together and get Apache to return the right headers. If your resource changes:

  • Daily or more: Use Last-Modified or ETag. Apache does this for you automatically!
  • Weekly-monthly: Use max-age of a day or week. Put this .htaccess file in the directory you want to cache:

#Create filter to match files you want to cache
<FilesMatch "\.(ico|jpg|jpeg|png|gif|js|css)$">
Header add "Cache-Control" "max-age=604800"
</FilesMatch>

  • Essentially never: Use a far-future Expires date along with a year-long max-age:

<FilesMatch "\.(ico|jpg|jpeg|png|gif|js|css)$">
Header add "Expires" "Mon, 28 Jul 2014 23:30:00 GMT"
Header add "Cache-Control" "max-age=31536000"
</FilesMatch>
How can a file never change? Simple. Put different versions of the file in different directories. For instacalc, I keep the core files of each build in a unique directory, such as “build490”. When I’m using build490, index.html pulls all images, stylesheets, and scripts from that directory. I can cache the files in build490 forever because build490 will never change.

If I have a new version (build491… how creative), index.html will point to that folder instead. I’ve created scripts to take care of this find/replace housekeeping, though you can use URL rewriting rules as well. I prefer to have the HTML point to the actual file. Remember that index.html cannot be cached forever, since it changes every now and then to point to new directories. So for the “loader” file, I’m using the regular Last-Modified caching strategy. I think it’s fine to have that small “304 Not Modified” communication with the server; we still avoid sending requests for all the files in the build490 folder. If you want, monkey around and give the index.html file a small expiration (say a few hours).

Final Step: Check Your Caching

To see whether your files are cached, do the following:

  • Online: Examine your site in the cacheability query (green means cacheable)
  • In Browser: Use Firebug or Live HTTP Headers to see the HTTP response (304 Not Modified, Cache-Control, etc.). In particular, I’ll load a page and use Live HTTP Headers to make sure no requests are being sent to load images, logos, and other cached files. If you press ctrl+refresh, the browser will force a reload of all files.

Remember: Creating unique URLs is the simplest way to caching heaven. Have fun streamlining your site!

Mobile Development MindMap

Over time, I get many people emailing me asking how to start mobile development. Some are students and some are hobbyists. There are also many companies wondering where to start. The first step is to research all the various development routes. But what are they? I have put together a quick mind map to set people on their way…

mindmap1small.gif

For more information: Quick Mind Map

Different Types of OutOfMemoryError You Can Encounter in Your Java Application

Java applications throw an OutOfMemoryError when the Java virtual machine does not have sufficient memory to create new objects. There are different types of OutOfMemoryError that can occur in a Java application. In this article, let’s look at the different types, their possible causes, and some solutions.

The different types of OutOfMemoryError that can happen in your Java application are:

1) Heap memory error

2) Non-heap memory error

3) Native memory error

Heap memory error

java.lang.OutOfMemoryError: Java heap space

Heap memory is the runtime data area from which memory for all class instances and arrays is allocated. The heap expands dynamically. When a Java application starts, the heap is initialized to its default minimum size. When new objects are created, memory is allocated within the heap. If the heap is not sufficient for creating new objects, it is automatically expanded by a fixed amount. This process of expanding the heap continues until the heap reaches its default maximum value.

The default minimum and maximum heap sizes are 2 MB and 64 MB. In server mode, the defaults are 32 MB and 128 MB respectively (the JVM can be run in server mode by specifying the -server flag).

You can set the minimum and maximum heap sizes using the following flags:

-Xms<size in bytes>: sets the initial size of the Java heap.

-Xmx<size in bytes>: sets the maximum size to which the Java heap can grow.
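To see what limits actually took effect, here is a small sketch (my own, not from the article) that queries the running VM:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects the -Xmx ceiling; totalMemory() is the
        // heap currently committed by the VM (it grows toward the max)
        System.out.println("max heap bytes = " + rt.maxMemory());
        System.out.println("committed heap bytes = " + rt.totalMemory());
    }
}
```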

The reasons for the heap space running out of memory can be numerous. One simple reason is that your application is big and cannot fit into the default VM heap space; in that case you can just specify larger sizes for the -Xms and -Xmx parameters.

Another reason can be that your application has a memory leak. In Java, once memory is allocated for objects, that memory is later freed by a process called garbage collection. The garbage collector can only clean up objects that are no longer in use. If you are no longer using an object but still have a reference holding on to it, that object will not be garbage collected. Diagnosing memory leaks can be tricky; you can use profiling tools like OptimizeIt to do it.
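A classic shape for such a leak (the class and field names are invented for illustration) is a long-lived static collection that keeps references to objects nobody will use again:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakSketch {
    // Lives as long as the class does, so everything it references
    // stays reachable and can never be garbage collected
    static final List<byte[]> cache = new ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024];
        // The buffer is only needed during the request, but the forgotten
        // reference below pins it in memory indefinitely
        cache.add(buffer);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        System.out.println("retained buffers: " + cache.size());
    }
}
```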

Non-heap memory error

java.lang.OutOfMemoryError: PermGen space

The memory in the virtual machine is divided into a number of regions. One of these regions is PermGen, an area of memory used to (among other things) load class files. Unlike the heap, the size of this region is fixed: it does not change while the VM is running. You can specify its size with a command-line switch: -XX:MaxPermSize. The default is 64 MB on the Sun VMs.

If there's a problem with garbage collecting classes and if you keep loading new classes, the VM will run out of space in that memory region, even if there's plenty of memory available on the heap. Setting the -Xmx parameter will not help: this parameter only specifies the size of the total heap and does not affect the size of the PermGen region.

One reason for a PermGen space error can be that you have loaded too many classes, in which case you can just increase the PermGen space using the above flag.

It can also be that you are using many classloaders which have loaded the same classes thus duplicating the classes in memory. Such problems usually occur when redeploying web applications without restarting the application server.

Application servers such as Glassfish allow you to write an application (.ear, .war, etc) and deploy this application with other applications on this application server. Should you feel the need to make a change to your application, you can simply make the change in your source code, compile the source, and redeploy the application without affecting the other still running applications in the application server: you don't need to restart the application server. This mechanism works fine on Glassfish and other application servers (e.g. Java CAPS Integration Server).

The way this works is that each application is loaded using its own classloader. Simply put, a classloader is a special class that loads .class files from jar files. When you undeploy the application, the classloader is discarded, and it, along with all the classes it loaded, should be garbage collected sooner or later.

Somehow, something may hold on to the classloader however, and prevent it from being garbage collected. And that's what's causing the java.lang.OutOfMemoryError: PermGen space exception. Such problems can be very difficult to solve. Read this blog for more details.

Native memory error

Whether it is heap space or PermGen space, everything ultimately has to be allocated by the operating system, either in RAM or in virtual memory. What happens if the entire virtual memory is full? The OS may fail to allocate memory to the JVM. In this case a native memory error is thrown, such as:

java.lang.OutOfMemoryError: request <size> bytes for <reason>. Out of swap space?

The reason can be that the virtual memory configured in the OS is too low, or that many other running applications are consuming lots of memory. In either case, you can try increasing the virtual memory; the procedure varies from OS to OS.

In Windows, virtual memory is allocated as a page file on disk. When virtual memory is full, Windows will automatically increase it, displaying a message that it is increasing your virtual memory and that, during this process, memory requests by some applications may be denied.

Increasing the virtual/swap memory on Unix/Linux operating systems can be harder, because on these systems the virtual memory is allocated on a separate partition called the swap partition. You will need to provide an additional swap partition or resize the existing one.

There are also some special cases where an OutOfMemoryError may be thrown. For example, the toArray method in Collection cannot convert a collection to an array if the number of elements is greater than Integer.MAX_VALUE. In that case you will get:

java.lang.OutOfMemoryError: Required array size too large


Make Your Java Applications Run Faster - Part 4 - Methods, Synchronization, Instantiation, Casting, Exceptions and Threads

In this final part of the series "Make Your Java Applications Run Faster" we shall see how you can optimize methods, synchronization, instantiation, casting, exceptions, and threads.

Method Call Optimization: There are two basic categories of methods: those that can be statically resolved by the compiler and the rest, which must be dynamically resolved at run time. To statically resolve a method, the compiler must be able to determine absolutely which method should be invoked. Methods declared as static, private, and final, as well as all constructors, may be resolved at compile time because the class to which the method belongs is known. A statically resolved method executes more quickly than a dynamically resolved method because, for a statically resolved method, the binding is done by the compiler, while for other methods the binding happens at runtime, depending on the object instance being referred to.

Even for statically resolved methods there is considerable execution overhead: the local variables must be saved and the method arguments pushed onto the stack. So it is better to eliminate small methods if there is only one caller. You may not be able to eliminate method calls in your source code, but you can at least minimize the method calls at the bytecode level by inlining the methods. You can instruct the compiler to inline small methods by using the -O flag.

Inlining a method call inserts the code for the method directly into the code making the method call. This eliminates the overhead of the method call. For a small method this overhead can represent a significant percentage of its execution time. Note that only methods declared as either private, static, or final can be considered for inlining, because only these methods are statically resolved by the compiler. Also, synchronized methods won't be inlined. The compiler will only inline small methods typically consisting of only one or two lines of code.

The next best option is to convert method and especially interface invocations to static, private, or final calls. If you have methods in your class that don't make use of instance variables or call other instance methods, they can be declared static. If a method doesn't need to be overridden, it can be declared final. And methods that shouldn't be called from outside the class should always be declared private anyway.

Minimize Synchronization: A call to a synchronized method is very expensive; in fact, it is the most expensive kind of method call. Not only does it involve acquiring the lock on an object's monitor, but there is always the potential that the call will be delayed waiting for that monitor. And when the method returns, the monitor must be released, which takes more time.

So, it is always better to reduce the usage of synchronized methods and blocks as far as possible, and to make sure the operations actually require synchronization. Be careful though; this is not an area of your program to be overly ambitious in optimizing, as the problems you can cause can be difficult to track down.

Also, try to use synchronized methods in place of synchronized blocks where possible, because invoking a synchronized method is slightly faster than entering a synchronized block; synchronized method invocation is handled differently in the bytecode.

Also, a call to a synchronized method when the monitor is already owned by the thread executes somewhat faster than when the monitor isn't owned. So, if you have a heavily synchronized class that calls lots of synchronized methods from other synchronized methods, you may want to consider having a few synchronized methods delegate the work to private non-synchronized methods to avoid the overhead of reacquiring the monitor.
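A sketch of that delegation idea (the class is invented for illustration): the public synchronized entry point takes the monitor once, then calls plain private helpers rather than other synchronized methods:

```java
public class Counter {
    private int value;

    // Acquires the monitor once, then delegates to an unsynchronized helper
    public synchronized void addTwice(int n) {
        addInternal(n);
        addInternal(n);
    }

    // Callers must already hold the lock on this object
    private void addInternal(int n) {
        value += n;
    }

    public synchronized int get() {
        return value;
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.addTwice(5);
        System.out.println(c.get());
    }
}
```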

Save Casting: Casting object references to refer to different object types in Java (object casts) can get pretty dense -- especially if you work with a lot of lists or other collection classes. It turns out that any object cast that can't be resolved at compile time is so expensive that it is better to save the cast object in a local variable than to repeat the cast.

So instead of writing:


boolean equals(Object obj) {
    if (obj instanceof Rectangle)
        return ((Rectangle) obj).x == this.x
            && ((Rectangle) obj).y == this.y
            && ((Rectangle) obj).width == this.width
            && ((Rectangle) obj).height == this.height;
    return false;
}


Do the casting only once:


boolean equals(Object obj) {
    if (obj instanceof Rectangle) {
        Rectangle rect = (Rectangle) obj;
        return rect.x == this.x
            && rect.y == this.y
            && rect.width == this.width
            && rect.height == this.height;
    }
    return false;
}


If the cast is to an interface, it is probably twice as slow as casting to a class.
In fact, there is one type of cast that can take much longer to execute. If you have an object hierarchy like

interface I {}
class Super {}
class Sub extends Super implements I {}

then the following cast, to an interface implemented by a subclass, takes anywhere from two to three times as long as casting to the subclass.

Super su = new Sub();
I i = (I) su;

The further the separation between the interface and the subclass (that is, the further back in the interface inheritance chain the cast interface is from the implemented interface), the longer the cast takes to resolve.

Also beware of unnecessary uses of instanceof. The following cast is resolved by the compiler and produces no runtime code to implement the unnecessary cast.

Rectangle q = new Rectangle ();
Rectangle p = (Rectangle) q;

However,
Rectangle p = new Rectangle ();
boolean b = (p instanceof Rectangle);


cannot be resolved by the compiler because instanceof must return false if p == null.
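A quick check of the null behavior described above (using java.awt.Rectangle purely for illustration):

```java
import java.awt.Rectangle;

public class InstanceofNull {
    public static void main(String[] args) {
        Rectangle p = null;
        // instanceof is defined to return false for null references,
        // which is why the compiler cannot fold the check away.
        System.out.println(p instanceof Rectangle); // prints "false"
    }
}
```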

Casting primitive data types is simpler and cheaper than casting objects because the type of the value being cast is known at compile time (an int, for example, can never turn out to be an instance of some subclass). However, since int is the natural size used by the JVM (it is the common data type that all other numeric types are directly converted to and from), beware of using the other data types: casting from a long to a short requires first casting the long to an int and then the int to a short.
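A small sketch of that two-step narrowing (class name is illustrative): a direct long-to-short cast produces the same bits as going through an int explicitly, which is what happens under the hood.

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        long big = 100_000L;
        // Conceptually long -> int -> short, as described above:
        short direct  = (short) big;        // single cast in source...
        short twoStep = (short) (int) big;  // ...equivalent two-step form
        System.out.println(direct == twoStep); // prints "true"
    }
}
```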


Reuse Object Instances: Creating an object is fairly expensive in terms of CPU cycles. On top of that, discarded objects will need to be garbage collected at some point. It takes about as long to garbage collect an object as it takes to create one.


Also, the longer the hierarchy chain for the object, the more constructors that must be called. This adds to the instantiation overhead. If you add extra layers to your class hierarchy for increased abstraction, you'll also increase the instantiation time.

The best option is to avoid instantiating objects in tight loops. Where possible, reuse an existing object.
The loop
for (int i = 0; i < limit; i++) {
    ArrayList list = new ArrayList();
    // do something with list...
}

would be faster if written as
ArrayList list = new ArrayList();
for (int i = 0; i < limit; i++) {
    list.clear();
    // do something with list...
}


Also, it is better not to explicitly initialize instance variables to their default values: when an object is created, all of its instance variables are already initialized to their defaults by the JVM. Explicitly doing the same duplicates the default initialization, generates additional bytecodes, and makes instantiation take longer. (There are rare cases where explicit initialization is needed, such as reinitializing a variable that was altered from its default value during the superclass's constructor.)
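To make the point concrete, here is an illustrative pair of classes (names are made up): both end up in the same state, but the first one's constructor carries redundant assignment bytecodes.

```java
// Explicit assignments below duplicate the JVM's default initialization
// and add extra bytecode to the constructor.
class Wasteful {
    int count = 0;      // redundant: already 0 by default
    Object ref = null;  // redundant: already null by default
}

// Same observable state, no redundant initialization.
class Lean {
    int count;          // defaults to 0
    Object ref;         // defaults to null
}
```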

Optimizing Exception Handling: The try/catch/finally mechanism is implemented efficiently in the JVM. When an exception is thrown, only a quick check is done against the exception table of each method in the call chain to determine whether or not the exception is handled by that method.
A try {} statement itself is also cheap: few bytecodes are generated, and all the try {} statement adds to your code as overhead is a goto bytecode to skip the catch () {} block and one or more entries in the method's exception table. A finally statement adds a little more overhead, as it is implemented as an inline subroutine.


However, instantiating a new exception can be expensive. An exception generates and stores a snapshot of the stack at the time the exception was instantiated. This is where most of the time for instantiating an exception goes. If you're throwing exceptions regularly as part of the normal operation of your code, you may want to rethink your design.


But if you do have a need to do this (such as returning a result from several calls deep), and you are simply going to catch and handle the exception yourself, then repeatedly instantiating an exception can be a waste of time. Instead, it is possible to reuse an exception: when you are not concerned with logging the exception and are only interested in returning a result from several calls deep, you can keep the exception in a class variable, set the required values on it each time, and throw the reused instance.
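A sketch of that reuse pattern, with illustrative names. Overriding fillInStackTrace() suppresses the expensive stack snapshot; note that a shared exception instance carries no meaningful stack trace and, as written here, is not thread-safe.

```java
// Reusable "control flow" exception: the result rides along as a field.
class SearchDone extends Exception {
    int result;

    // Skip the expensive stack snapshot for the reusable instance.
    @Override
    public synchronized Throwable fillInStackTrace() {
        return this;
    }
}

class Finder {
    // Class variable holding the single reusable exception instance.
    private static final SearchDone DONE = new SearchDone();

    static int find(int[] data, int target) {
        try {
            search(data, target, 0);
            return -1; // not found
        } catch (SearchDone d) {
            return d.result; // result carried back from several calls deep
        }
    }

    private static void search(int[] data, int target, int i) throws SearchDone {
        if (i >= data.length) return;
        if (data[i] == target) {
            DONE.result = i;  // change the required values...
            throw DONE;       // ...and throw the reused exception
        }
        search(data, target, i + 1);
    }
}
```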

Reduce Threads: Using threads costs both memory and CPU cycles. Each running thread has both a native/"C" stack and a Java stack, which take some memory. There is also an overhead for switching between threads (a context switch), which varies significantly between platforms. Moreover, if the threads are in a synchronized method or block, a context switch also involves releasing and acquiring an object's monitor, so context switches are much slower when threads are inside synchronized methods or blocks.

There are many places in your code where threads can be minimized or avoided. For example, a Java game could easily be implemented with separate threads for painting, key handling, and AI. In fact, painting, key handling, and AI can all be handled by a single thread; this makes the code slightly more complex but avoids the overhead associated with context switching between threads.
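A minimal single-threaded loop combining the three concerns mentioned above (all names are illustrative; the bodies are stubs standing in for real game code):

```java
class GameLoop {
    private boolean running = true;
    private int frames = 0;

    void run(int maxFrames) {
        while (running) {
            handleInput();  // poll keys instead of a dedicated key thread
            updateAI();     // advance game logic on the same thread
            paint();        // then render; no context switches needed
            if (++frames >= maxFrames) running = false;
        }
    }

    private void handleInput() { /* poll input state */ }
    private void updateAI()    { /* update game state */ }
    private void paint()       { /* draw the frame  */ }

    int framesRendered() { return frames; }
}
```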