Friday, September 30, 2005

CONned by windows

Link: Naming a file

Do not use the following reserved device names for the name of a file: CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9. Also avoid these names followed by an extension, for example, NUL.tx7.

I was writing an auto code generator which was generating many Java files, and one of the files to be generated was <Con.java>. My generator generated every other file, but when it came to <Con.java> it threw a FileNotFoundException. Which was very weird, because I was trying to create a new file, so it not finding the file was in fact a good thing! To make sure it wasn't just a figment of my imagination, I checked and confirmed that there was no such file. I rechecked, and still there was no such file. I cursed and ran the code generator again - and this, my never-failing trump card, betrayed me as well.

Of course, these days, when everything else fails, Google doesn't. And sure enough it brought me to this page, where it was clear that CON was one of the many reserved words, and so I could not create a file named so.

While I am OK with Windows not allowing me to create a file whose name is a reserved word, not allowing files with a reserved word followed by an extension is all too limiting. And worse still, the error messages when I try to create a file con.whatever vary from "access denied" to "file already exists" to "are you kidding me?" (no, not the last one, but it came close).
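If you are generating files on Windows, it may be worth screening names up front rather than decoding the errors afterwards. Below is a small sketch of such a guard; the class and method names (ReservedNames, isReserved) are mine, and the list is taken straight from the note quoted above:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ReservedNames {
    //the DOS device names reserved by Windows, per the note above
    private static final Set<String> RESERVED = new HashSet<String>(Arrays.asList(
        "CON", "PRN", "AUX", "NUL",
        "COM1", "COM2", "COM3", "COM4", "COM5", "COM6", "COM7", "COM8", "COM9",
        "LPT1", "LPT2", "LPT3", "LPT4", "LPT5", "LPT6", "LPT7", "LPT8", "LPT9"));

    //true if the name - with or without an extension - collides with a device name
    public static boolean isReserved(String fileName) {
        int dot = fileName.indexOf('.');
        String base = (dot == -1) ? fileName : fileName.substring(0, dot);
        return RESERVED.contains(base.toUpperCase());
    }
}
```

A generator could call isReserved() before opening the file and rename (or at least fail with a sane message) instead of tripping over a misleading FileNotFoundException.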

Monday, September 26, 2005

Extend the ArcGIS ADF with POJOs - Part III

The first cardinal rule when working with pooled ArcGIS Server objects in a webapp is that you must release the server context after every web server request. This is because you are sharing the server object with many other users and you need to put it back into the pool so that other users can access it. The second cardinal rule is that you cannot reference any server object after you have released the server context. If you do, it would be like executing a JDBC Statement after closing the Connection. You always need a live connection or a context in a client-server environment to work with objects that are remotely hosted.

So the question then is: How do I preserve the current state of a server object after releasing the context? The answer lies in the saveObject() and loadObject() methods of the IServerContext. You can serialize objects to their string representations with saveObject() and you can deserialize them back to their object forms with loadObject(). So calling saveObject() before releasing the context and calling loadObject() on reconnect sets you up well to preserve the current client state while working with pooled objects.
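To get a feel for the save / load round trip, here is an analogy using plain Java serialization rather than the ArcGIS API - saveObject() / loadObject() are similar in spirit, flattening a live object into a context-free string and rebuilding a live object from it later. The class below (SaveLoadAnalogy) is purely illustrative and uses java.util.Base64 from later JDKs for brevity:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Base64;

public class SaveLoadAnalogy {
    //like saveObject(): flatten a live object into a plain string
    static String save(Serializable obj) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(obj);
            out.close();
            return Base64.getEncoder().encodeToString(bytes.toByteArray());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    //like loadObject(): rebuild a live object from the saved string
    static Object load(String saved) {
        try {
            byte[] bytes = Base64.getDecoder().decode(saved);
            return new ObjectInputStream(new ByteArrayInputStream(bytes)).readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The key property in both cases is that the string survives the release of whatever produced it, while the live object does not.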

As always, the ADF makes such chores easy for you. Rather than you having to scratch your head about when to call loads and saves, where to call them, how to keep track of them, et al., the ADF provides a simple interface, WebLifecycle, wherein you can do all of these tasks - and only for those objects needed for that particular task. The ADF calls the relevant methods of the WebLifecycle just prior to releasing the context as well as immediately after a reconnect.

In the CountFeatures class that we have been working on through Parts I and II, we may want to persist the SpatialFilter object. Generally you persist only those objects which carry enough client state to justify the saves and loads. Obviously the SpatialFilter doesn't have much state in it to merit this, but the idea here is to showcase how to do it easily so that you can apply it to more pertinent objects such as graphic elements and symbols.

The WebLifecycle defines three methods - activate(), passivate() and destroy() - which the ADF calls at different stages of the request / session lifecycle. The passivate() method is called after a request has been serviced, giving you the opportunity to serialize server objects to strings. The activate() method is called before servicing a request, where you can deserialize the strings so that they are available as live objects when you perform your business tasks. Finally, destroy() is called when the session is terminated, for you to perform the necessary cleanup.

Below is the code which extends the CountFeatures class to participate in this lifecycle:

public class CountFeatures implements WebContextInitialize, WebContextObserver, WebLifecycle {
    //serialized SpatialFilter - valid after passivate() and before activate()
    String serializedSpatialFilter;
    //the SpatialFilter object - only valid between activate() and passivate()
    SpatialFilter filter;
    //the server context, initialized in Part I
    IServerContext agsctx;

    public void init(WebContext webContext) {
        //create SpatialFilter and set spatial relationship
        filter = new SpatialFilter(agsctx.createServerObject(SpatialFilter.getClsid()));
    }

    public void passivate() { //serialize to strings
        serializedSpatialFilter = agsctx.saveObject(filter);
        filter = null; //no server objects may be referenced past this point
    }

    public void activate() { //deserialize to objects
        filter = new SpatialFilter(agsctx.loadObject(serializedSpatialFilter));
    }

    public void destroy() { //cleanup
        filter = null;
        serializedSpatialFilter = null;
    }

    public String doCount() throws Exception {
        //spatial filter is already created - only need to set geometry to the current extent
        return null;
    }
}

The complete source code can be downloaded from here.

It's important to note that the loads still create new instances of the objects on the server. So there are no performance benefits to saves and loads versus creating new instances. The benefit is that you don't have to worry about persisting the state of the object yourself - the server does that work for you.

That does it for this trilogy (ok so that was blatant abuse of that word - but hey, who's to say... It's my world around here :)

Saturday, September 10, 2005

JDK 5 concurrency API: group / batch thread pool

First up, let me say that the new concurrency API in JDK 5 is indeed a boon for the Java community, especially for developers (including yours truly) who before this indulged in threads and concurrent programming only sparingly. Not because we didn't know how to do it, but because getting it right was quite an ordeal. The concurrency API should surely make concurrent programming, the bastion of only a select few so far, more "mainstream".

So here’s my scenario: I need for a group / batch of tasks to execute concurrently and additionally, I need to wait until all of them have finished executing before moving forward.

Leveraging the new concurrency API, I can use the Executors factory to create a new thread pool (a thread pool being an instance of ExecutorService). To this pool I submit the tasks of my batch, which are executed according to the policies of the pool. Then, if I want to wait for all of them to complete, I need to call shutdown() followed by awaitTermination(). With this my code will indeed block until all tasks have executed, but the problem is that the thread pool no longer accepts any new tasks. So for my next batch of tasks I need to create a new thread pool all over again - which is obviously unneeded and expensive.
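A quick sketch of the problem, using trivial no-op tasks as a stand-in for a real batch (the class name ShutdownDemo is mine): once shutdown() has been called, the pool rejects any further submissions.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    //returns true if the pool still accepts a task submitted after shutdown()
    static boolean acceptsAfterShutdown() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(new Runnable() { public void run() { } }); //first batch
        pool.shutdown();                                        //stop accepting new tasks
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);        //block until the batch drains
            pool.submit(new Runnable() { public void run() { } }); //second batch
            return true;
        } catch (RejectedExecutionException e) {
            return false; //the pool is finished; a fresh one would be needed
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The second submit() always throws RejectedExecutionException, which is exactly the limitation described above.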

All said and done, I need an awaitExecution() method which, like awaitTermination(), blocks until all tasks have completed but, unlike the shutdown() + awaitTermination() combo, does not reject new tasks.

Below is a simple wrapper with the awaitExecution() method included. You can of course use any of the extension patterns - decorator, adapter, etc. - for a more refined solution.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class GroupThreadPool {
    protected ExecutorService pool;
    protected List<Future<?>> futures = new ArrayList<Future<?>>();

    public GroupThreadPool(int poolSize) {
        pool = Executors.newFixedThreadPool(poolSize);
    }

    public void submit(Runnable command) {
        futures.add(pool.submit(command));
    }

    public void awaitExecution() {
        try {
            for (Iterator<Future<?>> iter = futures.iterator(); iter.hasNext(); ) {
                iter.next().get(); //blocking call
            }
        } catch (Exception ignore) {
        } finally {
            futures.clear(); //ready for the next batch
        }
    }
}
The user creates this GroupThreadPool just once, calls submit() to submit various tasks in a batch and then calls awaitExecution() to block until all tasks have executed. He can continue to use the same GroupThreadPool object to execute subsequent batches.

The implementation adds the submitted tasks to a list of Futures. To block until all tasks have completed, it calls get() on all Futures which itself is a blocking operation. So awaitExecution() returns only after all tasks have been executed but before returning it clears the list of Futures to accept the next batch of tasks.

I would love suggestions / feedback on this implementation. Is there a better approach? Also, is this a common use case which merits inclusion of awaitExecution() in ExecutorService itself?
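On the "better approach" question, for what it's worth: ExecutorService does already ship with invokeAll(), which blocks until an entire batch completes while leaving the pool open for subsequent batches - the catch being that it takes Callables rather than Runnables. A small sketch (the class and method names here are mine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicInteger;

public class InvokeAllDemo {
    //runs a batch of no-op counting tasks and blocks until every one is done;
    //the pool stays open for further batches
    static int runBatch(ExecutorService pool, int tasks) {
        final AtomicInteger done = new AtomicInteger();
        List<Callable<Object>> batch = new ArrayList<Callable<Object>>();
        for (int i = 0; i < tasks; i++) {
            batch.add(new Callable<Object>() {
                public Object call() { done.incrementAndGet(); return null; }
            });
        }
        try {
            pool.invokeAll(batch); //blocks until all tasks complete; pool still usable
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return done.get();
    }
}
```

The same pool can run runBatch() over and over, so whether awaitExecution() merits inclusion may come down to the Runnable-vs-Callable distinction.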