Wednesday, December 21, 2005

Exception handling in web frameworks

There are many good articles out there on exception handling practices. Here I'll list a few that I've learned over the course of working on web applications, particularly as they apply to web frameworks.


  1. Chain Exceptions:
    Sure one should never lose sight of the destination, but with exceptions one should never lose sight of the source - the all-important root cause, the crux of the problem. While you can catch exceptions and throw custom ones, never lose the exception that you caught. When creating custom exception classes, always include a constructor that takes the cause as an argument:
    public CustomException(String message, Throwable cause) {
        super(message, cause);
    }

    Or if you are using an exception which does not have such a constructor, you can always use the initCause() method:
    catch (AnException e) {
        IllegalStateException ise = new IllegalStateException("Something went wrong.");
        ise.initCause(e);
        throw ise;
    }


  2. Throw framework specific exceptions:
    The root cause tells you what went wrong but you also want to know where it went wrong. This is especially critical in web frameworks where the framework makes callbacks, does dependency injection and other IoC stuff to call into app specific custom client code. When your client comes to you with a support call or posts the problem on the forums the first thing you want to know is where in your stack did it fail. Debugging and fixing the problem becomes a lot easier once you have narrowed it down to your courtyard. Framework exceptions are best implemented as a hierarchy of custom runtime exceptions. The root of this hierarchy could be a RuntimeException named MyFrameworkException and it could have any number of sub-classes such as MyFrameworkModelException, MyFrameworkControllerException, MyFrameworkViewException, etc. depending on how you have categorized different pieces of your framework.

  3. Log, log, log:
    Where exception messages don’t tell much, debug messages logged from important places will. What comments are to source code, log messages are to runtime. The standard JDK logging API allows you to log messages at multiple levels including INFO, CONFIG, FINE, FINER, et al. Logging messages at various levels will not only provide answers to the hows, whens and wheres of the problem but will also go a long way in helping your customers understand the underpinnings of your framework. And more often than not that is all they need to fix the problem themselves!


We are constantly trying to improve the exception handling and logging in the ArcGIS ADF and you can be assured that we are employing these principles as well.

Friday, September 30, 2005

CONned by windows

Link: Naming a file

Do not use the following reserved device names for the name of a file: CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9. Also avoid these names followed by an extension, for example, NUL.tx7.

I was writing an auto code generator which was generating many Java files, and one of the files to be generated was Con.java. My generator generated every other file, but when it came to Con.java it threw a FileNotFoundException. This was very weird, because I was trying to create a new file, so it not finding the file was in fact a good thing! To convince myself it wasn't just a figment of my imagination I checked and confirmed that there was no such file. I rechecked, and yet there was no such file. I cursed and ran the code generator again - this, my never-failing trump card, betrayed me as well.

Of course, these days where everything else fails, Google doesn't. And sure enough it brought me to this page, and it was clear that CON was one of the many reserved device names, so I could not create a file so named.

While I am OK with Windows not allowing me to create a file whose name is a reserved word, not allowing files with a reserved word followed by an extension is all too limiting. And worse still, the error messages when I try to create a file named con.whatever vary from "access denied" to "file already exists" to "are you kidding me?" (no, not the last one, but it came close).

Monday, September 26, 2005

Extend the ArcGIS ADF with POJOs - Part III


The first cardinal rule when working with pooled ArcGIS Server objects in a webapp is that you must release the server context after every web server request. This is because you are sharing the server object with many other users and you need to put it back into the pool so that other users can access it. The second cardinal rule is that you cannot reference any server object after you have released the server context. If you do, it would be like executing a JDBC Statement after closing the Connection. You always need a live connection or a context in a client-server environment to work with objects that are remotely hosted.



So the question then is: How do I persist the current state of a server object after releasing the context? The answer lies in the saveObject() and loadObject() methods of IServerContext. You can serialize objects to their string representations with saveObject() and deserialize them back to object form with loadObject(). So calling saveObject() before releasing the context and loadObject() on reconnect sets you up well to preserve the current client state while working with pooled objects.


As always, the ADF assists in making such experiences easy for you. Rather than having to scratch your head about when to call the loads and saves, where to call them, how to keep track of them, et al, the ADF provides a simple interface, WebLifecycle, wherein you can do all of these tasks - and only for those objects needed for the particular task. The ADF calls the relevant methods of the WebLifecycle just prior to releasing the context as well as immediately after a reconnect.


In the CountFeatures class that we have been working on through Parts I and II we may want to persist the SpatialFilter object. Generally you persist only those objects which carry enough client state to justify the saves and loads. Obviously the SpatialFilter doesn't have much state in it to merit this, but the idea here is to showcase how to do it easily so that you can apply it to more pertinent objects such as graphic elements and symbols.


The WebLifecycle defines 3 methods - activate(), passivate() and destroy() - which are called at different stages of a request / session lifecycle by the ADF. The passivate() method is called after a request has been serviced, giving you the opportunity to serialize server objects to strings. The activate() method is called before servicing the request, where you can deserialize the strings so that they are available as live objects when you perform the business tasks. And finally destroy() is called when the session is being terminated, for you to perform the necessary cleanup.


Below is the code which extends the CountFeatures class to participate in this lifecycle:



public class CountFeatures implements WebContextInitialize, WebContextObserver, WebLifecycle {
    ...
    //serialized SpatialFilter - valid after passivate() and before activate()
    String serializedSpatialFilter;
    //the spatial filter object - only valid between activate() and passivate()
    SpatialFilter filter;

    public void init(WebContext webContext) {
        ...
        //create SpatialFilter and set spatial relationship
        filter = new SpatialFilter(agsctx.createServerObject(SpatialFilter.getClsid()));
        filter.setSpatialRel(esriSpatialRelEnum.esriSpatialRelContains);
    }

    public void passivate() { //serialize to strings
        serializedSpatialFilter = agsctx.saveObject(filter);
    }

    public void activate() { //deserialize to objects
        filter = new SpatialFilter(agsctx.loadObject(serializedSpatialFilter));
    }

    public void destroy() { //cleanup
        filter = null;
        serializedSpatialFilter = null;
    }

    public String doCount() throws Exception {
        ...
        //spatial filter is already created - only need to set geometry to the current extent
        filter.setGeometryByRef(agsmap.getFocusMapExtent());
        ...
        return null;
    }
}


The complete source code can be downloaded from here.


It's important to note that the loads still create new instances of the objects on the server. So there are no performance benefits to saves and loads versus new instance creations. The benefit is that you don't have to worry about persisting the state of the object yourself - the server will do that work for you.


That does it for this trilogy (ok so that was blatant abuse of that word - but hey, who's to say... It's my world around here :)

Saturday, September 10, 2005

JDK 5 concurrency API: group / batch thread pool

First up, let me say that the new concurrency API in JDK 5 is indeed a boon for the Java community, especially for developers (including yours truly) who before this indulged in threads and concurrent programming only sparingly. Not because we didn't know how to do it but because getting it right was quite an ordeal. The concurrency API should surely make concurrent programming, so far the bastion of only a select few, more "mainstream".

So here’s my scenario: I need for a group / batch of tasks to execute concurrently and additionally, I need to wait until all of them have finished executing before moving forward.

Leveraging the new concurrency API, I can use the Executors factory to create a new thread pool (a thread pool being an instance of ExecutorService). To this pool I can submit the tasks of my batch, which will be executed according to the policies of the pool. If I then want to wait for all of them to complete, I need to call shutdown() followed by awaitTermination(). With this my code will indeed block until all tasks have been executed, but the problem is that the thread pool no longer accepts any new tasks. So for my next batch of tasks I need to create a new thread pool all over again - which is obviously unneeded and expensive.
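
A small self-contained sketch of that problem - once shutdown() has been called, the pool rejects every further submission:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // first batch of (trivial) tasks
        for (int i = 0; i < 4; i++) {
            pool.execute(new Runnable() {
                public void run() { /* some task */ }
            });
        }

        // wait for the batch to finish...
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // ...but now the pool is dead: the next batch is rejected
        try {
            pool.execute(new Runnable() { public void run() { } });
        } catch (RejectedExecutionException e) {
            System.out.println("new task rejected - need a brand new pool");
        }
    }
}
```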

All said and done, I need an awaitExecution() method which like awaitTermination() blocks until all tasks have completed but unlike the shutdown() + awaitTermination() combo does not reject new tasks.

Below is a simple wrapper with the awaitExecution() method included. You can of course use any of the extension patterns - decorator, adapter, etc. - for a more refined solution.


public class GroupThreadPool {
    protected ExecutorService pool;
    protected ArrayList<Future<?>> futures = new ArrayList<Future<?>>();

    public GroupThreadPool(int poolSize) {
        pool = Executors.newFixedThreadPool(poolSize);
    }

    public void submit(Runnable command) {
        futures.add(pool.submit(command));
    }

    public void awaitExecution() {
        try {
            for (Iterator<Future<?>> iter = futures.iterator(); iter.hasNext(); ) {
                iter.next().get(); //blocking call
            }
        } catch (Exception ignore) {
        } finally {
            futures.clear();
        }
    }
}

The user creates this GroupThreadPool just once, calls submit() to submit various tasks in a batch and then calls awaitExecution() to block until all tasks have executed. He can continue to use the same GroupThreadPool object to execute subsequent batches.

The implementation adds the submitted tasks to a list of Futures. To block until all tasks have completed, it calls get() on all Futures which itself is a blocking operation. So awaitExecution() returns only after all tasks have been executed but before returning it clears the list of Futures to accept the next batch of tasks.

I would love suggestions / feedback on this implementation. Is there a better approach? Also, is this a common use case which merits inclusion of awaitExecution() in ExecutorService itself?
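
To partially answer my own question: one JDK 5 alternative worth a look is ExecutorService.invokeAll(), which blocks until every task in the submitted collection has completed but leaves the pool open for the next batch. The trade-off is that the tasks must be Callables rather than Runnables. A minimal sketch (the counter task is just for demonstration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class InvokeAllDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        final AtomicInteger done = new AtomicInteger();

        // build a batch of Callables
        List<Callable<Void>> batch = new ArrayList<Callable<Void>>();
        for (int i = 0; i < 10; i++) {
            batch.add(new Callable<Void>() {
                public Void call() {
                    done.incrementAndGet();
                    return null;
                }
            });
        }

        // invokeAll() blocks until the whole batch has completed...
        pool.invokeAll(batch);
        System.out.println("first batch done: " + done.get());  // 10

        // ...and the same pool is still open for the next batch
        pool.invokeAll(batch);
        System.out.println("second batch done: " + done.get()); // 20

        pool.shutdown();
    }
}
```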

Monday, August 22, 2005

Extend the ArcGIS ADF with POJOs - Part II

In Part I we discussed how you could implement GIS functionalities in POJOs and plug them into the ADF. In this part we'll extend the POJO a little further.

In Part I, the CountFeatures object calculated and updated the feature count on a client interaction such as a button click. Suppose that we now need for this object to recalculate the count automatically whenever the current extent of the map changes or the map refreshes due to some other action.

The ADF provides a very simple way to do this. Objects can register themselves as observers of the WebContext, and whenever the context is refreshed (by virtue of the user calling the refresh() method on the context), all observers will be notified of this event and each observer can act upon it individually. This way we have loosely coupled objects reacting together to the context refresh.

With this background let's now extend our CountFeatures class to implement this behavior.


public class CountFeatures implements WebContextInitialize, WebContextObserver {

    public void init(WebContext context) {
        ...
        context.addObserver(this);
    }

    public void update(WebContext context, Object arg) {
        doCount(); //perform the business action on update
    }
    ...
}

First up, all observers of the WebContext need to implement the WebContextObserver interface. Next, they register themselves as observers of the context by calling the addObserver() method on the context. Finally, on every context refresh, the update() method of the WebContextObserver interface is called by the ADF, and the object reacts by performing the business action. In this case we simply call the doCount() method, which recalculates the feature count of the updated map. This ensures that whenever the context refreshes (for example when the user zooms or pans), this object will recalculate and display the new count to the user.

As simple as that. Apart from a few modifications to the Java code, nothing else needs to change from the Part I source code. The JSP as well as the configuration files remain unchanged. You can download all the source code (including the unchanged JSP) for this part from here.

In Part III we'll extend CountFeatures further by implementing the WebLifecycle interface.

Tuesday, August 9, 2005

Extend the ArcGIS ADF with POJOs - Part I

This is the first of a 3 part series where we'll discuss how to add custom GIS functionality to the ADF as POJOs (Plain Old Java Objects). To accomplish this we'll be leveraging the IoC inherent in the ADF discussed earlier. We talked about 3 very important interfaces in the IoC discussion - WebContextInitialize, WebContextObserver and WebLifecycle. Putting these 3 interfaces into practice will be central to the 3 parts respectively. In this part we'll make use of the WebContextInitialize interface.

We'll keep the functionality to be implemented quite simple: Count the number of features of a given layer in the map's current extent.

To implement this scenario our POJO will need a few basic properties and methods - a read-only count property, a read/write layerId property representing the layer whose features are to be counted and a business method doCount() which implements the business task at hand. With this said, the skeleton of the class (we'll call it CountFeatures) will be as such:


public class CountFeatures {

    //properties
    int count;
    int layerId;
    public int getCount() { return count; }
    public int getLayerId() { return layerId; }
    public void setLayerId(int layerId) { this.layerId = layerId; }

    //business method
    public String doCount() {
        ...
        count = ...;
        return null;
    }
}

You might have noticed that the doCount() method returns a String. This is because the ADF is JSF based, and when the user clicks on, say, a command button on a web page, it results in a call to doCount(). Based on the return value of this method the JSF framework decides which page to navigate to. Returning null ensures that the webapp stays on the same page.
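
For the curious, the mapping from such return values to pages lives in the webapp's faces-config.xml as navigation rules. A hypothetical rule (the outcome string and page name are made up for illustration) might look like:

```xml
<navigation-rule>
  <navigation-case>
    <from-outcome>success</from-outcome>
    <to-view-id>/results.jsp</to-view-id>
  </navigation-case>
</navigation-rule>
```

Had doCount() returned "success", JSF would navigate to /results.jsp; since it returns null, no rule matches and the current page is redisplayed.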

OK, so now we have the skeleton in place, but we also need access to the ArcGIS Server and the underlying ArcObjects to perform the GIS task at hand. This is where WebContextInitialize comes into the picture:

public class CountFeatures implements WebContextInitialize {

    //the context associated with this object
    AGSWebContext agsctx;

    public void init(WebContext context) {
        agsctx = (AGSWebContext)context;
    }
    ...
}

The ADF will call the init(WebContext) method of objects implementing WebContextInitialize immediately after the object is instantiated. This gives the object access to the WebContext. The AGSWebContext (which is the actual implementation of the WebContext that we work with) maintains references to the ArcGIS Server objects and ArcObjects (such as IMapServer, IMapDescription, etc.) as well as to other ADF objects (such as AGSWebMap). This implies that by virtue of gaining access to the AGSWebContext our custom object now has a hook into the whole of ArcObjects as well as the ADF - basically everything that you need to accomplish your GIS task at hand.

With access to everything that our class needs, the business logic can now be implemented in the doCount() method to perform the count operation and set the result to the count variable.

That's it - our Java code ends here. All that is left to do now is to register this object as a managed attribute of the WebContext so that the ADF can automatically instantiate the object on demand as well as call the init(WebContext) method immediately after instantiation. This is accomplished by adding the following lines of XML to managed_context_attributes.xml which you can find in the /WEB-INF/classes folder of your ADF webapp:

<managed-context-attribute>
  <name>countFeatures</name>
  <attribute-class>custom.CountFeatures</attribute-class>
  <description>counts features of a given layer in the current extent...</description>
</managed-context-attribute>

With this done, you can now access our custom object by name (countFeatures in this case).

You can download the full source here. In addition to the Java code, the ZIP file also contains a sample JSP. The JSP has a command button to trigger the business method, a dropdown to choose the layer and a text output to display the count.

In conclusion I'd like to mention that while admittedly the functionality that we have implemented here is trivial, you can essentially follow the same programming model to implement your own functionality as well: POJOs which implement WebContextInitialize, registered as managed attributes of the WebContext.

In Part II we'll extend this same object to be an observer of the context and in Part III we'll make this object participate in the ADF lifecycle.

Sunday, August 7, 2005

Inversion of Control in the ArcGIS Java ADF

Martin Fowler in a recent blog gave a good short explanation of the inversion of control pattern... In ESRI's ArcGIS Java ADF we employ this approach in a few places.


  1. The WebContextInitialize interface declares an init(WebContext) method. The ADF will call this method on objects which implement this interface and register themselves as attributes of the WebContext. This method will be called immediately after they are registered with the WebContext. Users who want access to the associated WebContext object or want to perform initialization tasks should implement this interface.

  2. The WebLifecycle interface declares methods which will be called by the ADF at various phases of the webapp's lifecycle. Users can implement activation, passivation and destroy logic in these methods. This interface is most relevant when using pooled objects since users may want to rehydrate and dehydrate the states of the server objects when the ADF reconnects and releases its connection to the ArcGIS server on every request.

  3. The WebContextObserver interface declares an update(WebContext webContext, Object args) method. Objects implementing this interface can register themselves as observers of the WebContext by calling the addObserver(WebContextObserver) method. After users perform operations which change the state of the objects that they work with (for example zoom to a different extent, add a graphic element, etc.), they call the refresh() method on the WebContext. When this happens, the ADF iterates through all the registered observers of the context and calls their update() methods. This ensures loose coupling among the various objects but at the same time gives these loosely coupled objects an opportunity to stay in sync with the changed state of the app. This is a classic implementation of the observer pattern, with the WebContext acting as the Observable object.


With the advent of JDK 5 annotations, it might be convenient for the users if #1 could be achieved by simply annotating a field or a setter method with an @Resource like annotation. The ADF on encountering this annotation on a WebContext field or setter method could inject the same into the interested object. Further, users can do the initialization tasks in any arbitrary method annotated with the @InjectionComplete annotation and the ADF will call this method immediately after injecting the WebContext. (Both these annotations are proposed by JSR 250 - Common Annotations).

#2 could also be achieved through annotations. Much like how EJB3 is proposing @PostActivate, @PrePassivate, et al, users could annotate the lifecycle methods with annotations such as @OnActivate, @OnPassivate and @OnDestroy. This would relieve them of having to implement interfaces for lifecycle callbacks, and further, they could choose to participate in only those phases of the ADF lifecycle which make business sense for their objects.

Comments / feedbacks welcome.

Tuesday, August 2, 2005

Adding layers dynamically in the ArcGIS Java ADF

There have been many questions about adding layers dynamically in the ADF... And of course the requirement is that the added layer be reflected not only on the map but also in the TOC, layer drop downs, etc...

When working with non-pooled objects, there's a straightforward way of doing this. Look at the source code below:


AGSWebContext agsCtx = ...; //get hold of the AGSWebContext
AGSWebMap agsMap = (AGSWebMap)agsCtx.getWebMap();

//Step 1
agsCtx.applyDescriptions();

//Step 2
MapServer mapso = new MapServer(agsCtx.getServer());
IMap map = mapso.getMap(agsMap.getFocusMapName());
ILayer layer = ...; //create the layer
map.addLayer(layer);

//Step 3
agsCtx.reloadDescriptions();

Let's discuss the 3 steps now:

  • Step 1: Before making stateful changes to the object graph (like adding a new layer in this case) you want to apply the current state of the MapDescriptions to the object graph. Calling the applyDescriptions() method on the AGSWebContext does exactly that.

  • Step 2: Now that you have the object graph in the current state, you can make the modifications there. In this case we add a new layer to the map.

  • Step 3: Once you have made changes to the object graph you want to reload the MapDescriptions to reflect the changed state and additionally, you also want the web controls to display this new state (in this case display the layer on the map control, on the TOC control as well as on any other components working with layers). A single call to the reloadDescriptions() method on the AGSWebContext will do all of this for you.


And that's about it! This sequence of steps holds true for any stateful changes that you want to make to non-pooled objects. The 3-word mantra is APPLY-CHANGE-RELOAD.

If you wanted to work with dynamic layers (or make any stateful changes) in the pooled context, there's indeed more work to do because you are now sharing the server object with others and you want to return the object back to the pool in the same state that you had received it. You need to get access to the server object, apply the current MapDescriptions to the object graph, make the changes to the object graph, reflect the changes in the MapDescriptions and the web controls, undo the changes to the graph and then return the object back to the pool. You can check out the dynamic layers sample on EDN to see this use case in action.

Friday, July 22, 2005

ArcGIS Server 9.2 (Java): Coming soon to a GIS near you

The ESRI User Conference is around the corner and it's time to talk about what the future holds for our users. Before I delve into the details let me say this: ArcGIS Java users, we have heard you! Here's what the future holds for you:


  • A pure Java experience: No more query interfacing... no proxies to deal with... The Java programmer will get an API s/he lives by - pure Java... diagnostic exception handling... established design patterns... the whole shebang... you name it we got it!

  • A rich SDK experience with Eclipse plug-ins to help you get easily started with using ArcGIS... one click deploy and run of samples... ready to use application templates... code snippets... an integrated and intuitive help system...

  • The web tier is being reworked to leverage state of the art technologies (if you are thinking AJAX you are right, but there's MORE)... how about the ability to work with multiple data sources - ArcGIS, ArcIMS, ArcWeb Services... how about the ability to add your custom source... dynamic layer reordering and grouping... network analysis... the list goes on...

  • Geoprocessing: You'll get a pure Java experience with GP too - tools, Java data types, geoprocessor, exceptions... You can also build your own custom tool / model, use a tool generator to generate a pure Java class for the same and just like that you are ready to incorporate new GP functionality into your Java GIS.

  • OTB EJBs to integrate GIS with your J2EE infrastructure.


All this and more will be discussed at the UC in a couple of sessions:

  • ArcGIS Road Ahead: What's Coming for GIS Developers at 9.2

    • Offering I - Wed 8:30 AM Room: 6A (SDCC)

    • Offering II - Thu 1:30 PM Room: 6A (SDCC)



  • ArcGIS Road Ahead: What's Coming in ArcIMS and ArcGIS Server at 9.2

    • Offering I - Tue 1:30 PM Room: 6A (SDCC)

    • Offering II - Thu 8:30 AM Room: 6A (SDCC)




I'm sure you are rushing to add them to your agenda! Also, here's the UC Q and A for more info...

See ya!

Sunday, July 3, 2005

The JSF evolution

It's been more than 2 years since I started looking into JSF. JSF was in its early access avatar then and the JSF community felt like a startup trying to find its way into the unknown. And now, 2 years on, it's very heartening to see such widespread industry support for JSF, with any number of IDE vendors supporting it, every other technology on the block showcasing how it too can be integrated with JSF, JSF being the prime topic at many a J1 discussion, et al...

We at ESRI have been showcasing our GIS components for the past 2 JavaOnes. Doreen presented them in 2004 and Steve did it this year. They were well received on both occasions. But what has been even more interesting to me is the changing developer perspectives between J1 2004 and 2005 with regard to JSF. Last year they were just intrigued by this new technology and wanted to see what it was all about. This time around folks had a lot better understanding of it (they had either read extensively about it or actually used it themselves) and they wanted to see how they could actually use it in their organizations.

True, JSF has its faults, but IMO industry wide support should definitely tip the scales in its favor when it comes to developers choosing which framework to use in their new projects. But even without industry support I firmly believe that JSF has more merits than chinks. It does have a steep learning curve, but once you are over it the effort is worth its weight in gold - it makes you think "components" and not request parameters, you write business logic in POJOs without worrying about how the controller will call into them, you perform actions in simple methods in backing beans and not in an obscure action form or servlet - the list just goes on...

It's the same learning curve that one has to wade through when going from procedural programming to OOP, from just getting the job done to perusing the GoF design patterns and employing them in one's projects... It takes time, but the end result is that you are a better programmer for it.

Saturday, July 2, 2005

Annotation use cases

My first blog here but will cut straight to the point.

The most talked about feature at J1 this year was annotations. It was as if every new API / framework had to have support for annotations or must have something pertaining to it on their radar to gain acceptance or even be considered a contender.

I have not yet been able to make up my mind whether this proliferation of @YeahIHaveAnAnnotationToo is a good thing or not. For the time being I am trying to come up with use cases where annotations make sense. Here are some that I have assimilated from various blogs, J1 sessions and my own brain dumps:

1. Dependency injection.


  • The @Resource and the @EJB annotations introduced by EJB3 are obvious examples of dependency injections. Any custom framework will have some notion of a context or an environment or a manager and injecting such resources into users' classes would make a lot of sense.


2. Aspects (boiler plate stuff handled by the container / framework) and interceptors

  • The @TransactionAttribute annotation associates a transaction aspect with a method. The @Query and @Update annotations introduced by JDBC 4.0 perform tasks which go beyond just boiler plate - they actually populate and manipulate objects which directly affect business logic. I've put interceptors along with aspects because I consider interceptors an implementation technique for aspects (correct me if I am wrong). I think one needs to be really careful when designing annotations in this category because taking this route could be a slippery slope - you design a bunch of annotations which perform certain core tasks, your co-developer has his/her own set of tasks which s/he religiously believes are "core", and your manager feels that it's critical to design a third set to appease that billion $ client. IMO, for this category design only annotations which strictly do boiler plate stuff and don't alter the business state for the client.


3. Callback methods

  • The @PostActivate and @PrePassivate annotations fall in this category. I am not sure I am a fan of annotations in this category. Basically, all you are trying to do here is eliminate the need to implement certain interfaces, which means all you gain is not having to write empty implementations for methods you don't use. But it becomes very difficult for somebody to understand what contracts your object fulfills or which methods are actually designated as callbacks. One might argue that objects are no longer POJOs if they have to adhere to framework specific interfaces - if this argument is important for you, then implement callback listeners (an alternative that EJB3 also provides) to achieve this objective.


4. Container contracts

  • Tagging an EJB as @Stateless and @Stateful makes it known to the container how it should treat that bean. Moreover, it also sets a programming pattern for the user how to access and use these objects. Most marker annotations could be included in this category. So if your framework needs to handle certain objects in specific ways, you could think about such annotations.


5. Programmatic access to metadata for classes, fields and methods.

  • This is an oft forgotten use case but one which I think makes sense for UI objects. While it's a given that it's important to publish information regarding a method or a field's behavior through detailed comments and Javadocs; for UI objects this information may also need to be presented to the user on desktop panels and HTML forms. So the next time you author a UserBean think about associating a runtime @Description annotation with the username and password fields so as to give consistent descriptions for these fields across different views of your app.

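
A sketch of use case 5 - the @Description annotation here is hypothetical, not part of any standard API, and the field descriptions are made up:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;

public class UserBean {
    // RUNTIME retention so the UI layer can read the metadata reflectively
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Description {
        String value();
    }

    @Description("Your login name, as registered with the system")
    private String username;

    @Description("Your secret password - never shown on screen")
    private String password;

    public static void main(String[] args) throws Exception {
        // a desktop panel or HTML form generator could label fields like this:
        for (Field f : UserBean.class.getDeclaredFields()) {
            Description d = f.getAnnotation(Description.class);
            if (d != null) {
                System.out.println(f.getName() + ": " + d.value());
            }
        }
    }
}
```

Every view of the app - Swing panel, JSP form, tooltip - can now pull the same description from one place instead of duplicating it.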

(I'll try to add more as I get my head around it more (or if you have any more points for me).)

While this push toward annotated POJOs for everything might be a good thing, what I fear is that Java classes of the future will have less Java code and more annotations. It may become increasingly difficult for the user to deterministically gauge what behavior any given method will exhibit, because what the method does could change dramatically depending on which annotations apply when the method is called.

Not trying to attach a pessimistic annotation to annotations (what can I say, I love meta-anything!) but just waving a flag of caution before we dive headfirst into the annotated unknown...