tag:blogger.com,1999:blog-76690704499857944342024-03-18T23:44:37.337-07:00abstract finalYou may be right but I am not wrong.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.comBlogger28125tag:blogger.com,1999:blog-7669070449985794434.post-50251155856708820892008-02-17T16:26:00.000-08:002008-02-17T16:29:42.310-08:00Untangling REST<a href="http://roy.gbiv.com/untangled/2008/why-untangled">Link</a>: <br /><blockquote> Lately, however, it seems that I spend more time answering people’s questions about REST than I should [...]. I need to start organizing my own correspondence.</blockquote><br />When <a href="http://roy.gbiv.com/untangled/about">the Roy</a> himself talks about REST, you listen.<br /><br />Subscribed.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com3tag:blogger.com,1999:blog-7669070449985794434.post-83803346185139161002007-06-18T11:55:00.000-07:002007-06-18T11:59:38.878-07:00ArcGIS Server REST and JavaScript APIs announcedREST and JavaScript APIs for the ArcGIS Server were announced at the ESRI UC Plenary Session today.<br /><br />At 9.3 you can REST-enable your ArcGIS Server services. You can then consume them from the ArcGIS JavaScript library as well as from other clients such as Google Maps, Virtual Earth and the like.<br /><br />While at the UC, you can learn more about these APIs at the Server Road Ahead sessions, the EDN sessions, Advanced ArcGIS Online Sessions and also at the Java SIG.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com17tag:blogger.com,1999:blog-7669070449985794434.post-58045457876384361652007-06-05T18:23:00.000-07:002007-06-05T18:24:54.480-07:00HTTP Content OptimizationYou have your business engine all set up. Your processing algorithms have been optimized to the hilt. Your data model is as scalable as any. And now you are publishing your data to the WWW. 
Well, the good news is you can still do more - all with plain old HTTP.<br /><br />Herein I list 3 simple ways you can leverage HTTP to help you better serve your content:<br /><ol><li><code>Cache-Control</code> headers</li><li><code>ETag</code> + <code>If-None-Match</code></li><li><code>gzip</code></li></ol><span style="font-weight: bold;">1. <code>Cache-Control</code> headers</span><br /><br /><code>Cache-Control</code> is by far the most widely used of all HTTP headers, and for good reason. You generate your response and you set a <code>Cache-Control</code> response header with a validity period. The clients, the intermediaries and the web infrastructure at large all work overtime for you, caching your content until the time that you have tagged it valid.<br /><br />This of course works best for static resources or for such dynamic resources whose validity you can reasonably predict beforehand.<br /><br /><span style="font-weight: bold;">2. <code>ETag</code> + <code>If-None-Match</code></span><br /><br />This is one of the most powerful but, unfortunately, least frequently used techniques. So your content is such that you can't reasonably predict its validity period. Which means #1 doesn't work for you. Your next best buddy is <code>ETag</code>s.<br /><br />This is how it works: You generate your content and you set the HTTP header <code>ETag</code> (entity tag). The <code>ETag</code> represents the state of your resource. Even if one bit of the resource content changes, so does its <code>ETag</code>. You can think of <code>ETag</code> as a simple hash of your content. Ok so you have set the <code>ETag</code> and sent the response. Now the next time the client tries to access the same URL, it will send an <code>If-None-Match</code> request header and it will be set to the same value as that of your <code>ETag</code>. 
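A minimal sketch of that handshake (class and method names are mine, and the simple hash stands in for whatever fingerprint your server actually uses):

```java
import java.util.Arrays;

// Sketch of the ETag / If-None-Match handshake. The tag here is a
// simple hash of the content; any value that changes whenever the
// content changes will do.
public class ETagCheck {
    static String etagFor(byte[] content) {
        return "\"" + Integer.toHexString(Arrays.hashCode(content)) + "\"";
    }

    // Status the server should send back: 304 (Not Modified) if the
    // client's If-None-Match matches the current ETag, 200 otherwise.
    static int statusFor(String ifNoneMatch, byte[] content) {
        return etagFor(content).equals(ifNoneMatch) ? 304 : 200;
    }
}
```

On a 304 the response body is empty, which is exactly where the bandwidth savings come from.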
Now you can either regenerate the content or retrieve it from a server-side cache. If the <code>ETag</code> of your content matches the <code>If-None-Match</code> value, the content has not changed. The client has indicated to you thru the <code>If-None-Match</code> that it already has this content. So what do you do - send <em>nothing</em>! Yes - simply set the response status to HTTP 304 (Not Modified) and the size of the content that you send this time is exactly 0. At a minimum, you gain in saved bandwidth (read: performance) but if you have cached the content on your server, you also gain from saved computation.<br /><br /><span style="font-weight: bold;">3. <code>gzip</code></span><br /><br />With #1 and #2 you benefit by not having to resend your content in certain situations. But even when you can't get away from having to send content, you can still gain in bandwidth by simply compressing it. But of course you want to compress content only for clients that you know can decompress it, and even then you need to tell the client that the content you sent is compressed and it needs to decompress it.<br /><br />No hassle - HTTP makes it fairly straightforward. If the client understands gzip, it sends an <code>Accept-Encoding</code> request header with the string <code>gzip</code> in it. If you (the server) read this header and find the string <code>gzip</code>, you gzip your content and set a response header <code>Content-Encoding</code> to <code>gzip</code>. 
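That negotiation is small enough to sketch (names are mine; in a servlet you would read <code>Accept-Encoding</code> off the request and set <code>Content-Encoding</code> on the response):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPOutputStream;

// Sketch of server-side gzip negotiation: compress the body only when
// the client's Accept-Encoding header mentions gzip.
public class GzipNegotiation {
    static boolean clientAcceptsGzip(String acceptEncoding) {
        return acceptEncoding != null && acceptEncoding.contains("gzip");
    }

    static byte[] gzip(byte[] body) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(buf)) {
            out.write(body);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return buf.toByteArray();
    }

    // Returns the body to send; when it differs from the original, the
    // caller must also set the Content-Encoding: gzip response header.
    static byte[] bodyFor(String acceptEncoding, byte[] body) {
        return clientAcceptsGzip(acceptEncoding) ? gzip(body) : body;
    }
}
```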
This tells the client that the content is gzipped and it must decompress it before providing it to the user.<br /><br />It's normal for gzip to compress text content by upwards of 70%, and given how easy it is, you should be compressing your content right about now.<br /><br />Note that #2 + #3 put together have <a href="http://abstractfinal.blogspot.com/2007/05/ie-6-gzip-etag-if-none-match.html">problems in IE 6</a> and you might have to take that into account.<br /><br />But all in all, optimizing the delivery of your content with HTTP is simple yet powerful, and your web application can only benefit from it.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com5tag:blogger.com,1999:blog-7669070449985794434.post-51848616104210932802007-05-25T11:54:00.000-07:002007-05-25T11:55:55.467-07:00IE 6: gzip + ETag != If-None-Match<code>ETag</code> + <code>If-None-Match</code> give you the benefit of not having to send unmodified content repeatedly (HTTP 304). IE6 handles this well.<br /><br /><code>Content-Encoding</code> with <code>gzip</code> gives you the benefit of compressing the content that you send. IE6 handles this well too.<br /><br />So <code>gzip</code> + <code>ETag</code> / <code>If-None-Match</code> should give you the combined benefit of sending compressed content when you must and not sending content at all if it's not modified. Well, as you might have already guessed, IE6 does not handle this well. If your content is gzipped and you send an <code>ETag</code> header as well, IE6 does <em>not</em> send an <code>If-None-Match</code> on subsequent requests. Which of course means that you can't leverage HTTP 304.<br /><br />So if you are serving IE6 clients, beware that IE6 supports either compression or <code>ETag</code>s but not both.<br /><br />Thankfully, this has been fixed in IE7. 
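In the meantime, one defensive workaround is to pick gzip over ETags for IE6 so conditional requests degrade gracefully instead of silently breaking. A sketch, with hypothetical names and a deliberately naive user-agent check:

```java
// Hypothetical sketch of the defensive choice: for IE6 we pick gzip
// over ETags, since sending both means IE6 never issues If-None-Match.
public class Ie6EtagPolicy {
    static boolean isIe6(String userAgent) {
        // Naive substring check; real detection should be more careful.
        return userAgent != null && userAgent.contains("MSIE 6");
    }

    static boolean willGzip(String acceptEncoding) {
        return acceptEncoding != null && acceptEncoding.contains("gzip");
    }

    // Should the server attach an ETag header to this response?
    static boolean shouldSendETag(String userAgent, String acceptEncoding) {
        // Drop the ETag only for gzipped responses going to IE6.
        return !(willGzip(acceptEncoding) && isIe6(userAgent));
    }
}
```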
Firefox of course just works.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com3tag:blogger.com,1999:blog-7669070449985794434.post-28946722941667169372007-05-18T09:39:00.000-07:002007-05-18T10:13:16.831-07:00More Resources == ScalableLink: <a href="http://www.intertwingly.net/blog/2006/06/05/Elevator-Pitch">Sam Ruby on Google Maps</a><br /><blockquote>Of course, the rest of the iceberg was that Google had simply tiled the Earth. In so doing, they converted a single web service (call me with a bunch of information, and I will provide you with a custom result) into a large number of individually addressable, cacheable, and scalable web resources.</blockquote>That's it. The more the resources, the more the opportunity to use Cache-Control headers, to use ETags, and to distribute and load-balance the system.<br /><br />In the same article, Sam also talks about how the web is not a service but a space. And in today's world, adding more "space" will scale your system far more than implementing a state-of-the-art service with the most optimal algorithm. Processor speeds have flattened. Today it's about dual-cores, quad-cores, (your-budget)-cores. The more cores your program can use to get the job done, the more scalable your system will be.<br /><br />As <a href="http://www.briangoetz.com/blog/">Brian Goetz</a> puts it: "Tasks must be amenable to parallelization". Parallelization comes for free with every resource you add. So add more resources and see your system scale.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com0tag:blogger.com,1999:blog-7669070449985794434.post-43354304654860157672007-05-13T18:52:00.000-07:002007-05-13T18:54:38.848-07:00JavaOne Days 3-4: Mashups and garbage collectionIf you are surprised that I am talking about mashups and garbage collection in the same post, well, so was I. 
But if one can talk about <a href="https://phobos.dev.java.net/">implementing servers in JavaScript</a>, I most definitely can talk about mashups and garbage collection in the same breath.<br /><br />Day 1 was <a href="http://abstractfinal.blogspot.com/2007/05/javaone-day-1-scripting-everywhere.html">primarily scripting</a>, day 2 <a href="http://abstractfinal.blogspot.com/2007/05/javaone-day-2-hardcode-java-and-java-ee.html">hardcore java</a>. Days 3 and 4 saw sessions on the entire gamut of Java technologies - from garbage collection to mashups. I discuss some of them here.<br /><br /><span style="font-weight: bold;">Blueprints for Mashups</span><br /><br />This was arguably the most informative session for me at this year's JavaOne. Kudos to <a href="http://weblogs.java.net/blog/gmurray71/">Greg</a>, <a href="http://blogs.sun.com/basler/">Mark</a> and <a href="http://weblogs.java.net/blog/sean_brydon/">Sean</a> for putting together a to-the-point, practical and readily-usable session. Personally, this session validated the REST and JSON concepts I had gathered over the past few months. I'd recommend this session to anybody interested in building mashups, REST services, AJAX and a whole lot more. (No, they haven't paid me to say this.)<br /><br />Anybody building REST services should design their system with JavaScript in mind. In today's mashup world the browser (and hence JavaScript) is your first-class client. XML and JSON are the typical content types returned by REST services. To get over the browser's cross-domain restrictions, there are 2 possible solutions:<br /><ul><li><span style="font-weight: bold;">Server-side proxy</span>: Clients always send requests to the server that is hosting the page. 
The server in turn acts as a proxy, sends your request to the (remote) mashup service and returns the data it gets from the mashup service to you.</li><li><span style="font-weight: bold;">Dynamic script tags</span>: Browsers allow script tags to communicate with cross domain servers. This opens up the opportunity for you to issue requests to any mashup service by generating dynamic script tags.</li></ul>Consider the Atom format if you are serving XML. This opens up your service to the large number of feed readers and other Atom clients out there.<br /><br />JSON has of course gained immense popularity of late. Although JSON is technically a serialized JavaScript object, the JSON format is highly portable and language independent. In addition to returning JSON, you can also support wrapping the JSON object in a JavaScript callback method (they called it jsonp - JSON with padding). This enables clients to specify a callback method which can readily work with the JSON that the server returns.<br /><br />Various options are available for securing your REST services:<br /><ul><li>User tokens</li><li>Session based hash</li><li>URL based API key</li><li>Authentication headers</li></ul>JavaScript best practices:<br /><ul><li>Use namespaces</li><li>Use CSS for applying styles</li><li>Don't add to the prototype of common JavaScript objects</li><li>Setting the <code>innerHTML</code> property is easier / better than DOM manipulation</li></ul>Components of a ("good") Mashup API / library:<br /><ul><li>A server-side service</li><li>A client-side JavaScript API / library</li><li>A client-side CSS for applying styles</li><li>Document the API</li><li>Create simple examples enabling a simple cut-and-paste approach to learning your API</li></ul>As you can see, they have, true to the name, provided blueprints for mashups, best practices, possible hurdles and workarounds, problems and solutions. 
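The jsonp idea in particular is small enough to sketch; the names here are illustrative, and a real endpoint should validate the callback name before echoing it back:

```java
// Sketch of server-side "JSON with padding": if the client passed a
// callback parameter, wrap the JSON payload in a call to that function
// so a dynamically generated script tag can consume it cross-domain.
public class Jsonp {
    static String render(String json, String callback) {
        if (callback == null || callback.isEmpty()) {
            return json;                      // plain JSON response
        }
        return callback + "(" + json + ");";  // script the browser runs
    }
}
```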
Once again - highly recommended for anybody who has / will have / may have anything to do with mashups.<br /><br /><span style="font-weight: bold;">Phobos - A Server-side JavaScript framework</span><br /><br />While JS clients are ubiquitous and obviously here to stay, for the life of me, I have not yet come to terms with implementing my server in JS as well. The 2 reasons they cited - impedance mismatch and all-you-need-is-an-F5 to redeploy - didn't quite do it for me. Maybe in another life I'll come around to JavaScript servers, but not for now. And if that day were ever to come, there should be a Java<span style="font-style: italic;">Script</span>One and no JavaOne.<br /><br /><span style="font-weight: bold;">Garbage collection</span><br /><br />If there was one objective of this session, it was to clear the GC myths out there. Some of the finest Java minds were refuting the urban legends, and when they talk, you listen:<br /><ul><li>Object allocation is cheap. Reclaiming young objects is also cheap.</li><li>Small, short-lived immutable objects are good. Large, long-lived mutable objects are bad.</li><li>Nulling references rarely helps - except when it comes to arrays.</li><li>Avoid finalizers - in most cases better alternatives are possible.</li><li>Avoid <code>System.gc()</code> - except between well-defined application phases and when the load on your system is low.</li><li>Object pooling is not required in today's VMs. It is a legacy of older VMs. 
The exceptions are objects that are expensive to allocate / initialize and objects that represent scarce resources.</li><li>Consider using <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/lang/ref/package-summary.html">reference-objects</a> for limited interactivity with the garbage collector.</li></ul><br />Certain memory-leak pitfalls:<br /><ul><li>Objects in wrong scope</li><li>Lapsed listeners</li><li>Metadata mismanagement</li></ul><br />They strongly advocated <a href="http://findbugs.sourceforge.net/">FindBugs</a> for finding such pitfalls as well as other potential bugs.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com2tag:blogger.com,1999:blog-7669070449985794434.post-25115911050184570852007-05-13T13:07:00.000-07:002007-05-13T14:40:54.029-07:00JavaOne Day 2: Hardcore Java and Java EE 5While Day 1 saw an <a href="http://abstractfinal.blogspot.com/2007/05/javaone-day-1-scripting-everywhere.html">overdose of scripting</a>, sanity made a return on Day 2. Although JavaScript <em>technology</em> (whoever came up with that) sessions continued, there was a good supply of deep-dive Java and Java EE sessions to keep me busy. And there was also some <span style="font-style: italic;">WADL</span>ing.<br /><br /><span style="font-weight: bold;"> Generics and Collections</span><br /><br />Generics have been around for a while and most Java programmers have some understanding of them. There have been many criticisms of the erasure-based implementation of generics - since parameter types are not reified, constructs that require type information at runtime don't work well. However, erasure allows two very important benefits - migration compatibility and piecewise generification of existing code. These benefits alone make erasure a necessary evil.<br /><br />Huge additions have been made to the Collections framework in Java 5 and 6. 
So much so that many recommend that programmers never use arrays - Collections all the way!<br /><br /><span style="font-weight: bold;">Concurrency Practices</span><br /><br />The concurrency classes introduced in Java 5 are a great new toolset for the Java programmer. New concurrency-related annotations such as <code>@ThreadSafe</code>, <code>@GuardedBy</code>, etc. are being considered. One important aspect to keep in mind - imposing locking requirements on external code is asking for trouble. Ensure that you make your code threadsafe yourself. The performance penalties attributed to <code>synchronized</code> constructs are overblown.<br /><br />Immutable objects are your friends - <code>final</code> is the new <code>private</code>! They are automatically threadsafe. Object creation is cheap. Aim for less and less mutable state.<br /><br />Performance is now a function of parallelization - write code such that it can use more cores to get the job done faster. So to improve scalability, find serialization (serial execution) in your code and break it up:<br /><ul><li>Hold locks for less time</li><li>Use <code>AtomicInteger</code> for counters</li><li>Use more than one lock</li><li>Consider using <code>ConcurrentMap</code></li><li>Consider <code>ThreadLocal</code> for heavyweight mutable objects that don't need to be shared</li></ul><span style="font-weight: bold;">Effective Java</span><br /><br />The Builder pattern allows you to construct objects cleanly in a valid state with both required and optional parameters. The basic premise can be expressed in code as such:<br /><pre>MyObject myobj = new MyObject.Builder(requiredParam1, requiredParam2)
    .optionalParam1(opt1)
    .optionalParam2(opt2)
    .build();</pre><br />Generics - avoid raw types in new code. Don't ignore compiler warnings. Annotate local variables with <code>@SuppressWarnings</code> rather than the entire method / class. Use bounded wildcards to increase applicability of APIs. 
3 take-home points for parameterized methods:<br /><ul><li>Parameterize using <code>&lt;? extends T&gt;</code> for reading from a Collection</li><li>Parameterize using <code>&lt;? super T&gt;</code> for writing to a Collection</li><li>Parameterize using <code>&lt;T&gt;</code> for methods that both read and write</li></ul>If a type variable appears only once in a method signature then consider using a wildcard. Avoid using bounded wildcards in return types. Generics and arrays don't mix well - prefer collections to arrays wherever possible.<br /><br /><span style="font-weight: bold;">Java EE 5 Blueprints</span><br /><br /><code>Filter.doFilter()</code> may run in a different thread than the <code>Servlet.service()</code> method. The Servlet API does not use NIO; however, it is possible to implement it yourself. New annotations may make <code>web.xml</code> optional.<br /><br />The JSP and JSF EL have been unified. <code>#</code> is now reserved in JSP 2.1. The <code>javax.faces.ViewState</code> hidden field (for client-side state) will be standardized. Include this field in your AJAX postback. Application-wide configuration of JSF resource bundles will be possible in <code>faces-config.xml</code>. The <code>&lt;f:verbatim&gt;</code> tag is no longer needed to interleave HTML and JSF content. <code>@PostConstruct</code> and <code>@PreDestroy</code> annotations will be supported for JSF managed beans.<br /><br /><span style="font-weight: bold;">WADL</span><br /><br />WADL - Web Application Description Language. As a colleague of mine put it - it's WSDL for REST! And IMO that's almost what it is. The intent behind introducing WADL is good - a formal definition of your REST resources allows you to automatically stub out code in your favorite language and aids testing. 
However, WADL has many rough edges - overloaded / loosely typed query parameters, security, non-standard content negotiation, et al. - and I don't see myself using it until those have been addressed.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com0tag:blogger.com,1999:blog-7669070449985794434.post-42297320081816416662007-05-08T23:54:00.000-07:002007-05-09T00:04:27.715-07:00JavaOne Day 1: Scripting EverywhereIf you were new to JavaOne, you might be forgiven for thinking that the message was - Strongly typed languages are so '90s, scripting is in! And why wouldn't you - every other word you heard was JRuby, Groovy, server-side JavaScript. And if that wasn't enough, Sun introduced a half-baked, half-after-thought, all put-together-in-a-last-minute-sprint JavaFX. Ok so I don't want to disregard the hard work put in by folks to get JavaFX ready for JavaOne but it all seemed so unconvincing. The demos were clunky, some didn't work, and those that did were hardly "Wow!".<br /><br /><span style="font-weight: bold;">Nothing to "Wow!"</span><br /><br />If you like showtime, there was none. No big announcements, no new features, no gen-next projects. But IMO that was a good thing. Java has seen many evolutions, many innovations. It's time to make it just a little better, a little more robust, a little more performant. Java is a mature technology now, and any radical change might actually be shaky news for the massive Java community.<br /><br /><span style="font-weight: bold;">Oh so Groovy</span><br /><br />As I mentioned - scripting languages were the talk of the day. And Groovy is one of them. It has been around for a while and the rich set of features it offers is testimony to that - Java-like syntax, pre-compiled or runtime compilation, "duck typing", annotation support and a whole pageful of other features.<br /><br />I wonder what has led to this sudden wave of dynamic languages? 
Wasn't it only yesterday when everything-should-be-strongly-typed was the hallowed rule of programming? Maybe it's the advent of mashups. Maybe the uptake in AJAX. Or maybe we just need something new to keep us interested!<br /><br /><span style="font-weight: bold;">Web as a platform</span><br /><br />"Integrated rich clients" - or what we know as mashups - are the killer apps of the day. <span style="font-style: italic;">Things </span>(technologies, techniques, hacks) that allow for mashups to be <span style="font-style: italic;">assembled </span>with minimal code will lead the victory parade. More code is executed on the client but most of that code is still sent to the client by the server. No wonder scripting is big today and getting bigger. But what I <span style="font-style: italic;">do </span>wonder is: where does that leave EJBs / ESBs?<br /><br /><span style="font-weight: bold;">JSR 311 - Java REST API</span><br /><br />I had <a href="http://abstractfinal.blogspot.com/2007/04/do-we-need-frameworks-for-rest.html">questioned</a> the need for frameworks to build RESTful services earlier. And after having sat thru a session explaining the new Java REST API spec I am no closer to getting an answer to that. Ok so they showed a few annotations which got rid of a few lines of code. But does the same simplicity work if I have even a slightly complicated URI pattern? Or will it work if I don't use the "Accept" header for content negotiation and use query params or path extensions? And from what I saw it seemed they were generating XML and JSON from Java code. Aren't JSPs (or any template technology) a whole lot easier for that?<br /><br /><span style="font-weight: bold;">I got Closure</span><br /><br /><a href="http://gafter.blogspot.com/">Neal Gafter</a> gave a presentation on Closures in Java. Very well done, no gimmicks, to the point. A great presentation for a potentially very powerful feature in the Java language. 
If Neal needs more voices to move this up the JCP and into Sun's plans for Java SE 7, count me at +1.<br /><br /><span style="font-weight: bold;">Take home point</span><br /><br />Like 'em or hate 'em - scripting is in town.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com2tag:blogger.com,1999:blog-7669070449985794434.post-33382297085424350812007-04-28T11:51:00.000-07:002007-04-28T11:52:43.680-07:00Do we need frameworks for REST?Many REST frameworks such as <a href="http://www.restlet.org">Restlets</a> and <a href="http://simpleweb.sourceforge.net/">simple web</a> are gaining popularity of late. There's also <a href="http://jcp.org/en/jsr/detail?id=311">a new JSR</a> to give Java developers a new API to build RESTful web services. It's quite natural that as more people <a href="http://pluralsight.com/blogs/tewald/archive/2007/04/26/46984.aspx">"get" REST</a> they look for ways to simplify building their next REST web application / service.<br /><br />And while I admittedly have yet to fully "get" REST, whatever I have got so far is certainly not by using frameworks that let me implement REST by writing POJOs but rather by extensive use of the POW (Plain Old Web) and POGS (Plain Old Google Search). And applying the principles I learned by using POS (Plain Old Servlets).<br /><br />This is not a criticism of these frameworks. Maybe I still haven't understood the role of these frameworks. And maybe once I do understand their goodness, I myself will start using them.<br /><br />But my question is this - why are servlets not good enough? Sure they have their limitations. But rather than have yet another framework or a brand new API, why not have a JSR to fix the servlets and JSPs themselves? (A good start would be to <em>not</em> enable sessions for JSPs by default.) After all, isn't it the convenience of using frameworks galore that has kept the larger community from understanding the goodness of HTTP? 
Isn't it the same convenience that has made it a common practice to use (bloated) sessions?<br /><br />You <em>have to</em> wet your feet to tread the waters. You <em>have to</em> get your hands dirty in HTTP to implement REST.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com2tag:blogger.com,1999:blog-7669070449985794434.post-22405569814954283262007-04-19T20:43:00.000-07:002007-04-19T20:44:45.475-07:00Legacy operations vs. REST resources<a href="http://www.innoq.com/blog/st/2007/04/19/rest_is_not_crud.html">Stefan Tilkov</a> and <a href="http://zcologia.com/news/437/resources-not-objects/">Sean Gillies</a> respond to my <a href="http://abstractfinal.blogspot.com/2007/04/restful-urls-for-non-crud-operations.html">previous post</a> about modelling operations in REST. There's also an interesting discussion on this on the <a href="http://tech.groups.yahoo.com/group/rest-discuss/message/8244">rest-discuss yahoo group</a>.<br /><br />First of all, I think my example wasn't a good one. While the operations I had in mind were operations alright, they weren't state changing ones. I realize that my example definitely implies changing state. And I myself would advocate a PUT or at worst a POST in that case. The case I am making is for operations that don't change state but are like queries on a given resource.<br /><br />I agree with both Stefan and Sean. "Resources, not Objects" as Sean puts it. In the REST world, a <code>person</code> does not <code>walk</code> but s/he reaches a <code>location</code>. And if I were designing a REST API for a new system I would most certainly use that approach.<br /><br />But if I have existing APIs or SOAP web services such that the verbs <code>talk</code> and <code>walk</code> were firmly instilled in the verbiage of my user community, it might be a difficult proposition for me to suddenly introduce a new vocabulary for the same set of operations to my users. 
The user community sees them as operations and not in terms of the resulting resources (<code>words</code> and <code>location</code>). Legacy wins over technical correctness.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com3tag:blogger.com,1999:blog-7669070449985794434.post-79531975323125684432007-04-17T16:32:00.000-07:002007-04-17T19:14:50.043-07:00RESTful URLs for non-CRUD operationsA common way of designing a REST system is to recognize the resources and associate a logical hierarchy of URLs to them and perform the traditional CRUD operations by using the well-known HTTP methods (GET, POST, PUT and DELETE).<br /><br />However, many systems have operations which don't quite fit into the CRUD paradigm. Say, I have a repository of <code>persons</code>. And say I have an <code>id</code> associated with every person which allows me to have nice URLs to each person as such: <br /><pre>http://example.com/persons/1</pre>This URL obviously GETs me the details pertaining to person <code>1</code>. Now what if I wanted this person to <code>talk</code> and <code>walk</code>. I can think of 3 approaches to designing this scenario:<ol><li><b>Operations as resources</b>: Which means I can have URLs such as:<br /><pre>http://example.com/persons/1/talk<br />http://example.com/persons/1/walk</pre>However, <code>walk</code> and <code>talk</code> are obviously <em>not</em> resources and designing it this way might be considered unRESTful(?).</li><br /><br /><li><b>Operation as a query parameter</b>: This leads to URLs such as:<br /><pre>http://example.com/persons/1?operation=talk<br />http://example.com/persons/1?operation=walk</pre>A drawback of this approach comes to the fore if you need <em>other</em> parameters to perform an operation. So, for instance, if you had to specify <code>what</code> to <code>talk</code> or <code>where</code> to <code>walk</code>. 
You'll end up with URLs such as these:<br /><pre>http://example.com/persons/1?operation=talk&what=Hello<br />http://example.com/persons/1?operation=walk&where=North</pre>As you can see, with this approach you end up having resources with overloaded operations and parameters. And you have to be aware of these combinations yourself and also explain them to your users.</li><br /><br /><li><b>Matrix URIs</b>: Specify the operations using <a href="http://www.w3.org/DesignIssues/MatrixURIs.html">matrix-like URIs</a>. With this, your URLs look as such:<br /><pre>http://example.com/persons/1;talk<br />http://example.com/persons/1;walk</pre>And you can specify the operation parameters using the traditional query string:<br /><pre>http://example.com/persons/1;talk?what=Hello<br />http://example.com/persons/1;walk?where=North</pre>With this approach you have 3 distinct ways of representing 3 different things - slashes (<code>/</code>) for hierarchical resources, semi-colons (<code>;</code>) for operations and query strings (<code>?param1=val1&param2=val2</code>) for parameters.</li><br /></ol>Although I like the clarity of #3, I haven't seen this approach used all that much. Which makes me reluctant to use it myself. Are there any web systems out there that use this approach? Are there any drawbacks to this approach that explain why it is not widely employed? Are there any other approaches to designing URLs for operations?<br /><br />Many questions. Any answers?Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com10tag:blogger.com,1999:blog-7669070449985794434.post-8627848569810672102007-04-15T22:31:00.000-07:002007-04-15T22:34:09.603-07:00SOAP over DCOM in the ArcGIS ADF explainedThe ArcGIS Server Java ADF supports accessing the ArcGIS Server over the internet (http) as well as locally (dcom). Internet access uses SOAP. 
Local access can either work with the server objects directly over DCOM or issue SOAP calls over DCOM.<br /><br />For local access, the ADF gives preference to SOAP over DCOM and accesses the ArcObjects directly over DCOM only when the functionality is not available thru SOAP. There are 2 primary reasons why SOAP / DCOM is preferred to ArcObjects / DCOM:<ul><li>Performance</li><li>Code reuse</li></ul><br /><b>Performance</b>:<br /><br />When you work with ArcObjects / DCOM the server gives you access to remote ArcObjects proxies (such as <code><a href="http://edndoc.esri.com/arcobjects/9.2/Java/api/arcobjects/com/esri/arcgis/carto/IMapDescription.html">IMapDescription</a></code>, <code><a href="http://edndoc.esri.com/arcobjects/9.2/Java/api/arcobjects/com/esri/arcgis/carto/ILayerDescription.html">ILayerDescription</a></code>, etc.). Every method call on these proxies is a remote method call. So methods such as <code>IMapDescription.getName()</code> and <code>ILayerDescription.getID()</code> are both remote method calls.<br /><br />On the other hand, when you work with SOAP / DCOM, only the methods defined in the <a href="http://edndoc.esri.com/arcobjects/9.2/Java/java/server/webservices/web_svc_soap_wsdl.html">WSDLs</a> are remote calls. Once you have made those remote calls, you get access to objects which are local value objects (such as <code><a href="http://edndoc.esri.com/arcobjects/9.2/Java/api/adfwebcontrols/com/esri/arcgisws/MapDescription.html">MapDescription</a></code>, <code><a href="http://edndoc.esri.com/arcobjects/9.2/Java/api/adfwebcontrols/com/esri/arcgisws/LayerDescription.html">LayerDescription</a></code>, etc.). So methods such as <code>MapDescription.getName()</code> and <code>LayerDescription.getLayerID()</code> are both local method calls.<br /><br />As you can infer, ArcObjects / DCOM elicits more "chattiness" with the server than SOAP / DCOM. 
And the reduced number of remote calls with SOAP / DCOM obviously translates to better performance over the lifetime of your application.<br /><br /><b>Code reuse</b>:<br /><br />If you look at the various functionalities supported by the ADF, such as <code><a href="http://edndoc.esri.com/arcobjects/9.2/Java/api/adfwebcontrols/com/esri/adf/web/ags/data/AGSMapFunctionality.html">AGSMapFunctionality</a></code>, <code><a href="http://edndoc.esri.com/arcobjects/9.2/Java/api/adfwebcontrols/com/esri/adf/web/ags/data/AGSGeocodeFunctionality.html">AGSGeocodeFunctionality</a></code>, etc., the same functionality classes are used for both internet <em>and</em> local access. This is possible because these functionalities are implemented using the SOAP interface to the server. The transport is HTTP in the case of internet access and DCOM in the case of local access, but the code remains the same, allowing us to reuse the same functionality implementation in both cases.<br /><br />Capabilities such as the <code><a href="http://edndoc.esri.com/arcobjects/9.2/Java/api/adfwebcontrols/com/esri/adf/web/ags/tasks/EditingTask.html">EditingTask</a></code>, which are not available through SOAP, have obviously been implemented using ArcObjects / DCOM.<br /><br /><b>Bottom line</b>:<br /><br />In summary, if your functionality can be implemented using the SOAP interface, you should use it. The richness of ArcObjects / DCOM is of course always available to you in cases where SOAP does not suffice.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com1tag:blogger.com,1999:blog-7669070449985794434.post-39214584586256151192007-04-13T20:39:00.000-07:002007-04-13T21:51:25.010-07:00What is REST?There is a wealth of material out there on REST but very little that explains it succinctly enough for you to, well, pitch it to your manager in the elevator.
Looks like someone has tried to do that and done a very good job of it:<br /><br />Link: <a href="http://www.megginson.com/blogs/quoderat/2007/02/15/rest-the-quick-pitch/">REST: the quick pitch</a><br /><blockquote>With REST, every piece of information has its own URL.</blockquote><br />I'll use some of David's material myself and highlight the key REST concepts as bullet points:<br /><ul><br /><li>[<i>Of course</i>] <b>Everything is a URL</b>: And what does that mean? Immediately all your information is readily accessible to everyone. It is cache-able, bookmark-able, search-able, link-able - basically it's intrinsically web enabled.</li><br /><li><b>Think resources</b>: With REST it helps if you design your system as a repository of resources. Not services. Not as a data provider - but resources.</li><br /><li><b>URLs matter</b>: You might argue that if it's machines that are calling into my REST resources, why does the niceness of URLs matter? Well, given that URLs identify resources, and resource representations can be human-readable text formats or browser-readable HTML, your REST URLs are no longer just a privilege of machines. So URLs matter. Avoid query parameters as much as possible. You have a better chance of being indexed by search engines if you avoid 'em. Your implementation becomes easier. Refactoring is smoother.</li><br /><li><b>POSTs are ok</b>: In the ideal world all HTTP clients and servers would allow PUT and DELETE. But the world doesn't come to a standstill without these methods. Many have done just fine using POST and so would you.</li><br /><li><b>Requesting content type in URLs is also ok</b>: Again, in the ideal world, clients and servers could do content negotiation. And again, many have done just fine by specifying the format in the URL path or as a query parameter and so would you.</li><br /><li><b>Consider JSON</b>: JSON is simple. Parsing JSON is simpler.
You don't even need to parse it if you are consuming it in a browser. You still want to serve XML, given the huge support for it, but JSON support is spreading every day and you'll benefit if you're a part of it.</li><br /><li><b>Use HTTP as a platform</b>: HTTP is not just a protocol. It's a platform. It already provides services such as caching, security (of course more could be done here), and standardized status codes - benefit from them.</li><br /></ul><br />Is that all there is to it? Hardly. There's literally a whole <a href="http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm">science</a> behind it. But that will do for now.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com0tag:blogger.com,1999:blog-7669070449985794434.post-9476733448083312632007-04-05T06:45:00.000-07:002007-04-13T21:51:15.372-07:00Enough already... I need some RESTLike many others, inspired by Pete Lacey's <a href="http://wanderingbarque.com/nonintersecting/2006/11/15/the-s-stands-for-simple/">S Stands for Simple</a>, late last year I began to look into REST and by extension into HTTP, status codes, web caching, et al. In a nutshell I went back to the basics and discovered the wealth that I had turned a blind eye to, what with the latest and greatest frameworks "abstracting out" what constitutes the web from me.<br /><br />Suddenly the stateless nature of HTTP transformed from being a limitation to a virtue. The status codes weren't just numbers but a means of communication (in some cases even the lack of it - 304, anyone?). Caching wasn't something I needed to build but something I needed to learn how to use (ETag, If-None-Match, Cache-Control headers, what not). Ditto with security.
URLs ceased being just names - they became a language.<br /><br />It took all of that for me to realize that it's not the next WS-* standard that will help me develop the next state-of-the-art web service but the existing goodness in HTTP; it's what makes the web work today, it's what brought you to this page and what enabled me to publish this page to the world.<br /><br />Having relatively recently discovered REST I find it simple and natural. Simple is good. Natural is good. It uses the existing web / HTTP infrastructure not merely as a protocol but as a platform. And it fits into this Web 2.0 thingy to a tee: Issue AJAX requests (actually it's more like AJAX without the X). Receive JSON responses. There's your secret sauce to building mashups.<br /><br />This is not to say that I suddenly shun everything that is SOAP and just do REST all the way. Far from it. SOAP has served me very well and I like it and I'll continue to use it. Something that lets me use pure Java / .NET while working with a piece of software half a world away from me is too precious to be ignored.<br /><br />I believe that SOAP and REST are not contradictory but complementary. They have their own usages and users and they will coexist. And I'll continue to use them both as per my application needs. Horses for courses.<br /><br />I rest my case.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com0tag:blogger.com,1999:blog-7669070449985794434.post-85034710234420124942006-12-04T17:50:00.000-08:002007-04-13T21:51:06.578-07:00LinkedHashMap to implement LRU cachesI must be living under a rock to have not noticed this before. The <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/LinkedHashMap.html">LinkedHashMap</a> has a <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/LinkedHashMap.html#LinkedHashMap%28int,%20float,%20boolean%29">3-argument constructor</a>. The last argument, if <code>true</code>, orders the map according to the access order.
It also defines a protected <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/LinkedHashMap.html#removeEldestEntry%28java.util.Map.Entry%29">removeEldestEntry()</a> method which returns <code>false</code> by default. One can override this method to return <code>true</code> in certain situations, in which case the map will remove its eldest entry.<br /><br />The implementation of an <code>LRUMap</code> will look something like this:<br /><pre>public class LRUMap<K,V> extends LinkedHashMap<K,V> {<br /><br /> int maxLimit;<br /><br /> public LRUMap(int maxLimit) {<br /> //capacity sized for maxLimit entries at a load factor of 0.7<br /> super(maxLimit * 10/7 + 1, 0.7f, <strong>true</strong>);<br /> this.maxLimit = maxLimit;<br /> }<br /><br /> @Override<br /> protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {<br /> return size() > maxLimit;<br /> }<br /><br />}</pre><br />The <code>removeEldestEntry()</code> method is invoked by the <code>put()</code> and <code>putAll()</code> methods. With the above implementation, if the size of the map exceeds the max limit, the eldest entry is removed.<br /><br />Short and sweet. Albeit some 2 years too late for my unenlightened self.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com0tag:blogger.com,1999:blog-7669070449985794434.post-46958989042738999102006-03-26T18:36:00.000-08:002007-04-13T21:50:49.153-07:00ArcGIS Server coding practicesThe <a href="http://www.esri.com/events/devsummit/">ESRI dev summit</a> was a great experience. Meeting fellow developers is always good - you talk the same language, exchange ideas, receive feedback, discuss how things can be made better,... I could go on and on.<br /><br />Ok, so I was asked quite a few server-related questions at the summit: What should I keep in mind while coding against the server? Any dos and don'ts? Any particular classes / methods I should read more about? etc...
So here I'll talk about certain points you should keep in mind while working with the ArcGIS Server, particularly the <a href="http://edndoc.esri.com/arcobjects/9.1/Java/arcengine/com/esri/arcgis/carto/MapServer.html">MapServer</a>:<br /><ul><br /> <li><strong>Use the description objects</strong>: Sure the <code>MapServer</code> gives you access to the <a href="http://edndoc.esri.com/arcobjects/9.1/Java/arcengine/com/esri/arcgis/carto/IMap.html">IMap</a> but you don't have to go there. A lot can be accomplished by using the <a href="http://edndoc.esri.com/arcobjects/9.1/Java/arcengine/com/esri/arcgis/carto/IMapDescription.html">IMapDescription</a>, <a href="http://edndoc.esri.com/arcobjects/9.1/Java/arcengine/com/esri/arcgis/carto/ILayerDescription.html">ILayerDescription</a>, et al.</li><br /> <li><strong>Release the <a href="http://edndoc.esri.com/arcobjects/9.1/Java/arcengine/com/esri/arcgis/server/ServerContext.html">ServerContext</a></strong>: In a pooled environment you are sharing the server object with other users. So it's your responsibility to release the context once you have performed your set of operations. For web applications, this translates to releasing the context after every request. Of course, if you are using the ADF, the ADF does that for you so you need not worry. But even then it's good to keep this in mind.</li><br /> <li><strong>Do not reference server objects after the context has been released</strong>: Do you continue to work with the <a href="http://java.sun.com/j2se/1.4.2/docs/api/java/sql/Statement.html">Statement</a> object once the JDBC <a href="http://java.sun.com/j2se/1.4.2/docs/api/java/sql/Connection.html">Connection</a> has been closed? NO. Why? Because a <code>Statement</code> can be executed only while the <code>Connection</code> is live. Similarly, you should not reference ANY server objects once you have released the <code>IServerContext</code>. 
If you want to persist the state of any server object, you should use the <a href="http://edndoc.esri.com/arcobjects/9.1/Java/arcengine/com/esri/arcgis/server/IServerContext.html#saveObject(java.lang.Object)">saveObject()</a> method to get a serialized string representation of the object before releasing the context. You can then rehydrate the object by using the <a href="http://edndoc.esri.com/arcobjects/9.1/Java/arcengine/com/esri/arcgis/server/IServerContext.html#loadObject(java.lang.String)">loadObject()</a> method once you have regained access to the context.</li><br /> <li>In case you had forgotten - <strong>use the description objects</strong>: Did you know you could add serializable custom graphics to the <code>IMapDescription</code> object?</li><br /> <li><strong>Use methods exposed by the <code>MapServer</code></strong>: Look at the javadoc for the <code>MapServer</code> and for all the interfaces that it implements. It can do a lot more than you might think - you can export maps and layouts, do queries and identifies, get feature info, handle SOAP requests, and a whole lot more.</li><br /> <li><strong>Create server objects on the server</strong>: You don't create RMI objects on the client. You don't create EJBs on the client. Similarly, you don't create ArcGIS server objects on the client. ALWAYS use the <a href="http://edndoc.esri.com/arcobjects/9.1/Java/arcengine/com/esri/arcgis/server/IServerContext.html#createObject(java.lang.String)">IServerContext.createObject()</a> method to create any and all ArcObjects in a server application within your server context at the server.</li><br /> <li>BTW, <strong>use the description objects</strong>: Did you know you could change layer visibilities, select features, even set definition expressions with the <code>ILayerDescription</code> object?</li><br /></ul><br />This is by no means an exhaustive list, but it should help you when working with the server.
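<br /><br />To tie the recurring advice together, here is a rough pseudocode sketch of the description-object pattern - treat the method names as approximate, and verify the exact signatures against the <code>MapServer</code> and <code>IServerContext</code> javadoc linked above:

```
// Pseudocode sketch only -- not exact API; consult the 9.1 javadoc.
ctx = serverConnection.createServerContext("myServerObject", "MapServer")
mapServer = ctx.getServerObject()

// 1. Work with a description object instead of reaching into IMap.
mapDesc = defaultMapDescriptionFor(mapServer)

// 2. Mutate state on the description, not on the shared server object.
layerDesc = mapDesc.getLayerDescriptions()[0]
layerDesc.setVisible(true)
layerDesc.setDefinitionExpression("POP > 100000")

// 3. Hand the modified description back to the MapServer to render.
image = mapServer.exportMapImage(mapDesc, imageDesc)

// 4. Release the context after every request (cardinal rule #1).
ctx.releaseContext()
```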
Oh yeah, in parting, in case you had missed it: <em>use the description objects</em>!Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com0tag:blogger.com,1999:blog-7669070449985794434.post-80082862027819889132006-03-14T08:10:00.000-08:002007-04-13T21:50:39.391-07:00Using the context control in a multiple page webappThere have been many <a href="http://forums.esri.com/Thread.asp?c=158&f=1699&t=164586&mc=12">forum questions</a> about the correct usage of the context control in a multiple page web application. I will be the first one to admit that this is more of a bug in the ADF than an incorrect usage on the part of the user. Ok, so the short of it is this question: <em>How do I access the same context across multiple pages of an ADF web application?</em><br /><br />Typically, the context is specified on a JSP using the <code>context</code> tag:<br /><pre><ags:context id="myCtx" resource="myServerObject@myHost"/></pre><br />Now, in an ideal world, if you have multiple pages in your webapp, you would use the context control in the exact same manner on all pages and it would work fine. Unfortunately, that's not the case at 9.1. Currently, the way the <code>context</code> tag works is that if you specify the <code>resource</code> attribute of the tag, the first time that the page is accessed, it actually creates a <em>new</em> context. What this means is that if you use the <code>context</code> tag by specifying the <code>resource</code> attribute on different pages of the same webapp, it gives the impression that the context is "resetting" itself - i.e.
the current state of your application is lost and the application reverts to the original state of the map server object.<br /><br />The workaround is that on subsequent pages of your webapp, you should not specify the <code>resource</code> attribute of the <code>context</code> tag and should ensure that the <code>id</code> is the same as the id that you specified on the main page:<br /><pre><ags:context id="<strong>myCtx</strong>"/><br /><!-- resource attribute is not used and id is the same as the id of the context on the main page --></pre><br />If the <code>resource</code> attribute is not specified, the <code>context</code> tag does not create a new context but instead tries to find an existing context with the same id as specified in the <code>id</code> attribute. So as long as you have the correct id specified, the context that you created on the main page will be accessed on all subsequent pages of the webapp.<br /><br />This is admittedly confusing and we have addressed it at 9.2 so that the context control can be used in a consistent manner across all pages of the webapp.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com1tag:blogger.com,1999:blog-7669070449985794434.post-69508782272301876222005-12-21T14:32:00.000-08:002007-04-13T21:50:28.904-07:00Exception handling in web frameworksThere are many good articles out there on good exception handling practices. Here I'll list a few that I've learned over the course of working on web applications, particularly as they pertain to web frameworks.<br /><ol><br /> <li> <strong>Chain Exceptions</strong>:<br />Sure one should never lose sight of the destination, but with exceptions one should never lose sight of the source - the all-important root cause, the crux of the problem. While you can catch exceptions and throw custom exceptions, never lose the exception that you caught.
When creating custom exception classes, always include a constructor that takes the <code>cause</code> as an argument:<br /><pre>public CustomException(String message, <strong>Throwable cause</strong>) {<br /> super(message, cause);<br />}</pre><br />Or if you are using an exception which does not have such a constructor, you can always use the <code>initCause()</code> method:<br /><pre>catch(AnException e) {<br /> IllegalStateException ise = new IllegalStateException("Something went wrong.");<br /> <strong>ise.initCause(e);</strong><br /> throw ise;<br />}</pre><br /></li><br /> <li> <strong>Throw framework-specific exceptions</strong>:<br />The root cause tells you <em>what</em> went wrong but you also want to know <em>where</em> it went wrong. This is especially critical in web frameworks where the framework makes callbacks, does dependency injection and other <a href="http://www.jroller.com/page/javanoid?entry=inversion_of_control_in_the">IoC</a> stuff to call into app-specific custom client code. When your client comes to you with a support call or posts the problem on the forums, the first thing you want to know is where in <em>your</em> stack it failed. Debugging and fixing the problem becomes a lot easier once you have narrowed it down to your own backyard. Framework exceptions are best implemented as a hierarchy of custom runtime exceptions. The root of this hierarchy could be a <code>RuntimeException</code> named <code>MyFrameworkException</code> and it could have any number of sub-classes such as <code>MyFrameworkModelException</code>, <code>MyFrameworkControllerException</code>, <code>MyFrameworkViewException</code>, etc. depending on how you have categorized the different pieces of your framework.</li><br /> <li> <strong>Log, log, log</strong>:<br />Where exception messages don’t tell much, debug messages logged from important places will. What comments are to source code, log messages are to runtime.
The standard JDK logging API allows you to log messages at multiple levels including <code>INFO, CONFIG, FINE, FINER,</code> et al. Logging messages at various levels will not only provide answers to the hows, whens and wheres of the problem but will also go a long way in helping your customers understand the underpinnings of your framework. And more often than not that is all they need to fix the problem themselves!</li><br /></ol><br />We are constantly trying to improve the exception handling and logging in the ArcGIS ADF and you can be assured that we are employing these principles as well.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com1tag:blogger.com,1999:blog-7669070449985794434.post-81468116469255394442005-09-30T19:41:00.000-07:002007-04-13T21:50:19.153-07:00CONned by windowsLink: <a href="http://msdn.microsoft.com/library/en-us/fileio/fs/naming_a_file.asp">Naming a file</a><br /><blockquote> Do not use the following reserved device names for the name of a file: CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9. Also avoid these names followed by an extension, for example, NUL.tx7.</blockquote><br />I was writing an auto code generator which was generating many Java files, and one of the files to be generated was <code>Con.java</code>. My generator generated every other file, but when it came to <code>Con.java</code> it threw a <code>FileNotFoundException</code>. Which was very weird because I was trying to <em>create</em> a new file, and so it not finding the file was in fact a good thing! To prove that it wasn't just a figment of my imagination I checked and confirmed that there was no such file. I rechecked and yet there was no such file. I cursed and ran the code generator again - this, my never-failing trump card, betrayed me as well.<br /><br />Of course these days, where everything else fails, Google doesn't.
And sure enough it brought me to <a href="http://msdn.microsoft.com/library/en-us/fileio/fs/naming_a_file.asp">this</a> page and it was clear that CON was one of the many reserved words and so I could not create a file with that name.<br /><br />While I am ok with Windows not allowing me to create a file whose name is a reserved word, not allowing files with a reserved word followed by an extension is all too limiting. And worse still, the error messages when I try to create a file <code>con.whatever</code> vary from "access denied" to "file already exists" to "are you kidding me?" (no, not the last one, but it came close).Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com0tag:blogger.com,1999:blog-7669070449985794434.post-51228231153605860162005-09-26T00:11:00.000-07:002007-04-13T21:50:07.329-07:00Extend the ArcGIS ADF with POJOs - Part III<p><br />The first cardinal rule when working with pooled ArcGIS Server objects in a webapp is that you must release the server context after every web server request. This is because you are sharing the server object with many other users and you need to put it back into the pool so that other users can access it. The second cardinal rule is that you cannot reference any server object after you have released the server context. If you do, it would be like executing a JDBC <code>Statement</code> after closing the <code>Connection</code>. You always need a live connection or a context in a client-server environment to work with objects that are remotely hosted.<br /><p><br /><br />So the question then is: How do I preserve the current state of a server object after releasing the context? The answer lies in the <code>saveObject()</code> and <code>loadObject()</code> methods of the <a href="http://edndoc.esri.com/arcobjects/9.1/Java/arcengine/com/esri/arcgis/server/IServerContext.html">IServerContext</a>.
You can serialize objects to their string representations with <code>saveObject()</code> and you can deserialize them back to their object forms with <code>loadObject()</code>. So calling <code>saveObject()</code> before releasing the context and <code>loadObject()</code> on reconnect sets you up well to preserve the current client state while working with pooled objects.<br /><br /><p><br />As always, the ADF makes such tasks easy for you. Rather than having to scratch your head about when to call loads and saves, where to call them, how to keep track of them, et al, the ADF provides you a simple interface in <a href="http://edndoc.esri.com/arcobjects/9.0/Java/webcontrols/com/esri/arcgis/webcontrols/data/WebLifecycle.html">WebLifecycle</a> wherein you can do all of these tasks, and only for those objects needed for the task at hand. The ADF calls the relevant methods of the <code>WebLifecycle</code> just prior to releasing the context as well as immediately after a reconnect.<br /><p><br />In the <code>CountFeatures</code> class that we have been working on through Parts <a href="http://www.jroller.com/page/javanoid?entry=extend_the_arcgis_adf_with">I</a> and <a href="http://www.jroller.com/page/javanoid?entry=extend_the_arcgis_adf_with1">II</a> we may want to persist the <code>SpatialFilter</code> object. Generally you persist only those objects which carry enough client state to justify the saves and loads. It's obvious that the <code>SpatialFilter</code> doesn't have much state in it to merit the effort, but the idea here is to showcase how to do it easily so that you can apply it to more pertinent objects such as graphic elements and symbols.<br /><br /><p><br />The <code>WebLifecycle</code> defines 3 methods - <code>activate()</code>, <code>passivate()</code> and <code>destroy()</code> - which are called at different stages of the request / session lifecycle by the ADF.
The <code>passivate()</code> method is called after a request has been serviced giving you the opportunity to serialize server objects to strings. The <code>activate()</code> method is called before servicing the request where you can deserialize the strings so that they are available as live objects when you perform the business tasks. And finally the <code>destroy()</code> is called when the session is being terminated for you to perform the necessary cleanup.<br /><br /><p><br />Below is the code which extends the <code>CountFeatures</code> class to participate in this lifecycle:<br /><p><br /><pre><br />public class CountFeatures implements WebContextInitialize, WebContextObserver, <b>WebLifecycle</b> {<br /> ...<br /> //serialized SpatialFilter - valid after passivate() and before activate()<br /> String serializedSpatialFilter;<br /> //the spatial filter object - only valid between activate() and passivate()<br /> SpatialFilter filter;<br /><br /> public void init(WebContext webContext) {<br /> ...<br /> //create SpatialFilter and set spatial relationship<br /> filter = new SpatialFilter(agsctx.createServerObject(SpatialFilter.getClsid()));<br /> filter.setSpatialRel(esriSpatialRelEnum.esriSpatialRelContains); <br /> }<br /> <br /> public void <b>passivate</b>() { //serialize to strings<br /> serializedSpatialFilter = agsctx.saveObject(filter);<br /> }<br /> <br /> public void <b>activate</b>() { //deserialize to objects<br /> filter = new SpatialFilter(agsctx.loadObject(serializedSpatialFilter));<br /> }<br /> <br /> public void <b>destroy</b>() { //cleanup<br /> filter = null;<br /> serializedSpatialFilter = null;<br /> }<br /><br /> public String doCount() throws Exception {<br /> ...<br /> //spatial filter is already created - only need to set geometry to the current extent<br /> filter.setGeometryByRef(agsmap.getFocusMapExtent());<br /> ...<br /> return null;<br /> }<br />}<br /></pre><br /><p><br />The complete source code can be downloaded from <a 
href="http://www.geocities.com/javanoid_1/extend-adf-3.zip">here</a>.<br /><p><br />It's important to note that the loads still create new instances of the objects on the server. So there are no performance benefits to saves and loads versus new instance creations. The benefit is that you don't have to worry about persisting the state of the object yourself. The server will do that work for you.<br /><p><br />That does it for this trilogy (ok so that was blatant abuse of that word - but hey, who's to say... It's my world around here :)<br /><p>Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com1tag:blogger.com,1999:blog-7669070449985794434.post-16570305760696468482005-09-10T07:03:00.000-07:002007-04-13T21:49:55.498-07:00JDK 5 concurrency API: group / batch thread poolFirst up, let me say that the new concurrency API in JDK 5 is indeed a boon for the Java community, especially for developers (including yours truly) who before this indulged in threads and concurrent programming only sparingly. Not because we didn’t know how to do it but because getting it right was quite an ordeal. The concurrency API should surely make concurrent programming, the bastion of only a select few so far, more "mainstream".<br /><br />So here’s my scenario: I need a group / batch of tasks to execute concurrently and, additionally, I need to wait until all of them have finished executing before moving forward.<br /><br />Leveraging the new concurrency API, I can use the <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/Executors.html">Executors</a> factory to create a new thread pool. (A thread pool being an instance of <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/ExecutorService.html">ExecutorService</a>.) To this pool I can submit the tasks of my batch, which will be executed according to the policies of the pool.
Then if I want to wait on all of them to complete, I need to call <code>shutdown()</code> followed by <code>awaitTermination()</code>. With this my code will indeed block until all tasks have been executed, but the problem is that the thread pool no longer accepts any new tasks. So for my next batch of tasks I need to create a new thread pool all over again - which obviously is unneeded and expensive.<br /><br />All said and done, I need an <code>awaitExecution()</code> method which, like <code>awaitTermination()</code>, blocks until all tasks have completed but, unlike the <code>shutdown() + awaitTermination()</code> combo, does not reject new tasks.<br /><br />Below is a simple wrapper with the <code>awaitExecution()</code> method included. You can of course use any of the extension patterns - decorator, adapter, etc. - for a more refined solution.<br /><pre><br />public class GroupThreadPool {<br /> protected ExecutorService pool;<br /> protected ArrayList<Future> futures = new ArrayList<Future>();<br /><br /> public GroupThreadPool(int poolSize) {<br /> pool = Executors.newFixedThreadPool(poolSize);<br /> }<br /><br /> public void submit(Runnable command) {<br /> futures.add(pool.submit(command));<br /> }<br /><br /> public void <strong>awaitExecution</strong>() {<br /> try {<br /> for (Iterator<Future> iter = futures.iterator(); iter.hasNext(); ) {<br /> iter.next().get(); //blocking call<br /> }<br /> } catch (Exception ignore) {<br /> } finally {<br /> futures.clear();<br /> }<br /> }<br />}</pre><br />The user creates this <code>GroupThreadPool</code> just once, calls <code>submit()</code> to submit the various tasks in a batch, and then calls <code>awaitExecution()</code> to block until all tasks have executed.
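<br /><br />To make the usage concrete, here is a small self-contained demo. The <code>GroupThreadPool</code> class from above is repeated inline (with the generics spelled out) so the snippet compiles on its own; the task counts are arbitrary:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class GroupThreadPoolDemo {

    // Same wrapper as in the post, trimmed to what the demo needs.
    static class GroupThreadPool {
        private final ExecutorService pool;
        private final List<Future<?>> futures = new ArrayList<Future<?>>();

        GroupThreadPool(int poolSize) {
            pool = Executors.newFixedThreadPool(poolSize);
        }

        void submit(Runnable command) {
            futures.add(pool.submit(command));
        }

        // Blocks until every submitted task has run, then readies the
        // wrapper for the next batch -- the pool itself stays open.
        void awaitExecution() {
            try {
                for (Future<?> f : futures) {
                    f.get(); // blocking call
                }
            } catch (Exception ignore) {
            } finally {
                futures.clear();
            }
        }

        void shutdown() {
            pool.shutdown();
        }
    }

    // Runs two batches through ONE pool and returns the total task count.
    public static int runBatches() {
        final AtomicInteger done = new AtomicInteger();
        GroupThreadPool pool = new GroupThreadPool(4);

        for (int i = 0; i < 10; i++) { // batch 1
            pool.submit(new Runnable() {
                public void run() { done.incrementAndGet(); }
            });
        }
        pool.awaitExecution(); // all 10 tasks have finished here

        for (int i = 0; i < 5; i++) { // batch 2: same pool, no re-creation
            pool.submit(new Runnable() {
                public void run() { done.incrementAndGet(); }
            });
        }
        pool.awaitExecution(); // all 15 tasks have finished here

        pool.shutdown();
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println("tasks completed: " + runBatches()); // prints 15
    }
}
```

For what it's worth, JDK 5's <code>ExecutorService</code> also defines <code>invokeAll()</code>, which blocks until a batch of <code>Callable</code>s completes without shutting the pool down - worth comparing against this wrapper.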
You can continue to use the same <code>GroupThreadPool</code> object to execute subsequent batches.<br /><br />The implementation adds the submitted tasks to a list of <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/Future.html">Future</a>s. To block until all tasks have completed, it calls <code>get()</code> on each <code>Future</code>, which is itself a blocking operation. So <code>awaitExecution()</code> returns only after all tasks have been executed, but before returning it clears the list of <code>Future</code>s to accept the next batch of tasks.<br /><br />I would love suggestions / feedback on this implementation. Is there a better approach? Also, is this a common enough use case to merit the inclusion of <code>awaitExecution()</code> in <code>ExecutorService</code> itself?Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com0tag:blogger.com,1999:blog-7669070449985794434.post-29167280047446127592005-08-22T12:27:00.000-07:002007-04-13T21:49:44.377-07:00Extend the ArcGIS ADF with POJOs - Part IIIn <a href="http://www.jroller.com/page/javanoid?entry=extend_the_arcgis_adf_with">Part I</a> we discussed how you could implement GIS functionalities in POJOs and plug them into the ADF. In this part we'll extend the POJO a little further.<br /><br />In Part I, the <code>CountFeatures</code> object calculated and updated the feature count on a client interaction such as a button click. Suppose that we now need this object to recalculate the count automatically whenever the current extent of the map changes or the map refreshes due to some other action.<br /><br />The ADF provides a very simple way to do this. Objects can register themselves as observers of the <code>WebContext</code>, and whenever the context is refreshed (by virtue of the user calling the <code>refresh()</code> method on the context), all observers will be notified of this event and each observer can act upon it individually.
This way we have loosely coupled objects reacting together to the context refresh.<br /><br />With this background, let's now extend our <code>CountFeatures</code> class to implement this behavior.<br /><pre><br />public class CountFeatures implements WebContextInitialize, <strong>WebContextObserver</strong> {<br /><br /> public void init(WebContext context) {<br /> ...<br /> <strong>context.addObserver(this);</strong><br /> }<br /><br /> public void <strong>update</strong>(WebContext context, Object arg) {<br /> <strong>doCount();</strong> //perform the business action on update<br /> }<br /> ...<br />}</pre><br />First up, all observers of the <code>WebContext</code> need to implement the <a href="http://edndoc.esri.com/arcobjects/9.0/Java/webcontrols/com/esri/arcgis/webcontrols/data/WebContextObserver.html">WebContextObserver</a> interface. Next, they register themselves as observers of the context by calling the <code>addObserver()</code> method on the context. Finally, on every context refresh, the ADF calls the <code>update()</code> method of the <code>WebContextObserver</code> interface and the object performs its business action in response. In this case we simply call the <code>doCount()</code> method, which recalculates the feature count of the updated map. This ensures that whenever the context refreshes (for example when the user zooms or pans), this object will recalculate and display the new count to the user.<br /><br />As simple as that. Apart from a few modifications to the Java code, nothing else needs to change from the Part I source code. The JSP as well as the configuration files remain unchanged. 
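The register/refresh/update interplay described above is the classic observer pattern. As a plain-Java sketch of the same flow (the names <code>ObservableContext</code>, <code>ContextObserver</code> and <code>CountingObserver</code> are simplified stand-ins of my own, not the actual ESRI types):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the WebContextObserver contract; not the ESRI API.
interface ContextObserver {
    void update(ObservableContext context, Object arg);
}

// Simplified stand-in for the WebContext in its role as the Observable.
class ObservableContext {
    private final List<ContextObserver> observers = new ArrayList<ContextObserver>();

    void addObserver(ContextObserver observer) {
        observers.add(observer);
    }

    // Mirrors WebContext.refresh(): notify every registered observer.
    void refresh(Object arg) {
        for (ContextObserver observer : observers) {
            observer.update(this, arg);
        }
    }
}

// Analogue of CountFeatures: redoes its work on every refresh.
class CountingObserver implements ContextObserver {
    int updates = 0;

    public void update(ObservableContext context, Object arg) {
        updates++; // stand-in for doCount()
    }
}

public class ObserverSketch {
    static int runDemo() {
        ObservableContext context = new ObservableContext();
        CountingObserver counter = new CountingObserver();
        context.addObserver(counter); // what init(WebContext) does above
        context.refresh(null);        // e.g. the user zooms
        context.refresh(null);        // e.g. the user pans
        return counter.updates;
    }

    public static void main(String[] args) {
        System.out.println("observer updated " + runDemo() + " times");
    }
}
```

The observer never knows who triggered the refresh; it only reacts to it, which is exactly the loose coupling the ADF exploits.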
You can download all the source code (including the unchanged JSP) for this part from <a href="http://www.geocities.com/javanoid_1/extend-adf-2.zip">here</a>.<br /><br />In Part III we'll extend <code>CountFeatures</code> further by implementing the <code>WebLifecycle</code> interface.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com0tag:blogger.com,1999:blog-7669070449985794434.post-5327479609420616522005-08-09T01:08:00.000-07:002007-04-13T21:49:33.172-07:00Extend the ArcGIS ADF with POJOs - Part IThis is the first of a 3-part series where we'll discuss how to add custom GIS functionality to the ADF as POJOs (Plain Old Java Objects). To accomplish this we'll be leveraging the <a href="http://www.jroller.com/page/javanoid?entry=inversion_of_control_in_the">IOC inherent in the ADF</a> discussed earlier. We talked about 3 very important interfaces in the IOC discussion - <a href="http://edndoc.esri.com/arcobjects/9.0/Java/webcontrols/com/esri/arcgis/webcontrols/data/WebContextInitialize.html">WebContextInitialize</a>, <a href="http://edndoc.esri.com/arcobjects/9.0/Java/webcontrols/com/esri/arcgis/webcontrols/data/WebContextObserver.html">WebContextObserver</a> and <a href="http://edndoc.esri.com/arcobjects/9.0/Java/webcontrols/com/esri/arcgis/webcontrols/data/WebLifecycle.html">WebLifecycle</a>. Putting these 3 interfaces into practice will be central to the 3 parts respectively. In this part we'll make use of the <code>WebContextInitialize</code> interface.<br /><br />We'll keep the functionality to be implemented quite simple: count the number of features of a given layer in the map's current extent.<br /><br />To implement this scenario our POJO will need a few basic properties and methods - a read-only <code>count</code> property, a read/write <code>layerId</code> property representing the layer whose features are to be counted, and a business method <code>doCount()</code> which implements the business task at hand. 
With this said, the skeleton of the class (we'll call it <code>CountFeatures</code>) will be as such:<br /><pre><br />public class CountFeatures {<br /><br /> //properties<br /> int count;<br /> int layerId;<br /> public int getCount() { return count; }<br /> public int getLayerId() { return layerId; }<br /> public void setLayerId(int layerId) { this.layerId = layerId; }<br /><br /> //business method<br /> public String doCount() {<br /> ...<br /> ...<br /> count = ...;<br /> return null;<br /> }<br />}</pre><br />You might have noticed that the <code>doCount()</code> method returns a <code>String</code>. This is because the ADF is JSF-based: when the user clicks on, say, a command button on a web page, it results in a call to <code>doCount()</code>. Based on the return value of this method, the JSF framework decides which page to navigate to. Returning <code>null</code> ensures that the webapp stays on the same page.<br /><br />OK, so now we have the skeleton in place, but we also need access to the ArcGIS Server and the underlying ArcObjects to perform the GIS task at hand. This is where <code>WebContextInitialize</code> comes into the picture:<br /><pre><br />public class CountFeatures <strong>implements WebContextInitialize</strong> {<br /><br /> //the context associated with this object<br /> AGSWebContext agsctx;<br /> public void <strong>init(WebContext context)</strong> {<br /> agsctx = (AGSWebContext)context;<br /> }<br /> ...<br />}</pre><br />The ADF will call the <code>init(WebContext)</code> method of objects implementing <code>WebContextInitialize</code> immediately after the object is instantiated. This gives the object access to the <code>WebContext</code>. The <code>AGSWebContext</code> (which is the actual implementation of the <code>WebContext</code> that we work with) maintains references to the ArcGIS Server objects and ArcObjects (such as <code>IMapServer</code>, <code>IMapDescription</code>, etc.) 
as well as to other ADF objects (such as <code>AGSWebMap</code>). This implies that by virtue of gaining access to the <code>AGSWebContext</code> our custom object now has a hook into the whole of ArcObjects as well as the ADF - basically everything that you need to accomplish your GIS task at hand.<br /><br />With access to everything that our class needs, the business logic can now be implemented in the <code>doCount()</code> method to perform the count operation and set the result to the <code>count</code> variable.<br /><br />That's it - our Java code ends here. All that is left to do now is to register this object as a managed attribute of the <code>WebContext</code> so that the ADF can automatically instantiate the object on demand as well as call the <code>init(WebContext)</code> method immediately after instantiation. This is accomplished by adding the following lines of XML to <code>managed_context_attributes.xml</code> which you can find in the <code>/WEB-INF/classes</code> folder of your ADF webapp:<br /><pre><br /><managed-context-attribute><br /> <name>countFeatures</name><br /> <attribute-class>custom.CountFeatures</attribute-class><br /> <description>counts features of a given layer in the current extent...</description><br /></managed-context-attribute></pre><br />With this done, you can now access our custom object by name (<code>countFeatures</code> in this case).<br /><br />You can download the full source <a href="http://www.geocities.com/javanoid_1/extend-adf-1.zip">here</a>. In addition to the Java code, the ZIP file also contains a sample JSP. 
The JSP has a command button to trigger the business method, a dropdown to choose the layer and a text output to display the count.<br /><br />In conclusion I'd like to mention that while admittedly the functionality that we have implemented here is trivial, you can essentially follow the same programming model to implement your own functionality as well: POJOs which implement <code>WebContextInitialize</code>.<br /><br />In Part II we'll extend this same object to be an observer of the context and in Part III we'll make this object participate in the ADF lifecycle.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com0tag:blogger.com,1999:blog-7669070449985794434.post-62896637602555558152005-08-07T22:24:00.000-07:002007-04-13T21:49:06.985-07:00Inversion of Control in the ArcGIS Java ADFMartin Fowler in a recent blog gave a good short explanation of the <a href="http://www.martinfowler.com/bliki/InversionOfControl.html">inversion of control</a> pattern... In ESRI's ArcGIS Java ADF we employ this approach in a few places.<br /><ol><br /> <li>The <a href="http://edndoc.esri.com/arcobjects/9.0/Java/webcontrols/com/esri/arcgis/webcontrols/data/WebContextInitialize.html">WebContextInitialize</a> interface declares an <code>init(WebContext)</code> method. The ADF will call this method on objects which implement this interface and register themselves as attributes of the <a href="http://edndoc.esri.com/arcobjects/9.0/Java/webcontrols/com/esri/arcgis/webcontrols/data/WebContext.html">WebContext</a>. This method will be called immediately after they are registered with the <code>WebContext</code>. 
Users who want access to the associated WebContext object or need to perform initialization tasks should implement this interface.</li><br /> <li>The <a href="http://edndoc.esri.com/arcobjects/9.0/Java/webcontrols/com/esri/arcgis/webcontrols/data/WebLifecycle.html">WebLifecycle</a> interface declares methods which will be called by the ADF at various phases of the webapp's lifecycle. Users can implement activation, passivation and destroy logic in these methods. This interface is most relevant when using pooled objects, since users may want to rehydrate and dehydrate the states of the server objects as the ADF reconnects to and releases its connection to the ArcGIS Server on every request.</li><br /> <li>The <a href="http://edndoc.esri.com/arcobjects/9.0/Java/webcontrols/com/esri/arcgis/webcontrols/data/WebContextObserver.html">WebContextObserver</a> interface declares an <code>update(WebContext webContext, Object args)</code> method. Objects implementing this interface can register themselves as observers of the <code>WebContext</code> by calling the <code>addObserver(WebContextObserver)</code> method. After users perform operations which change the state of the objects that they work with (for example zoom to a different extent, add a graphic element, etc.), they call the <code>refresh()</code> method on the <code>WebContext</code>. When this happens, the ADF iterates through all the registered observers of the context and calls their <code>update()</code> methods. This ensures loose coupling among the various objects but at the same time gives these loosely coupled objects an opportunity to stay in sync with the changed state of the app. 
This is a classic implementation of the observer pattern with the <code>WebContext</code> acting as the <code>Observable</code> object.</li><br /></ol><br />With the advent of JDK 5 annotations, it might be convenient for users if #1 could be achieved by simply annotating a field or a setter method with an <code>@Resource</code>-like annotation. The ADF, on encountering this annotation on a WebContext field or setter method, could inject the context into the interested object. Further, users could perform initialization tasks in an arbitrary method annotated with the <code>@InjectionComplete</code> annotation, and the ADF would call this method immediately after injecting the WebContext. (Both these annotations are proposed by JSR 250 - Common Annotations.)<br /><br />#2 could also be achieved through annotations. Much like how EJB3 is proposing <code>@PostActivate</code>, <code>@PrePassivate</code>, et al., users could annotate the lifecycle methods with annotations such as <code>@OnActivate</code>, <code>@OnPassivate</code> and <code>@OnDestroy</code>. This would relieve them of having to implement interfaces for lifecycle callbacks, and further, they could choose to participate in only those phases of the ADF lifecycle that make business sense for their objects.<br /><br />Comments / feedback welcome.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com0tag:blogger.com,1999:blog-7669070449985794434.post-77883618537629886012005-08-02T20:52:00.000-07:002007-04-13T21:48:57.238-07:00Adding layers dynamically in the ArcGIS Java ADFThere have been many <a href="http://forums.esri.com/Thread.asp?c=158&f=1699&t=164619&mc=8">questions</a> about adding layers dynamically in the ADF... And of course the requirement is that the added layer should reflect not only on the map but also on the TOC, layer dropdowns, etc.<br /><br />When working with non-pooled objects, there's a straightforward way of doing this. 
Look at the source code below:<br /><pre><br />AGSWebContext agsCtx = ...; //get hold of the AGSWebContext<br />AGSWebMap agsMap = (AGSWebMap)agsCtx.getWebMap();<br /><br />//Step 1<br />agsCtx.applyDescriptions();<br /><br />//Step 2<br />MapServer mapso = new MapServer(agsCtx.getServer());<br />IMap map = mapso.getMap(agsMap.getFocusMapName());<br />ILayer layer = ...; //create the layer<br />map.addLayer(layer);<br /><br />//Step 3<br />agsCtx.reloadDescriptions();</pre><br />Let's discuss the 3 steps now:<br /><ul><br /> <li> <strong>Step 1:</strong> Before making stateful changes to the object graph (like adding a new layer in this case) you want to apply the current state of the <code>MapDescriptions</code> to the object graph. Calling the <code>applyDescriptions()</code> method on the <code>AGSWebContext</code> does exactly that.</li><br /> <li> <strong>Step 2:</strong> Now that you have the object graph in the current state, you can make the modifications there. In this case we add a new layer to the map.</li><br /> <li> <strong>Step 3:</strong> Once you have made changes to the object graph you want to reload the <code>MapDescriptions</code> to reflect the changed state and additionally, you also want the web controls to display this new state (in this case display the layer on the map control, on the TOC control as well as on any other components working with layers). A single call to the <code>reloadDescriptions()</code> method on the <code>AGSWebContext</code> will do all of this for you.</li><br /></ul><br />And that's about it! This sequence of steps holds true for <em>any</em> stateful changes that you want to make to non-pooled objects. 
The 3-word mantra is APPLY-CHANGE-RELOAD.<br /><br />If you want to work with dynamic layers (or make any stateful changes) in the pooled context, there is indeed more work to do, because you are now sharing the server object with others and you want to return it to the pool in the same state in which you received it. You need to get access to the server object, apply the current <code>MapDescriptions</code> to the object graph, make the changes to the object graph, reflect the changes in the <code>MapDescriptions</code> and the web controls, undo the changes to the graph and then return the object to the pool. You can check out the <a href="http://edndoc.esri.com/arcobjects/9.0/Samples/Server_Development/Web_Applications/DynamicLayers/DynamicLayers.htm">dynamic layers sample on EDN</a> to see this use case in action.Keyur Shahhttp://www.blogger.com/profile/12867300366770352938noreply@blogger.com1