

Showing posts from 2011

Advantages of working remote

There are many disadvantages of working remote, but there are many advantages too:

- You save commute time, so you get more time with family, if you are disciplined enough to start and stop work at office hours (that rarely happens every day, but it is still better than getting stuck in traffic).
- Fewer people distract you by just walking into your cubicle to discuss office politics.
- Fewer people come and ask you things that they could just google for themselves.
- Due to the above points you get more work done in less time.
- You RTFM more than usual.
- If you are stuck on an issue, you are the only one who can find the solution, as the luxury of walking into someone's cubicle for help is gone. In a sense it's a double-edged sword, but as you are left with no choice, in the end you come out a winner and become more and more of a problem solver on your own. This way you tend to research things in detail and grow your arsenal of skills.

openoffice and NFS file saving issue

I recently integrated a file preview application into our application with the team, so users can now preview most files without downloading them. The hard part was dealing with NFS issues due to locking and caching. We chose buy over build for the file preview and bought a third-party service; let's call it APreview. It had an HTTP API where you pass the source and target file paths. It would have been much better if we could stream the input and it could stream the output, but that option was not there. Because we can only pass paths to it, the natural solution was to use NFS paths. So we ran into two major issues: 1) APreview internally uses openoffice to convert Word/PPT/XLS files to PDF and then converts the PDF to SWF. openoffice has some issues with writing to NFS: we could use vi and other tools to write files, but openoffice would just refuse to save the file as PDF. Finally I found that commenting these two lines in /usr/lib64/openof…

Disadvantages of working remote

Just ranting out loud: I have been working remote for the past two years for a startup, and while there are many advantages of working remote, there are many disadvantages too:

- It's hard to chase people over the phone. People won't pick up the call or reply to instant messages, so you are stuck in your chain of thoughts, cursing the monitor.
- You miss the coffee talk. You miss what's going on in the office and really don't know what's going on in the company.
- In meetings you could be writing code, and it's easy to get distracted over the phone.
- Some people prefer to chat rather than talk, and that's a pain because you already miss the social connection, and typing is always a pain.
- You miss all the company dinners and lunches.
- Because you save time on travel, you tend to overwork. Worse, if you are working in a different timezone, people will disturb you off hours.
- Lots of time is spent trying to screen share or set up a GoToMeeting.
- People will prefer to talk to the person in house, and unless you…

Jersey: writing an authentication filter

It seems there are two ways to add authentication to Jersey REST APIs.

1) You can add a servlet filter:

public class RestAuthenticationFilter implements Filter {

    @Override
    public void destroy() {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Cast to the HTTP flavors (the original excerpt used request/response directly)
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        try {
            User user = BasicAuthHelper.authenticateUser(request);
            if (user == null) {
                response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            } else {
                request.setAttribute("user", user);
                chain.doFilter(request, response);
            }
        } catch (ApplicationException e) {
            response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
        }
    }

    @Override
    public void init(FilterConfig config) throws ServletException {
    }
}

2) You can do it using…
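The excerpt cuts off before the second option. In Jersey 1.x, a common way to do it without a servlet filter is a ContainerRequestFilter; here is a minimal sketch, assuming Jersey 1.x and a hypothetical BasicAuthHelper overload that takes the Authorization header:

import com.sun.jersey.spi.container.ContainerRequest;
import com.sun.jersey.spi.container.ContainerRequestFilter;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;

public class RestSecurityFilter implements ContainerRequestFilter {

    @Override
    public ContainerRequest filter(ContainerRequest request) {
        // BasicAuthHelper is the same helper used in the servlet filter above (assumed)
        User user = BasicAuthHelper.authenticateUser(request.getHeaderValue("Authorization"));
        if (user == null) {
            // Abort the request with 401 before it reaches any resource method
            throw new WebApplicationException(Response.Status.UNAUTHORIZED);
        }
        return request;
    }
}

The filter would be registered through the com.sun.jersey.spi.container.ContainerRequestFilters init-param of the Jersey servlet.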

Jersey Rest API html documentation

There are various ways to generate REST API documentation, but how can you do it with minimal effort? Unless you are looking for very sophisticated documentation like Twitter's https://dev.twitter.com/docs/api, the below should suffice; and even if you want to generate Twitter-like documentation, 70% of what I highlight here would still apply. Jersey supports WADL generation by default. WADL is a machine-readable REST API description, similar to what WSDL was to SOAP. To get the default WADL generated by Jersey, just call http://localhost:8080/application.wadl. The WADL is an XML document, so all you need now is to render it with a stylesheet. WADL has documentation tags that can be used to document a resource, but unfortunately Jersey by default doesn't generate documentation tags. So to get real HTML documentation the steps are: 1) Document your REST services using Javadocs on the resource class and resource methods. The downside of using Javadocs i…
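For step 1, the Javadocs live on the ordinary Jersey resource. A small hypothetical example of the kind of comments that Jersey's extended WADL generation (e.g. the maven-wadl-plugin) can turn into doc tags:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

/**
 * Operations on user accounts.
 */
@Path("/users")
public class UserResource {

    /**
     * Returns the user with the given id as JSON.
     *
     * @param id the unique user id
     * @return the user record
     */
    @GET
    @Path("{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getUser(@PathParam("id") String id) {
        return "{\"id\":\"" + id + "\"}";
    }
}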

Generating pdf out of html using Java

I had a requirement to generate PDF out of HTML content. Users can add any kind of HTML content in a rich text editor, from a few lines to a full paragraph, as notes for a file and save it on the server. I then had to notify other users when a user adds a note. In the past I have generated PDFs for reports, but that was for structured data straight out of a database, so I could easily use something like iText or JasperReports; this one was an interesting problem because the user can enter any free-form HTML in the editor. Ultimately the solution came down to: convert the HTML added by the user into XHTML using JTidy, then use Flying Saucer's ITextRenderer (http://code.google.com/p/flying-saucer/) to generate the PDF out of the XHTML. Here is a sample code:

public class TextSectionConverter {

    private String notesContent;

    public TextSectionConverter(String notesContent) {
        this.notesContent = notesContent;
    }

    public void writeAsPdf(FileOutputStream fos) throws Exception {
        conve…
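The sample above is truncated; a self-contained sketch of the same JTidy plus Flying Saucer pipeline might look like this (class and method names are mine, not the original's):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import org.w3c.tidy.Tidy;
import org.xhtmlrenderer.pdf.ITextRenderer;

public class HtmlToPdfConverter {

    public void writeAsPdf(String html, FileOutputStream fos) throws Exception {
        // Step 1: tidy the free-form HTML into well-formed XHTML
        Tidy tidy = new Tidy();
        tidy.setXHTML(true);
        ByteArrayOutputStream xhtml = new ByteArrayOutputStream();
        tidy.parse(new ByteArrayInputStream(html.getBytes("UTF-8")), xhtml);

        // Step 2: render the XHTML to PDF with Flying Saucer's ITextRenderer
        ITextRenderer renderer = new ITextRenderer();
        renderer.setDocumentFromString(xhtml.toString("UTF-8"));
        renderer.layout();
        renderer.createPDF(fos);
    }
}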

biggest concern with moving your applications to the cloud?

Wow, I wrote about enterprise cloud adoption and then found this poll on my LinkedIn profile that validates my thinking that security and reliability are the biggest concerns for cloud adoption by enterprises.

Enterprise customers and Cloud Adoption

A big fear among enterprise customers who want to adopt the cloud: Is my data secure? What if the site goes down for 3-4 hours? What if this startup shuts down, what will happen to my data? Recent downtimes from big companies, as shown below, instill fear in mission-critical businesses wanting to adopt the cloud. Microsoft http://techcrunch.com/2011/09/09/microsofts-cloud-briefly-evaporates-leaves-up-to-365-million-users-without-access-for-four-hours/ Amazon http://eu.techcrunch.com/2011/04/21/amazon-ec2-goes-down-taking-with-it-reddit-foursquare-and-quora/ Google App Engine http://techcrunch.com/2011/09/09/google-explains-its-google-docs-outage/ While moving to the cloud makes sense economically, one bad day can send people back to the traditional way of managing their own infrastructure. Think of a hospital storing all its records in the cloud; it can't afford a 2-3 hour downtime. There is a better solution, and it is adopting the hybrid model, and companies…

Making Junit tests faster

I had a batch of JUnit tests that was taking close to 20 minutes, and it was wasteful to wait every time before checking in code. One trick I found was to use the forkmode="once" property; using it reduced the time to 5 minutes 8 seconds.

<junit tempdir="build" printsummary="on" fork="yes" forkmode="once"
       haltonfailure="${test.haltonfailure}" failureproperty="tests.failed" showoutput="false">

Earlier ant was forking a JVM per test, and now it forks one JVM for all tests, so it is much faster. Here is the documentation from the ant junit task on this property: Controls how many Java Virtual Machines get created if you want to fork some tests. Possible values are "perTest" (the default), "perBatch" and "once". "once" creates only a single Java VM for all tests while "perTest" creates a new VM for each TestCase class. "perBatch"…

Selenium and ExtJS

A trick for testing ExtJS with Selenium is to use cssSelectors. An element can have more than one CSS class, and you don't need to define any style for a class, so a marker class can be a good locator for the element, and it is faster than XPath on IE. You can define the selector class like this:

tbar: {
    xtype: 'toolbar',
    items: [
        {
            xtype: 'button',
            text: 'Send',
            cls: 'x-btn-text',
            overCls: 'x-btn-noicon',
            ctCls: 't-btn-yellow x-btn-over',
            iconCls: 't-send seleniumSendNote ',
            ...
            ...

and then in your test you can click the button with:

driver.findElement(By.cssSelector("button.seleniumSendNote")).click();

Selenium and ExtJS HtmlEditor

I had to add a selenium test for a page with the ExtJS HtmlEditor, and selenium wouldn't recognize it; even tests recorded with Selenium IDE wouldn't recognize it. The reason is that HtmlEditor uses a hidden textarea with a DIV on top of it to trap keystrokes. I tried lots of ways to set text into it, but it would complain about the component not being visible, among other things. Finally the only way I could do it was to execute JavaScript from WebDriver. Here is the code I used to set the text:

String notes = "This is a test note from selenium";
JavascriptExecutor js = (JavascriptExecutor) driver;
js.executeScript("Ext.getCmp('notes').setValue('" + notes + "')");

Java and lsof

Update: I found that using Runtime.exec was a bad idea, because if you have a 2GB VM footprint then the forked process requires 2GB of free memory in order to run the lsof command. We had earlier written a simple Python HTTP RPC server that allows us to execute native commands (like creating a hardlink or running gunzip) from Java, and a few days back I changed this code to delegate to the RPC call. So the new code looks like:

public void writeTopCommandOutput(Writer writer) throws IOException {
    String rpcRes = Util.doCommandRpc(rpcUrl, "", ListUtil.create("top", "-n", "2", "-b", "-d", "0.2"));
    writer.write(rpcRes);
}

public void writeLsofOutput(Writer writer) throws IOException {
    String pid = getJvmProcessid();
    if (pid != null) {
        pid = pid.trim();
        String rpcRes = Util.doCommandRpc(rpcUrl, "", ListUtil.create("lsof", "-p", pid));
        writer.write(rpcRes);
    }
}

Programatically extracting quoted reply from an email

When files are uploaded to our cloud file server, we wanted to send a notification email per file, each with its own unique email address. (I will discuss how to have that many unique email addresses without creating a user on the mail server for each file, and how to scale the solution out, in a later blog.) People can just hit reply on the generated notification email and comment on the uploaded file. When the reply email reaches the server, we want to extract the comment the user added, stripping out the quoted reply inserted by the mail client, and attach the clean comment to the file. Seems like an easy problem, doesn't it? Unfortunately there is no easy way to detect the quoted reply in an incoming email, because different mail clients quote a reply in different ways, and on top of that, quoted replies in HTML email differ from plain-text quoted replies. Common markers include angle brackets ("> xxx zzz"), "---Original Message---", "On such-and-such day, so-and-so wrote:", and the various HTML reply structures.
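The post has no code at this point; as a rough illustration of the idea, here is a hedged sketch that cuts a plain-text body at the first of a few common quote markers (these patterns are examples and nowhere near exhaustive):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QuotedReplyStripper {

    // A few markers mail clients commonly insert above the quoted reply
    private static final Pattern[] QUOTE_MARKERS = {
        Pattern.compile("^-+ ?Original Message ?-+$", Pattern.MULTILINE),
        Pattern.compile("^On .{1,200}? wrote:$", Pattern.MULTILINE),
        Pattern.compile("^>", Pattern.MULTILINE)
    };

    public static String stripQuotedReply(String body) {
        int cut = body.length();
        for (Pattern p : QUOTE_MARKERS) {
            Matcher m = p.matcher(body);
            // Keep only the text above the earliest marker found
            if (m.find() && m.start() < cut) {
                cut = m.start();
            }
        }
        return body.substring(0, cut).trim();
    }
}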

Spring MVC and Unicode characters

We ran into an issue where a user tried entering the Danish character ø in his first name and it was not updating properly in LDAP. First we thought it was an LDAP issue, but then I found that on another page, where we use the DWR API, the same character was getting updated in LDAP properly. Finally we nailed it down to Tomcat/Spring: even after adding <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>, the character encoding was not being set properly on the request. Adding this filter solved the issue. It has to be the first filter in web.xml, otherwise it won't work; I had it as the second filter at first and wasted some time debugging that.

<filter>
    <!-- Filter to handle content encoding for UTF-8 encoding, this has to be the FIRST FILTER, do not move -->
    <filter-name>encodingFilter</filter-name>
    <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
    <init-param>…
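The excerpt cuts off at the init-param. CharacterEncodingFilter's documented parameters are encoding and forceEncoding, so the complete entry would plausibly continue like this (the filter-mapping is my addition):

    <init-param>
        <param-name>encoding</param-name>
        <param-value>UTF-8</param-value>
    </init-param>
    <init-param>
        <param-name>forceEncoding</param-name>
        <param-value>true</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>encodingFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>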

mysql execute immediate

I had a requirement to generate a sharded schema where the number of shards and the number of tables in each shard were dynamic. Basically we wanted to shard one table, so we ended up creating 8 schemas, each holding 8 tables that are copies of the same structure. Now I didn't want to hand-write the 64 table/schema creation statements, so I came up with this procedure that builds and executes SQL queries dynamically. Oracle's EXECUTE IMMEDIATE was so easy; MySQL is a little bit verbose.

drop procedure if exists create_rdb_tables;
delimiter #
create procedure create_rdb_tables()
begin
    declare v_max int unsigned default 9;
    declare v_counteri int unsigned default 1;
    declare v_counterj int unsigned default 1;
    while v_counteri < v_max do
        while v_counterj < v_max do
            set @sql_text := concat('drop table if exists metadata_rdb_schema', v_counteri, '.metadata_rdb_t', v_counterj, ';');
            prepare stmt from @sql_text;
…
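The excerpt stops right after the PREPARE. The rest of the pattern, reconstructed by me rather than copied from the original, is the standard MySQL prepared-statement dance plus loop bookkeeping:

            execute stmt;
            deallocate prepare stmt;
            -- build and execute the CREATE TABLE statement the same way here
            set v_counterj = v_counterj + 1;
        end while;
        set v_counterj = 1;
        set v_counteri = v_counteri + 1;
    end while;
end #
delimiter ;

call create_rdb_tables();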

ProcessId 911

Call it good luck, but I got process id 911 for java :). Hiding my laptop name to keep it anonymous.

Tomcat configurable session timeout per customer

We are a cloud file provider, geared mostly towards enterprise customers. We have a default session timeout of 6 hours for web UI access, and recently customers had a requirement to configure the session timeout themselves. As we host multiple customers on one node, this was an interesting requirement, and we were discussing all sorts of hacks until I landed on the HttpSession.setMaxInactiveInterval API. Now all we need to do is, upon successful login, check whether the admin has overridden the session timeout settings for this enterprise and set that on the session using the above API.
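A minimal sketch of that login hook; the Enterprise type and its timeout lookup are hypothetical, and note that setMaxInactiveInterval takes seconds:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class SessionTimeoutHelper {

    // Called after a successful login (Enterprise and its getter are assumptions)
    public void applyTenantTimeout(HttpServletRequest request, Enterprise enterprise) {
        HttpSession session = request.getSession();
        Integer overrideMinutes = enterprise.getSessionTimeoutMinutes(); // null = keep default
        if (overrideMinutes != null) {
            session.setMaxInactiveInterval(overrideMinutes * 60); // API expects seconds
        }
    }
}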

Tomcat printing catalina pid

If you are hosting more than one tomcat on a physical box in production, a lot of the time you will want to see the process id of a running instance. We dump jstack/top command output into a folder every 5 minutes, and this helps in correlating it. Here is sample code to dump the tomcat pid; it reads the pid file that the CATALINA_PID environment variable points to.

public String getJvmProcessid() throws IOException {
    String pid = null;
    File pidFile = new File(System.getenv("CATALINA_PID"));
    if (pidFile.exists()) {
        FileInputStream fin = new FileInputStream(pidFile);
        List lines = IOUtils.readLines(fin);
        fin.close();
        pid = StringUtils.join(lines.toArray());
    }
    return pid;
}

Mysql auto audit column

Thanks to my colleague Deepak, today I learnt a new thing. If you want a column in your table like lastModifiedTime that is set when the row is created and is also auto-updated whenever someone updates the row, traditionally the only way to do this was a trigger. I have used Oracle for 4-5 years, so I thought this was not possible in DDL and a trigger was the only way. But MySQL has this magic; you can create a column like

lastModifiedTime TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,

and this column will hold the current datetime on insert and will get updated with the current datetime any time a mutation to that row occurs. Wow, no more triggers; the column is self-maintained, so no more worrying about some DBA disabling the trigger to get data inserted quickly.
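A quick demonstration with a throwaway table (my example, not from the post):

create table audit_demo (
    id int primary key,
    name varchar(64),
    lastModifiedTime timestamp default current_timestamp on update current_timestamp
);

insert into audit_demo (id, name) values (1, 'first');   -- lastModifiedTime is set to now
update audit_demo set name = 'renamed' where id = 1;     -- lastModifiedTime is bumped automatically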

Scaling graphs in graphite monitoring tool

We use graphite for monitoring trends. One requirement is to spot an outlier across all nodes. For example, we monitor apache threads at the front apache and the level-2 apache. The default graphs generated by graphite show the data, but with each node on a different scale; see the graph below, where open file handles on the different nodes are scaled differently. Every graph shows data, but just looking at them won't reveal the outliers, because a graph with a 2K upper limit looks similar to a graph with a 12K limit. Passing yMin=0&yMax=75 in the graphite URL did the trick. Here is an example of apache level-2 threads; with a common scale we can clearly see that nodes 3, 4 and 7 are the outliers.
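For illustration, yMin and yMax are ordinary parameters on graphite's render URL; the host and target below are made up:

http://graphite.example.com/render?target=servers.app*.apache.busyThreads&yMin=0&yMax=75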

Reliance / Tata Photon / MTS Blaze data card ubuntu 11 India

I am traveling in India and brought my laptop with me, as I was working from here for 2 weeks. My host OS is ubuntu 11, and it was a pain to get it connected to the internet here. As I would be traveling, the best solution was to get a prepaid datacard, since it works everywhere in India. So I took my laptop to Reliance World, and it was fun to see the IT people struggling to find "My Computer" in ubuntu. After 2 days they gave up, and I purchased the card anyway to give it a try myself. It turned out to be a 2-minute job to connect:
1) Go to System->Preferences->Network connections
2) Select "Mobile Broadband" and click Add
3) Follow the steps as shown in the screenshots below
4) Enter the username/password provided by the provider
5) That's it, you are all set. Enjoy!!

Java count open file handles

Encountered an issue in production where the JVM ran out of file handles due to a code bug. It took only five minutes for the file handles to build up, but had there been any trending of open file handles we would have caught it as soon as the release was pushed: on some nodes it didn't exhaust the file handles, but the number was high enough to raise suspicion. Now I could run lsof from cron, but I am not fond of crons, as they have to be configured manually, and if a box has 4 tomcats then you have to configure each of them on 20-30 nodes. So I wanted to get a count of open file handles every five minutes and push it to graphite for trending. Here is sample code to do it:

public long getOpenFileDescriptorCount() {
    OperatingSystemMXBean osStats = ManagementFactory.getOperatingSystemMXBean();
    if (osStats instanceof UnixOperatingSystemMXBean) {
        return ((UnixOperatingSystemMXBean) osStats).getOpenFileDescriptorCount();
    }
    return 0;
}

RateLimiting based on load on nodes

We are a cloud based file storage company and we allow many access points to the cloud. One of the access points is a Webdav API, and people can use any webdav client to access the cloud. But some of the webdav clients, especially on Mac OS, are really abusive. Though the user is not doing anything abusive, the webdav client does aggressive caching, so even when you are navigating a top directory it issues PROPFINDs at depth 5 or 6 to make the experience feel as seamless as navigating a local drive. This makes life miserable on the server, because from some clients we get more than 1000 requests in a minute; if 5-10 clients are doing webdav activity, that causes 100 or more PROPFINDs per second. Luckily the server is able to process these, but it hurts other activities, so we needed to rate limit this. Now, as the user is really not doing anything abusive, it would be bad to slow down or penalize the user in normal circumstances; however, if the server is under load then it w…
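The excerpt cuts off, but to illustrate the idea of throttling only when the node is under load, here is a hedged sketch; the threshold, the PROPFIND check and the 503 response are my choices, not necessarily what the post went on to describe:

import java.io.IOException;
import java.lang.management.ManagementFactory;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoadAwareThrottleFilter implements Filter {

    // Start shedding chatty webdav traffic above this load average (assumed value)
    private static final double LOAD_THRESHOLD = 8.0;

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        double load = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage();
        if (load > LOAD_THRESHOLD && "PROPFIND".equals(request.getMethod())) {
            // Under load: tell the aggressive client to back off instead of queueing it
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig config) {
    }

    @Override
    public void destroy() {
    }
}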

Quartz stop a job

Quartz did a good job implementing this concept. It was very easy to add this feature by implementing a base class that abstracts the details of the interrupt and having every job extend this class. If you can rely on thread.interrupt() then it's the best way to interrupt a job that is blocked on some I/O or native call; for a normal job a simple boolean flag would do the work. You need to use scheduler.interrupt(jobName, groupName); to interrupt a running Quartz job.

public abstract class BaseInterruptableJob implements InterruptableJob {

    private static final AppLogger logger = AppLogger.getLogger(BaseInterruptableJob.class);

    private Thread thread;

    @Override
    public void interrupt() throws UnableToInterruptJobException {
        logger.info("Interrupting job " + getClass().getName());
        if (thread != null) {
            thread.interrupt();
        }
    }

    @Override
    final public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            thread…
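The excerpt stops inside execute(). Presumably it records the current thread and delegates to the subclass; a hedged reconstruction (the abstract method name is my guess):

    @Override
    final public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            thread = Thread.currentThread();
            interruptableExecute(context); // subclass does the real work
        } finally {
            thread = null; // job finished, nothing left to interrupt
        }
    }

    // Subclasses implement the actual job logic here
    protected abstract void interruptableExecute(JobExecutionContext context)
            throws JobExecutionException;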

Pitfalls of defensive programming

We programmers sometimes add too much defensive code in order to protect ourselves from callers not asserting preconditions before making the call. For example, to save a file in some directory, we would first go and check whether the directory exists and, only if it exists, create the file. Now NFS is not designed to work at cloud scale, and we saw lots of calls just stuck in file.exists in thread dumps. The solution was simple: some of these directories could be created at tomcat startup, or the app node installer can create them. Code can also assume that the directory exists and, if it gets a FileNotFoundException, create it and retry the operation. Removing these defensive coding practices removed a lot of unnecessary stat calls on the filers and improved performance. This is just one example, but a similar pattern can be observed in other areas of the code and fixed. Defensive programming is good, but too much of it is bad and can be improved by making some assumptions or provid…
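A hedged sketch of the assume-and-retry pattern described above (my code, not from the post):

import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class OptimisticFileWriter {

    public OutputStream openForWrite(File file) throws IOException {
        try {
            // Optimistic path: assume the parent directory exists, saving a stat call
            return new FileOutputStream(file);
        } catch (FileNotFoundException e) {
            // Rare path: the directory was missing; create it and retry once
            file.getParentFile().mkdirs();
            return new FileOutputStream(file);
        }
    }
}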

Contextual Thread dumps

Due to some business policy changes we recently started seeing changes in the usage pattern of our application, leading to unexplained app node spikes. These spikes were temporary, and by the time we would go and try to take jstacks they might have disappeared. So we configured a quartz job to take a jstack every 5 minutes (a quartz job instead of cron, because cron needs to be manually configured on each node, and with tons of nodes ops was always missing or misconfiguring it), dump it into a folder, and keep the last 500 copies. That way I can go back and correlate what was going on in the tomcat at the time of the spike (I had to get lucky for the spike to happen while the quartz job was running, but I was lucky, as most spikes spanned 3-5 minutes). Now from those thread dumps I can figure out what was going on: how many threads are doing searches versus how many are coming from Webdav or how many are doing add file. But one question that kept coming up was: which customer…
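The excerpt cuts off, but one common way to make dumps contextual, sketched here as my assumption rather than the post's actual solution, is to rename the worker thread with request context so it shows up directly in the jstack output:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class ThreadContextNamingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        Thread current = Thread.currentThread();
        String originalName = current.getName();
        try {
            // Shows up as e.g. "http-8080-3 [customer=acme uri=/api/files]" in jstack
            current.setName(originalName + " [customer=" + request.getRemoteUser()
                    + " uri=" + request.getRequestURI() + "]");
            chain.doFilter(req, res);
        } finally {
            current.setName(originalName); // restore before the thread returns to the pool
        }
    }

    @Override
    public void init(FilterConfig config) {
    }

    @Override
    public void destroy() {
    }
}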

Jersey streaming binary data

In the previous post I showed how you can post binary data to a Jersey REST API. You can also use Jersey to serve files. Although that is better done by apache or nginx, sometimes you might want to serve thumbnails stored in a database out of a service and put varnish in front of the REST API to cache them. This is just a demonstration of using Jersey to serve binary data in streaming fashion.

@Path("/download-service")
public class DownaloadService extends SecureRestService {

    private static final AppLogger logger = AppLogger.getLogger(DownaloadService.class);

    @POST
    @Produces(MediaType.APPLICATION_OCTET_STREAM)
    public StreamingOutput getThumbnail(
            @FormParam("securityKey") final String securityKey,
            @FormParam("guid") final String guid) throws JSONException {
        return new StreamingOutput() {
            @Override
            public void write(OutputStream out) throws IOException {
                try {
                    if (!isAuthorized(securityKey)) {
                        response.sendErro…
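The excerpt truncates inside write(); a hedged completion of the method, where loadThumbnailBytes is a hypothetical lookup and the error handling is my choice:

            public void write(OutputStream out) throws IOException {
                try {
                    if (!isAuthorized(securityKey)) {
                        response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
                        return;
                    }
                    // Stream straight from storage to the response, no temp file
                    byte[] thumbnail = loadThumbnailBytes(guid);
                    out.write(thumbnail);
                    out.flush();
                } catch (Exception e) {
                    logger.error("Error streaming thumbnail " + guid, e);
                    throw new WebApplicationException(e);
                }
            }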

Jersey posting multipart data

This took me some time to figure out, mostly because I was including jersey-multipart-1.6.jar but not mimepull-1.3.jar. The intent is to upload a file using the REST API, passing meta attributes in addition to the file, and to stream the file instead of first storing it on the local disk. Here is some sample code:

@Path("/upload-service")
public class UploadService {

    @Context
    protected HttpServletResponse response;

    @Context
    protected HttpServletRequest request;

    @POST
    @Consumes(MediaType.MULTIPART_FORM_DATA)
    @Produces(MediaType.APPLICATION_JSON)
    public String uploadFile(@PathParam("fileName") final String fileName,
            @FormDataParam("workgroupId") String workgroupId,
            @FormDataParam("userId") final int userId,
            @FormDataParam("content") final InputStream content) throws JSONException {
        //.......Upload the file to S3 or netapp or any storage service
    }
}

Now to…
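The excerpt ends at "Now to", presumably the client side. A hedged sketch of posting the multipart request with the Jersey 1.x client, assuming jersey-multipart and mimepull on the classpath (URL and field values are made up):

import java.io.File;
import javax.ws.rs.core.MediaType;
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.multipart.FormDataMultiPart;
import com.sun.jersey.multipart.file.FileDataBodyPart;

public class UploadClient {

    public static void main(String[] args) {
        Client client = Client.create();
        FormDataMultiPart form = new FormDataMultiPart()
                .field("workgroupId", "wg-1")
                .field("userId", "42");
        // The file content goes under the "content" form field, streamed from disk
        form.bodyPart(new FileDataBodyPart("content", new File("/tmp/sample.txt")));
        ClientResponse response = client.resource("http://localhost:8080/upload-service")
                .type(MediaType.MULTIPART_FORM_DATA_TYPE)
                .post(ClientResponse.class, form);
        System.out.println("status=" + response.getStatus());
    }
}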

Logging to Graphite monitoring tool from java

We use Graphite as a tool for monitoring some stats and watching trends. One requirement is to monitor the impact of new releases as a build is deployed to app nodes, to see things like: 1) Has the memcache usage increased? 2) Has the number of Java exceptions gone up? 3) Is the app using more tomcat threads? Here is a screenshot. We changed the installer to log a deploy event when a new build is deployed, and I wrote a simple spring bean to log graphite events from Java. Logging to graphite is easy: all you need to do is open a socket and send lines of events.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.util.HashMap;
import java.util.Map;

public class GraphiteLogger {

    private static final Logger logger = LoggerFactory.getLogger(GraphiteLogger.class);

    private String graphiteHost;
    private int graphitePort;

    public String getGraphiteHost() {
        return graphiteHost;
    }

    public void…
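The excerpt truncates in the accessors; the core send method would follow graphite's plaintext protocol of "metric value timestamp" lines. A hedged sketch of what it might look like:

    public void logToGraphite(Map<String, Long> stats) {
        long epochSeconds = System.currentTimeMillis() / 1000;
        try {
            Socket socket = new Socket(graphiteHost, graphitePort);
            try {
                Writer writer = new OutputStreamWriter(socket.getOutputStream());
                for (Map.Entry<String, Long> stat : stats.entrySet()) {
                    // Plaintext protocol: "<metric.path> <value> <unix-timestamp>\n"
                    writer.write(stat.getKey() + " " + stat.getValue() + " " + epochSeconds + "\n");
                }
                writer.flush();
            } finally {
                socket.close();
            }
        } catch (Exception e) {
            logger.error("Failed to log stats to graphite", e);
        }
    }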

Developing an Exception monitoring system for your cluster

A lot of times developers want to know: 1) How many exceptions are happening in a 100-node cluster? 2) When I do a new release, is the number of exceptions growing or shrinking? 3) What are the top 5 exceptions in the app that I need to focus on? 4) Overall, are there any nodes where some exception is happening a lot more than on other nodes? Getting all these statistics is tricky, as you have to parse logs, aggregate, and what not; all of this is messy and time consuming, and when nodes are added to or removed from the cluster you have to change the script. The solution I came up with was very simple: 1) 90% of the time exceptions are logged using the logger, so I overrode the logger.error method, take the first 100 chars of the exception stacktrace, and keep a counter in a static in-memory hashmap. 2) Some exceptions are never logged, so I wrote a servlet filter to catch them at the top level and log them to the logger; that way they get counted too. 3) I wrote a quartz job at the end of the day to…
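A hedged sketch of the counting part (step 1): the key derivation and the static map are as described, but the class and method names are mine:

import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class ExceptionStats {

    // Keyed by the first 100 chars of the stacktrace, as described in the post
    private static final Map<String, AtomicLong> COUNTERS =
            new ConcurrentHashMap<String, AtomicLong>();

    public static void count(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        String trace = sw.toString();
        String key = trace.substring(0, Math.min(100, trace.length()));
        AtomicLong counter = COUNTERS.get(key);
        if (counter == null) {
            AtomicLong fresh = new AtomicLong();
            AtomicLong existing = COUNTERS.putIfAbsent(key, fresh);
            counter = (existing != null) ? existing : fresh;
        }
        counter.incrementAndGet();
    }

    // The end-of-day job can read this to report the top offenders
    public static Map<String, AtomicLong> snapshot() {
        return COUNTERS;
    }
}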

Side effect of using spring bean injection in your application

I realized that a good side effect of using spring in your application is that you always create public setters (and optionally getters) for your dependencies, and sometimes "init" and "destroy" methods. A lot of times in production we had to change settings like the list of memcache servers (when you add or remove one) or the list of cassandra servers (when you move them around to better hardware). The problem is that we want to avoid tomcat restarts when this happens, as some of these settings are global to all tomcats and we can't restart all of them at once (it has to be a rolling restart). With spring it's easy: you can always write a jsp that calls the setter and changes the setting on the live bean, and you are done. In traditional applications where you don't use spring, it's entirely up to developers to expose the setter method; with spring this nice side effect has saved my ass a lot of times.
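A hedged sketch of such an ops jsp; the bean name and setter are hypothetical, and a page like this must of course be locked down to admins:

<%-- changeMemcacheServers.jsp: pokes the live bean, no tomcat restart needed --%>
<%@ page import="org.springframework.web.context.WebApplicationContext" %>
<%@ page import="org.springframework.web.context.support.WebApplicationContextUtils" %>
<%
    WebApplicationContext ctx =
            WebApplicationContextUtils.getRequiredWebApplicationContext(application);
    // "memcacheClientFactory" and setServerList are assumptions for illustration
    MemcacheClientFactory factory = (MemcacheClientFactory) ctx.getBean("memcacheClientFactory");
    factory.setServerList(request.getParameter("servers"));
    out.println("memcache server list updated on live bean");
%>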

Spring log query execution time for SimpleJdbcTemplate

Without going into too much depth, here is the way to do it. The challenge faced was that SimpleJdbcTemplate has no no-arg constructor, which matters when proxying it with CGLIB. In the future the intent of this interceptor is to detect the N+1 query problem by plugging in a lightweight instrumentation framework instead of logging query times in a log file.

public void setDataSource(DataSource dataSource) {
    final SimpleJdbcTemplate simpleJdbcTemplate = new SimpleJdbcTemplate(dataSource);
    Enhancer enhancer = new Enhancer();
    enhancer.setSuperclass(SimpleJdbcTemplate.class);
    enhancer.setCallback(new MethodInterceptor() {
        @Override
        public Object intercept(Object obj, Method method, Object[] args, MethodProxy proxy) throws Throwable {
            try {
                String methodName = method.getName();
                if (methodName.startsWith("query")
                        || methodName.startsWith("batchUpdate")
                        || methodName.startsWith("update")) {
                    String query = (String) args[0];
                    String prefix = extractQue…
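The excerpt truncates inside the interceptor. A hedged reconstruction of the timing logic, delegating every call to the real template built from the DataSource (the logger, field name and truncated helper are my guesses):

    enhancer.setCallback(new MethodInterceptor() {
        @Override
        public Object intercept(Object obj, Method method, Object[] args, MethodProxy proxy)
                throws Throwable {
            String methodName = method.getName();
            boolean timed = methodName.startsWith("query")
                    || methodName.startsWith("batchUpdate")
                    || methodName.startsWith("update");
            long start = timed ? System.nanoTime() : 0L;
            try {
                // Forward the call to the real SimpleJdbcTemplate instance
                return method.invoke(simpleJdbcTemplate, args);
            } finally {
                if (timed) {
                    long elapsedMs = (System.nanoTime() - start) / 1000000L;
                    logger.info(methodName + " took " + elapsedMs + " ms: " + args[0]);
                }
            }
        }
    });
    // SimpleJdbcTemplate has no no-arg constructor, so pass constructor args to CGLIB
    this.jdbcTemplate = (SimpleJdbcTemplate) enhancer.create(
            new Class[] { DataSource.class }, new Object[] { dataSource });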