eclipse import static

Posted by anton
on Tuesday, January 26, 2010

i very much believe in a craftsman approach to software development, which, among other things, emphasizes the value of mastering the tools in your toolbox.

i consider using keyboard shortcuts in your IDE of choice part of this craftsmanship. i do sympathize with unclebob’s plea for mouse-less editing and with stevey’s earlier posts on the subject.

back in 2004 java5 went GA and introduced static imports among several other syntactic niceties.

this feature is most useful for static helper methods – instead of writing Assert.assertEquals, i would rather use assertEquals, since i know i am writing a test, and in the domain of testing, assertEquals does not need to be qualified. the same goes for many internal utility methods that i tend to use a lot (e.g. asList()) or static factory methods (e.g. newDateTime()). as a side note, naming a static factory method newXY() as opposed to create() makes it more suitable for static importing, since the unqualified name still says what it creates.
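
for illustration, the before and after in a junit4 test – a minimal sketch, and the test class itself is mine:

import static org.junit.Assert.*;

import org.junit.Test;

public class MoneyTest {
    @Test
    public void addition() {
        // with the static import there is no need to qualify the call
        assertEquals(42, 40 + 2);
        // without it, this line would read Assert.assertEquals(42, 40 + 2)
    }
}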

in my current IDE (Eclipse) i rely on auto-completion (Ctrl+1 programming), so i would start typing Assert., follow it with Ctrl+Space, and then manually convert the regular import into an import static. it was very undignified.

it turns out that in Eclipse, Ctrl+Shift+M, which i already used to import the dependency under the cursor, also works for converting static method calls into static imports.

now all i have to do is type Assert.assertEquals once, then press Ctrl+Shift+M (obsessively followed by Ctrl+Shift+O to organize imports), and i can start using assertEquals all over the place without qualifying it with Assert.

as an additional convenience, i always set "Number of static imports needed for .*" to 1 under Java -> Code Style -> Organize Imports in Eclipse preferences. this way a single static import of a method from Assert triggers an import static of Assert.*, which is what i want.

Public Enemy No.1

Posted by anton
on Wednesday, January 28, 2009

Alex Miller aka Pure Danger Tech has a great entry on the most common concurrency bugs.

the timing is perfect – this very thing bit me today. long story short: an old app, previously perceived to be multi-threaded, was recently converted to actually be multi-threaded, and then, once traffic ramped up a bit, exhibited peculiar behavior – perfectly good dates could not be parsed. thank god it blew up, as opposed to quietly corrupting the data.

so something as innocent-looking as a private static final SimpleDateFormat declaration was the culprit: java.text.DateFormat is not thread-safe.

luckily, it is easy enough to spot and reproduce (threadPoolSize and invocationCount in TestNG simplify it even further).
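
for the record, a minimal sketch of such a test – the class name, date pattern, and counts are mine, while threadPoolSize and invocationCount are the real TestNG attributes:

import java.text.ParseException;
import java.text.SimpleDateFormat;

import org.testng.annotations.Test;

public class DateFormatConcurrencyTest {
    // the innocent-looking culprit: one shared instance with mutable internal state
    private static final SimpleDateFormat FORMAT = new SimpleDateFormat("yyyy-MM-dd");

    // hammer the shared instance from 20 threads; expect ParseExceptions
    // or, worse, silently corrupted dates
    @Test(threadPoolSize = 20, invocationCount = 1000)
    public void parsesConcurrently() throws ParseException {
        FORMAT.parse("2009-01-28");
    }

    // one common fix: give each thread its own instance
    private static final ThreadLocal<SimpleDateFormat> SAFE_FORMAT =
            new ThreadLocal<SimpleDateFormat>() {
                protected SimpleDateFormat initialValue() {
                    return new SimpleDateFormat("yyyy-MM-dd");
                }
            };
}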

a pessimist would heave a mighty sigh, swear on a copy of JCIP once again, and set out to root out every frivolous static out there with the help of FindBugs or a simple regex.

meanwhile, there is joda-time, and the promise of jsr310.

but of course this whole experience still leaves you feeling cheated and dirty – why, god, why?! something so function-like and stateless in nature insists on stowing things away.

JUnit Joys

Posted by anton
on Thursday, January 31, 2008

i have been wrestling with some legacy code recently. beating it over the head with a copy of Working Effectively with Legacy Code did not do much, so i started writing tests to see how it worked.

after a few minutes of waiting for the eclipse + junit4.3.1¹ combo to load, i furiously coded a bunch of tests, and then realized that i could not make them fail. essentially it boiled down to the following:

assertEquals(1.0, 1.1);

...which quietly and happily passes. wtf?!! these are two doubles, just compare them and let’s move on with our lives!

then after a bit of thinking i recalled that most of my tests involving doubles were written on projects that were still on jdk < 1.5, where the method above simply would not compile, alerting me to the fact that junit expects assertEquals(double, double, delta), where delta is the precision you need.

why they couldn’t simplify my life by creating a couple of Double instances and calling equals() on them is beyond me. this is what i would want most of the time anyway.

since i was running tests under jdk1.5, autoboxing kicked in, and now we have two Doubles on our hands. fine, this should not be a big deal – simply call double1.equals(double2) and be done with it. what’s the big deal?

but nooooooooo, check out this little bundle of joy:

private static boolean isEquals(Object expected, Object actual) {
    if (expected instanceof Number && actual instanceof Number)
        return ((Number) expected).longValue() == ((Number) actual).longValue();
    return expected.equals(actual);
}

what the hell?!! you correctly detect that this instance of Double is an instance of Number, then take its long value and quietly throw the fractional part out. so all my tests never even squeak. why?!!!
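
the truncation is easy enough to see in isolation (a throwaway snippet of mine):

public class TruncationDemo {
    public static void main(String[] args) {
        Number expected = 1.0;  // autoboxing produces a Double
        Number actual = 1.1;
        // longValue() drops the fractional part, so both become 1L
        System.out.println(expected.longValue() == actual.longValue());  // prints true
    }
}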

an obvious approach is to suck it up and appease the api:

assertEquals(1.0, 1.1, 0);

this will work. but damn! my eyes!!

TestNG

we all know that junit is legacy, and version 4 was an afterthought, so let’s see what testng does. the api is the same, and under jdk1.5 my primitives get autoboxed into Doubles and assertEquals(Object, Object) gets called:

public static void assertEquals(Object actual, Object expected, String message) {
    if (expected == null && actual == null)
        return;
    if (expected != null && expected.equals(actual)) {
        return;
    } else {
        failNotEquals(actual, expected, message);
        return;
    }
}

voila! this is exactly what i expected. and guess what – it actually works and correctly fails the original test.

Lessons

  • man, this alone would scare me away from junit in favor of testng
  • make sure your test fails before it ever works!
  • what really frightens me is how many more of these quiet autoboxing errors are lurking out there. what previously would be caught by the compiler now silently works in unpredictable ways
  • yet another thing to be aware of when upgrading your app from jdk1.[34] to jdk1.5+

Note

this junit bug has been fixed in junit4.4 and up

Ruby

interestingly enough, on a few skunkworks projects this year i cobbled together a bunch of existing libraries with ruby glue and used (j)ruby’s Test::Unit (which conveniently comes with the ruby distro) and rspec for testing the result.

it definitely makes sense for a project written in (j)ruby that uses many existing java libraries; it might be a bit of a mindset shift for a java project looking for simplified testing. for now i am keeping an eye on projects like JtestR.

¹ the default version of junit that ships with eclipse 3.3

dumping sybase schema

Posted by anton
on Tuesday, October 30, 2007

currently i have the privilege of working with sybase 12.5. perhaps i am spoiled by the ease of mysqldump db_name [tables] or echo .dump [tables] | sqlite3, but i expect any modern database to have a scriptable way to dump the schema for selected tables as create statements, as well as the data as insert statements that could simply be piped back in when needed.

while dumping data is easy enough using bcp, scriptable schema extracts are a bit trickier (especially if you want them to be cross-platform).

but we are lucky – it took sybase only 20+ years to introduce a command-line utility written in java called ddlgen, which arrived in version 15 of its flagship enterprise product (in my case it worked against 12.5 as well).

ddlgen

  • lay out the files:

$ ls -lR c:/programs/sybase/ddlgen/
c:/programs/sybase/ddlgen/:
    ddlgen.sh
    lib/

c:/programs/sybase/ddlgen/lib:
    DDLGen.jar
    dsparser.jar
    jconn3.jar

  • rig up the wrapper script:

$ cat c:/programs/sybase/ddlgen/ddlgen.sh 
JAVA_HOME=c:/programs/java/jdk/jdk1.6.0_03
LIB_DIR=`dirname $0`/lib
CLASSPATH=$LIB_DIR/jconn3.jar:$LIB_DIR/dsparser.jar:$LIB_DIR/DDLGen.jar

$JAVA_HOME/bin/java \
-mx500m \
-classpath `cygpath --mixed --path $CLASSPATH` \
com.sybase.ddlgen.DDLGenerator "$@"

backup scripts

  • schema-out.sh

source env.sh

[ ! -d $OUT_DIR ] && mkdir -p $OUT_DIR

for table in $TABLES; do
    out_file=`cygpath --mixed --absolute $OUT_DIR/${table}-schema.txt`
    printf "dumping $table schema to $out_file... " 
    $DDLGEN -U $USERNAME -P $PASSWORD -S $SERVER:$PORT -D $DATABASE -TU -N$table -O $out_file
    printf "done\n" 
done

  • bcp-out.sh

source env.sh

[ ! -d $OUT_DIR ] && mkdir -p $OUT_DIR

LOG=`dirname $0`/bcp-out.log
cat /dev/null > $LOG

for table in $TABLES; do
    out_file=`cygpath --mixed --absolute $OUT_DIR/${table}-bcp.txt`
    printf "dumping $table to $out_file... " 
    bcp $DATABASE.dbo.$table out $out_file -c -t, -S $SERVER -U $USERNAME -P $PASSWORD >> $LOG
    printf "done\n" 
done

  • schema-in.sh

source env.sh

LOG=`dirname $0`/schema-in.log
cat /dev/null > $LOG

for table in $TABLES; do
    in_file=`cygpath --mixed --absolute $OUT_DIR/${table}-schema.txt`
    [ ! -f $in_file ] && echo "$in_file does not exist for $table, skipping" && continue
    printf "loading $table schema from $in_file... " 
    isql -S$SERVER -U$USERNAME -P$PASSWORD < $in_file >> $LOG
    printf "done\n" 
done

  • bcp-in.sh

source env.sh

[ ! -d $OUT_DIR ] && mkdir -p $OUT_DIR

LOG=`dirname $0`/bcp-in.log
cat /dev/null > $LOG

for table in $TABLES; do
    in_file=`cygpath --mixed --absolute $OUT_DIR/${table}-bcp.txt`
    [ ! -f $in_file ] && echo "$in_file does not exist for $table, skipping" && continue
    printf "loading $table from $in_file... " 
    bcp $DATABASE.dbo.$table in $in_file -c -t, -S $SERVER -U $USERNAME -P $PASSWORD >> $LOG
    printf "done\n" 
done

do i feel silly? yes. do i feel petty? yes. does it make me feel better about myself, given that the sybase DBA told me to contact dbartisan support to see if i could script their tool to do this? oh yes.

cafe babe

Posted by anton
on Saturday, October 27, 2007

background: 10K+ compiled class files and sources that got out of sync¹; need to figure out which sources are valid, and which ones are not.

decompiling things is the last resort, since the sources produced are not easily diff‘able against the sources you’ve got. the likes of diffj are not much help either, and i did not even want to go down the rabbit hole of normalization through obfuscators.

so if you do not feel like wielding antlr or javacc to normalize the two sources, the obvious approach is to simply recompile and compare against the existing class files (just beware of class files that might not have any sources at all).

however, keep in mind that javac by default includes a line number table in the class file it produces². this means that adding or removing even a comment line or a blank line ahead of the actual code results in a class file that is different from the original.

sometimes you have another top-level class inside the .java file (not to be confused with inner classes); it gets compiled into a separate class file. so if the main class’ source code has changed, it affects the line number table of the other class as well – even though the other class’ source has not changed, its generated class file will be different.

in my case i also had to check which jdk compiler produced the class files. one can always opt for javap, which prints the minor and major versions, or, if you are feeling manly enough, whip out your favorite hex editor and check bytes 6 and 7 (per the vm spec). in general, javap is the easiest way to inspect the internal structure of a class file.
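
if you would rather script it than squint at a hex editor, a quick sketch (the class is mine) that reads the class file header directly:

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassVersion {
    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
        int magic = in.readInt();           // bytes 0-3: 0xCAFEBABE
        int minor = in.readUnsignedShort(); // bytes 4-5: minor version
        int major = in.readUnsignedShort(); // bytes 6-7: major version (48 = jdk1.4, 49 = jdk5, 50 = jdk6)
        in.close();
        System.out.printf("magic=%x minor=%d major=%d%n", magic, minor, major);
    }
}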

finally, to diff files and directories i simply used svn – check the originals into the local repo, then put your stuff in a working copy – it will do the rest. after all, this is what it’s good at.

oh and why CAFE BABE? look it up

¹ this is a whole different and interesting topic – how any sort of generated content creates a possibility of this disconnect. all these xdoclets, jaxb-generated sources, and even compiled classfiles create artifacts that now have the potential of getting out of sync with the sources. yes, proper engineering practices mitigate the risks, but all things being equal, i like the fact that with scripting languages this problem is largely non-existent. what you see is what you run.

² you could always run javac with -g:none to get rid of line numbers, but it was no help in my case.

for fun and profit

Posted by anton
on Thursday, September 27, 2007

if you enjoyed everyone’s favorite upside-down-ternet way of making new friends, this whimsical bit is right up your alley.

it is based on a cross-site request forgery (CSRF) attack.

briefly, these are attacks that trick you into submitting a potentially damaging request to an application you are logged in to. so if you receive an email with a link to http://www.google.com/setprefs?hl=ga and click it, your google language preference gets set to irish.

thus you could try to impress those inquisitive souls looking for things on your site with the following apache config directive:

RedirectMatch \.(php|phtml|phps|php3)$ http://www.google.com/setprefs?hl=xx-klingon

therefore any request to a booby-trapped url on your site (in this case anything that ends in php) would set their google search language to klingon.

(stolen from here)

of course, it does not have to be an explicit server-side redirect – similar behavior can be triggered with javascript, iframes, etc.

how do you protect against it? the app has to use unique tokens in the forms presented to the user (or one can start lugging around those encrypted URLs again – anyone remember IBM’s Net.Commerce?)
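
a minimal sketch of the token approach against the servlet api – the helper and the attribute/parameter names are mine, not from any particular framework:

import java.math.BigInteger;
import java.security.SecureRandom;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

// hypothetical helper: issue() when rendering the form, isValid() before acting on the POST
public class CsrfToken {
    private static final SecureRandom RANDOM = new SecureRandom();

    // emit the returned token as a hidden form field
    public static String issue(HttpSession session) {
        String token = new BigInteger(130, RANDOM).toString(32);
        session.setAttribute("csrf.token", token);
        return token;
    }

    // a forged cross-site request cannot know the token stored in the victim's session
    public static boolean isValid(HttpServletRequest request) {
        Object expected = request.getSession().getAttribute("csrf.token");
        return expected != null && expected.equals(request.getParameter("csrf_token"));
    }
}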

since i am (somewhat reluctantly and half-asleep) reading gibson’s latest, and since these days i mostly appreciate him for sensing the zeitgeist and popularizing new art forms, i cannot shake off the feeling that there is an art piece lurking in here.

rtfm or how i spent my sunday evening

Posted by anton
on Monday, May 21, 2007

dreading the impending move (or rather using it as an excuse to perform many long-overdue housekeeping tasks), i have been virtualizing various OS'es around the house (note how conveniently that eclipsed tasks like getting rid of old furniture or boxing things up – truly, one is always tempted to reduce everything to a technical problem).

first of all, note that there is a handy dandy jolly tool from vmware: vmware converter; it will happily virtualize your windows machine (no more screwing around with ghost/acronis/dd, like i used to).

still, i had a gentoo server confined within a wheezing old machine that absolutely had to stay up (so no cold restores). casting a nostalgic glance in the direction of the dd/netcat combo, i opted for partimage and SysRescCD. but before that: dd if=/dev/hda of=hda-mbr-full bs=512 count=1 to save the mbr (and dd if=hda-mbr-full of=/dev/hda bs=512 count=1 to restore it); then sfdisk -d > file to dump the partition table, and sfdisk < file to restore it (nifty! i did not know that). boot from the rescue cd, create the partitions, restore the partimage droppings (i had to do it several times – apparently it is no smarter than dd: no checksums!), mkswap, etc, etc.

so far so good, but the main reason for this post is the mysterious vmware-modules – i tried emerging them, tried downloading them, read up on forums, installed the client api/sdk available from the download sites, even read the docs! basically, being a gentoo user, i did not have generic drivers for my network card; once transplanted to vmware, the OS refused to acknowledge the network hardware. vmware-modules were referred to as the ultimate answer.

finally, somewhere in the depths of the google cache i found the answer – the drop-down VM menu, then "Install VMware Tools...", followed by mount /dev/cdrom /mnt/tmp, gives you the ultimate joy: a vmware-tools tarball. man, how is that not obvious?! under windows this runs an installer, but under linux it quietly slips in an iso under the guise of /dev/cdrom.

now it's just typing and joyous cargo-culting: run the installer script, build the kernel modules, check them with lsmod, symlink net.eth0 to network (boy, do i feel dirty), play some more tricks to appease the gentoo startup script gods and vmware's reliance on a redhat-like rc, and voila – vmware starts before net*, and eth0 pops up in ifconfig once you have tweaked /etc/conf.d/net. (do not forget to go under Host -> Virtual Network Settings -> Automatic Bridging and add all the crap like VPN pseudo-adapters to the exclusion list, otherwise it will get you, like it gets me every time; then set vmnet0 as the auto-bridged interface to be used for the vm.)

phew! now i pick up a bottle of red and promptly forget all this nonsense.

solaris 8 threading

Posted by anton
on Sunday, August 06, 2006

another quick joyous encounter: one of my co-workers had been half-heartedly beating his head for a month against a heavily-threaded java-based app from a third-party vendor. the app ran on solaris sparc 8, and on a 4-way box it drove the sysload above 100 (!) while cpu utilization stayed below 20% and prstat showed more than a thousand threads. in passing i asked him if he had tried the alternative thread library (another reference) – i had always used it for our java app servers on solaris 8, but never saw any notable improvement, since there would be fewer than a hundred threads per JVM. in this case, however, the library solved the problem – the system load instantly dropped to 1-2.

initially i liked the M-to-M solaris 8 thread library, since its complexity was quite sexiful to anyone studying it theoretically, but apparently the simpler 1:1 threading model is much more effective in the long run from many perspectives, which is why it became the default in solaris 9.

serendipitous apache tinkering

Posted by anton
on Sunday, August 06, 2006

this is something i accomplished at work in the past month that was sort of peripheral to my "main" job. it brought a much-needed sense of accomplishment in the midst of fighting fires and dealing with incompetence. for once i had all the people i needed close by, and i had everything i needed to get the work done.

the whole thing was merely replacing a cisco reverse proxy/ssl termination device with an apache server. i was briefly involved in the original solution, steering them in the right direction (sadly, by pointing a cisco consultant at their own docs to prove that they did indeed have reverse proxy and url rewriting functionality). this time around, however, when i got involved, it turned out that the cisco device could not handle the traffic at all due to firmware issues, so something needed to be done in a day or two.

it was so gratifying to be able to run the whole thing through to completion, working through firewalls/certs/nat'ting, compiling/testing, and rolling this stuff out in a matter of several hours, complete with some quickly whipped-up load testing and monitoring. granted, it was just a dozen internet-facing proxied sites, something i have done so many times before, but showing the skeelz off – especially since it was not even my job, technically – and doing it all in a few hours with all of these folks watching was a nice uplifting experience after long nights of frustration.

the sad thing is that all the folks who had been working on this stuff for the past three months had very little understanding of the underlying technology (and that is, of course, even worse than no understanding at all). all of it was integration of packages into a portal, serving them via SSL to the end user, but all they had were consultants for each of the packages who knew only the terminology and the high-level details of how their stuff worked. so what i witnessed that night was the picture i had seen every single day on this project – a constantly growing school of fish darting back and forth; as it grows in size, the movement becomes increasingly erratic. the primary reasons: too many people involved, too few people actually understanding what is going on.

but the technical reason i mention this at all: although apache 2.2.2 has rewritten its proxying code and made it much better, it has some bizarre problems handling connections to the backend server (most likely IIS-specific) that result in remote clients getting a 502 proxy error and the following line showing up in the error logs:

proxy: error reading status line from remote server (null)

there are a couple of bugs filed on the apache bugzilla, but nothing confirmed yet:

http://issues.apache.org/bugzilla/show_bug.cgi?id=37770
http://issues.apache.org/bugzilla/show_bug.cgi?id=39499

since my stuff was compiled with the worker mpm, the easiest workaround was to use SetEnv proxy-nokeepalive 1. other potential workarounds mentioned in the bug reports are:

  • use a prefork process model as opposed to worker
  • downgrade to apache-2.0

what happens is as follows: traffic flows for a while and there are no problems; then traffic stops for 10 minutes; then the first few requests to hit those stale connections to the backend server get the 502 error, without ever reaching the backend server. there is nothing in between that kills these connections, and it is not always reproducible. since i was under time constraints, i just let it be after applying the workaround.

another thing to keep in mind, one that always confuses me with the apache reverse proxy docs: given a frontend server and a backend server, this is how the rules should look:

ProxyPass /path http://backendserver:port/path
ProxyPassReverse /path http://backendserver:port/path

in other words, both the ProxyPass and ProxyPassReverse directives have to refer to the same backend server, otherwise the reverse proxy rewriting just will not work.

xml doctypes 2

Posted by anton
on Tuesday, July 18, 2006

something that bit me recently: editing sqlmap-config.xml for ibatis and getting strange xml validation errors during deployment:

Error parsing XML. org.xml.sax.SAXParseException: Element type "sqlMapConfig" must be declared.

the file looked perfectly fine, and myeclipse happily validated it (i could change a property and get a validation error); however, during deployment it failed.

it turns out that the problem was in the DOCTYPE declaration. what i had was

<!DOCTYPE sqlMapConfig
  PUBLIC "-//ibatis.apache.org//DTD SQL Map 2.0//EN"
  "http://ibatis.apache.org/dtd/sql-map-config-2.dtd">

what i should have had was

<!DOCTYPE sqlMapConfig
  PUBLIC "-//ibatis.apache.org//DTD SQL Map Config 2.0//EN"
  "http://ibatis.apache.org/dtd/sql-map-config-2.dtd">

a small typo that cost me a couple of hours of grief and confusion. reading up on it here, i learned that what threw me off was the Formal Public Identifier (FPI), which has the following syntax:

"Owner//Keyword Description//Language"

therefore my Description was incorrect ("SQL Map" instead of "SQL Map Config"). once it was fixed, everything worked as expected.

i suppose i knew that browsers, for instance, use the doctype to decide which rendering mode to use, but i guess in this case i expected a gentle warning message in the console, a soft friendly whisper from the IDE – not globs of violent stacktraces.
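
to catch this locally rather than at deployment time, one could run the file through a validating parse; a quick sketch (mine), assuming the DTD is reachable:

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;

public class ValidateXml {
    public static void main(String[] args) throws Exception {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        factory.setValidating(true); // without this, a bad doctype slips through quietly
        SAXParser parser = factory.newSAXParser();
        // DefaultHandler swallows recoverable validation errors; fail loudly instead
        parser.parse(args[0], new DefaultHandler() {
            public void error(SAXParseException e) throws SAXException {
                throw e;
            }
        });
        System.out.println("valid");
    }
}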

rails, native mysql bindings, and different mysql versions

Posted by anton
on Friday, March 17, 2006

a simple setup: i have a system-wide mysql 4.0 install with some rails apps running against it. another few apps need mysql 5.0.

that should be simple, right? install mysql 5.0 into its own isolated directory, point the app's driver/adapter at that database, and we are done (at least that is the java way).

except in ruby the native bindings are installed into the core of the language itself (or, with a separate gem home set up as it should be, into that one shared gem home), so i have to recompile them in a not-so-obvious manner, passing the config parameter (not documented in the gem manual):

gem install mysql -- --with-mysql-dir=/usr/local/mysql5/current

so ideally i should be able to have my own adapter on a per-application basis. i guess this is a drawback of relying on native bindings (java, for example, does not).

and more cygwin

Posted by anton
on Tuesday, February 07, 2006

another little cygwin gem (pun intended): if your webrick starts but outputs nothing and never binds to a port (digging through the sources reveals that it just hangs in a backticks shell command call, but works fine if you run it standalone without any rails stuff, or if you pass webrick as an argument to your server script on the command line), or if you get bizarre memory relocation / unable to remap fatal errors when running rake, then try the following:

  • shutdown/exit all your cygwin processes
  • run ash from start/run (it is under bin\ash in your cygwin directory)
  • then run rebaseall

this solved it for me. see this post for some details.

this cygwin/rails setup is a bundle of joy, i tell you. never a dull moment on these lovely long winter nights.

default cygwin terminal and environment

Posted by anton
on Sunday, February 05, 2006

anyone who has installed cygwin and used it out of the box knows about the limitations of the default terminal: the awkward scrolling errors, the resizing pain, the limits on the scroll buffer, the colors, etc.

i never bothered to get it fixed. until now, that is. use rxvt instead (you need to install it first): create a shortcut with the following command line:

D:\programs\cygwin\bin\rxvt.exe \
-vb -sr -sl 20000 \
-fn courier \
-g 120x50 \
-e /usr/bin/bash \
--login -i

man rxvt to see what those actually mean. then edit your .bash_profile (in case your $HOME is unnatural and your .bashrc does not get read) and add the following:

alias less='/bin/less -r'
alias ls='/bin/ls -F --color=tty --show-control-chars'

you should also put the following in your .vimrc:

syntax enable
filetype on
filetype plugin on
set ts=2
set number
set ai
set si

and you also might want to grab rhtml syntax plugin for vim.

this will get the expected stuff working (ctrl-pgup/pgdn, colors, proper terminal handling when you login to remote hosts via ssh, etc). note that you copy on selection and paste with the middle mouse button or shift-insert.

rails on cygwin

Posted by anton
on Saturday, February 04, 2006

finally, it works out of the box: update cygwin, run "rails blah", then "cd blah" and "script/server", and voila! ruby 1.8.4 and rails-1.0.0 *gasp*

well, if you really want to use it, you also need to fix the incompatibilities between rails-1.0.0 and rake-0.7.0: cd into /usr/lib/ruby/gems/1.8/gems/rails-1.0.0, run "for i in `find . -type f`; do grep inline-source $i && echo $i; done", and fix all occurrences of << 'option option' to read << 'option' << 'option'.

but that's just details, right? who cares about those little things? oh the joy!

e-tag and server farms

Posted by anton
on Monday, November 14, 2005

most people do not do much with e-tag http response headers, and that is probably ok, unless one is really trying to get the most out of client-side caching.

the following was written for apache 1.3.x, but it is still relevant for apache 2.x:

An ETag is an HTTP response header returned by an HTTP /1.1 compliant Web server such as Apache 1.3x. By default, Apache calculates an ETag for a requested file using a combination of the file's location in the file system (I-Node number on Unix systems), its modification time, and its size. [..]

Because the ETag is calculated using the file's I-Node, and an I-Node is machine-specific, administrators of Web server farms will experience unexpected requests if the ETag differs from machine to machine.

To work around this issue, use the FileETag directive to configure your Apache server to use only the file modification time and file size when calculating the ETag.

The following example configures Apache to only use the modification time (MTime) and size (Size) when calculating the ETag for any file contained in the /usr/local/httpd/htdocs directory or a subdirectory.

<Directory /usr/local/httpd/htdocs>
FileETag MTime Size
</Directory>

in other words, if you are running multiple web servers that serve static content for which apache provides e-tag response headers, it might make sense to use the directive above to make sure that the e-tag values do not differ from machine to machine for the same content. i can confirm that this is the case on unix, but i have not verified it on windows.