We've been spiking long enough to learn some key things, and it'll soon be time to create better stories and estimate them. Today we finish up the spike on density and learn a few things about those libraries everyone thinks we should use.

Coordinating and Cleaning

Chet and I met at Amer’s today, so that we’d have Internet. We spent much of the morning looking at code and trying to find better ways to do things.

For example: the BMP file definitely comes in “upside down” in the Raster. That is, the row of pixels at the beginning of the raster is in fact the row that should show up on the bottom of the picture, not the top. We spent considerable time trying to find out whether there was a way to read the file that was independent enough to allow us to read any file and get its pixels in the right order. In our research, it seemed that there’s not much powerful code in the Java libraries for processing input graphics: most of what it does is for outputting graphics.

We wound up inverting the picture in the code, and rewrote the tests to produce the right results. Here’s the code:

    public ShotPattern(String fileName) {
        Raster farian = raster(fileName);
        width = farian.getWidth();
        height = farian.getHeight();
        int yOffset = height / 2;
        int xOffset = width / 2;

        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                addHit(farian, yOffset, xOffset, y, x);
            }
        }
    }

    private void addHit(Raster raster, int yOffset, int xOffset, int y, int x) {
        if (isHit(raster, y, x)) {
            int invertedYforBMP = -(y - yOffset);
            this.hits.add(new Hit(x - xOffset, invertedYforBMP));
        }
    }

As far as we know, other file formats, such as GIF, paint top down, not bottom up. No matter for now, and perhaps forever: since it’s our app, we own the input format.
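The flip is easy to get backwards, so here’s a tiny standalone sketch of the same centering-and-negation arithmetic that addHit applies. The class and method names here are ours, purely for illustration, not from the project’s code:

```java
// Sketch (names ours): the invertedYforBMP arithmetic in isolation.
// Centering on the middle row and negating maps increasing raster rows
// to decreasing output y, reversing the bottom-up row order of the file.
public class BmpYFlip {
    // Same expression as invertedYforBMP in addHit above.
    static int invertedY(int y, int yOffset) {
        return -(y - yOffset);
    }

    public static void main(String[] args) {
        int yOffset = 10 / 2;  // a hypothetical 10-row raster
        System.out.println(invertedY(0, yOffset));  // first file row -> 5
        System.out.println(invertedY(9, yOffset));  // last file row  -> -4
    }
}
```

Note that the first and last rows swap ends of the range, which is the whole point of the inversion.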

It took a huge amount of surfing to discover that there was apparently nothing to discover.

Then we looked at my density code, which, as you may recall from the previous article, was pretty ugly. We thought that using the Rectangle class, and asking the Rectangle whether it contained our point, might be a better way to count up the density. The new code looks like this:

    public int[] analyzeDensity(int width, int height) {
        int[] result;
        int numX = this.width / width;
        int numY = this.height / height;
        result = new int[numX * numY];
        int areaY = 0;
        for (int y = 0; y < numY; y++) {
            int areaX = 0;
            for (int x = 0; x < numX; x++) {
                int location = numX * y + x;
                result[location] = density(areaX, areaY, width, height);
                areaX += width;
            }
            areaY += height;
        }
        return result;
    }

    private int density(int left, int top, int width, int height) {
        int count = 0;
        Rectangle rect = new Rectangle(left, top, width - 1, height - 1);
        for (Hit hit : hits) {
            int x = hit.getX();
            int y = hit.getY();
            if (rect.contains(x, y))
                count++;
        }
        return count;
    }

You’ll note that we changed the ShotPattern to know its width and height, which entailed nothing tricky. And we changed the parameters of the analyzeDensity method so that the number of replications is no longer passed in; it’s computed inside instead.

The rectangle trick itself was disappointing. Note the occurrences of -1 in creating the rectangle. It turns out that a Java rectangle is closed at the right and bottom. That is, if you have a rectangle from 0,0, size 3,3, the point 3,3 is in the rectangle. This meant that as we stepped across the picture moving the rectangle, more than one rectangle would accept the points (all of which are on integer boundaries). What we needed was an open rectangle, i.e. one that goes from (0,0) to (2.99999, 2.99999). Java seems not to provide such a beast. So we had to approximate the behavior with our -1 code.

This is irritating, and we will very likely build our own rectangle class to help with this, but today wasn’t the day.
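When that day comes, a minimal sketch of such a class might look like the following. The names OpenRect and contains are ours, purely illustrative, not anything we’ve actually built:

```java
// Sketch of the "open" rectangle we wished for: contains() includes the
// left and top edges but excludes the right and bottom ones, so adjacent
// tiles on integer boundaries never both claim the same point.
public class OpenRect {
    final int left, top, width, height;

    OpenRect(int left, int top, int width, int height) {
        this.left = left;
        this.top = top;
        this.width = width;
        this.height = height;
    }

    // Half-open: [left, left + width) x [top, top + height).
    boolean contains(int x, int y) {
        return x >= left && x < left + width
            && y >= top  && y < top + height;
    }

    public static void main(String[] args) {
        OpenRect a = new OpenRect(0, 0, 3, 3);
        OpenRect b = new OpenRect(3, 0, 3, 3);   // the tile just to the right
        System.out.println(a.contains(3, 1));    // false: right edge excluded
        System.out.println(b.contains(3, 1));    // true: exactly one tile owns the point
    }
}
```

With half-open tiles, the -1 adjustment in density() would go away.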

In addition, we had the idea of “pushing” the rectangle up the call hierarchy, making it a parameter to the analyzeDensity method and using it in the tests as well. That would make everyone’s job easier. But time didn’t permit doing that today, so it’ll be left until another day, unless I do it this afternoon just for fun.
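We haven’t written that refactoring, but here’s one guess at its shape: the caller hands in a sample tile as a Rectangle, and the analysis marches copies of it across the picture. Everything here, names included, is our speculation, not the project’s code:

```java
import java.awt.Rectangle;
import java.util.ArrayList;
import java.util.List;

// Speculative sketch of the deferred refactoring: the tile rectangle is a
// parameter, and the sweep produces one positioned copy per grid cell.
// analyzeDensity would then count hits inside each of these rectangles.
public class TileWalk {
    static List<Rectangle> tiles(Rectangle sample, int pictureWidth, int pictureHeight) {
        List<Rectangle> result = new ArrayList<>();
        for (int top = 0; top + sample.height <= pictureHeight; top += sample.height) {
            for (int left = 0; left + sample.width <= pictureWidth; left += sample.width) {
                result.add(new Rectangle(left, top, sample.width, sample.height));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Rectangle> tiles = tiles(new Rectangle(0, 0, 10, 10), 20, 20);
        System.out.println(tiles.size());  // 4 tiles cover a 20x20 picture
    }
}
```

The appeal is that the tile geometry lives in exactly one object instead of being smeared across four int parameters.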

The Day's Results

We made only a tiny bit of progress in the code today, spending much of our time beating our heads against the public libraries and trying to find out what they do. We are sure that had we built our own rectangle object, we would have gotten a lot more done. But our readers have all been pushing on us about using available libraries, because it’s “easier” and “faster” to do so. As far as we’re concerned, this theory remains unproven.

We’re very interested in cleaning up the code a bit more … but it’s just spike code, so we probably shouldn’t. And we would really like to do at least one experiment on outputting a mixed text and graphical report: at this writing we have no real experience on which to base our estimates.

Overall, five articles notwithstanding, we have only about four sessions (less than eight pair hours) of work together, plus a couple of hours working on our own in the evening or over the weekend. As we often do that anyway, we’re inclined not to count those hours at all.

So we are a day, or a day and a half into the project, and we have a bunch of running tests and code. It’s time to look more closely at the stories for the application.

The Plan

Tomorrow, God willing and the snow don’t accumulate, we’ll meet in Brighton, prepare story cards, and see what we can do about estimating them. We’ll have some that are too big, or about which we know so little, that we can’t estimate them. We’ll propose spikes for those, and improve their estimates thereby.

And we’ll talk about testing. This application is very interesting in that the input is all graphical and much of the output is graphical or free-form. Furthermore, there are no hard and fast rules for identifying whether a bunch of black pixels on the screen represent one pellet’s hit, or several.

Today, Chet brought the target from which he made our bitmap picture, and we inspected it. It’s about 3 feet square (a bit less than one meter), and when we compared the black and white pixels with the holes, we found that we could see separate pellet holes in the paper, but all the picture picks up is an irregular constellation of pixels. Here’s a 4X picture of the area around (1400,500) on the target:


Notice that big blob there in the middle. That appears to the eye to be made up of three pellet strikes close together. More importantly, perhaps, there is a fourth strike just to the left of that blob, clearly visible to the naked eye. But that strike is either missing from the photo, or part of the big blob.

Note that these are the pixels that the camera actually picked up, at 2048x1536. So we’ll either need to accept this accuracy (which I suspect will give perfectly good results), or go to a higher resolution camera.

Either way, we’ll also have to identify these clumps as representing single holes (or artifacts that we can’t distinguish from single holes), and then perhaps find a way to approximate how many pellets must have made a hole of that size. Again, perfect accuracy is probably not important, but Chet feels that we’ll need at least to approximate the number of pellets in a big hole.
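One classic way to find such clumps is a flood fill over the hit pixels, counting connected groups. The sketch below is our assumption about how that might start, not anything we’ve built: the boolean grid, the 8-way adjacency, and all the names are hypothetical.

```java
import java.util.ArrayDeque;

// Speculative sketch: count "clumps" of hit pixels by flood-filling each
// connected group (8-way adjacency) exactly once. A later step might map
// each clump's pixel count to an estimated number of pellets.
public class ClumpCounter {
    static int countClumps(boolean[][] hit) {
        int rows = hit.length, cols = hit[0].length;
        boolean[][] seen = new boolean[rows][cols];
        int clumps = 0;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                if (hit[r][c] && !seen[r][c]) {
                    clumps++;
                    flood(hit, seen, r, c);
                }
            }
        }
        return clumps;
    }

    // Mark every pixel 8-connected to (r, c) as visited.
    static void flood(boolean[][] hit, boolean[][] seen, int r, int c) {
        ArrayDeque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[] { r, c });
        seen[r][c] = true;
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            for (int dr = -1; dr <= 1; dr++) {
                for (int dc = -1; dc <= 1; dc++) {
                    int nr = p[0] + dr, nc = p[1] + dc;
                    if (nr >= 0 && nr < hit.length && nc >= 0 && nc < hit[0].length
                            && hit[nr][nc] && !seen[nr][nc]) {
                        seen[nr][nc] = true;
                        stack.push(new int[] { nr, nc });
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        boolean[][] grid = {
            { true,  true,  false, false },
            { false, true,  false, false },
            { false, false, false, true  },
        };
        System.out.println(countClumps(grid));  // 2: one 3-pixel blob, one lone pixel
    }
}
```

Estimating pellets per clump from clump size would be a separate, and much fuzzier, step.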

We’re also finding ourselves mystified by how we’ll set up acceptance tests for this application. The app is inherently graphical (in a way like a video game), and the decisions it makes seem often to be rather subjective. Yet we know that we can’t go forward without acceptance tests, without people complaining to us that we’re not living by our own standards. So there will be some learning to do on that.

In any case, that’s my report for the day and my prediction for tomorrow. Watch for the next update, and feel free to write to us!