Friday, February 12, 2010

Scrum and the Real World

Recently, I attended a two-day CSM (Certified Scrum Master) training class from a local company that both practices Scrum internally and coaches other companies wishing to move to Scrum. It was a good class, though I'd attended essentially the same class about two years ago when I first came to my current company.

Let me start by saying I'm a fan of Scrum; I tend to think it's a reasonably sane approach to managing a project. But I don't think it's the be-all-end-all of software development, and I think there are other ways to get to the same destination, namely high-quality software, happy people, and minimal overhead.

My biggest problem with Scrum is that I feel its proponents sometimes sweep "real world" issues under the table a little too quickly; they often lack a certain pragmatism. To be fair, at least one of the two instructors this time around seemed fairly pragmatic. But I wrote him after the training, and here's the email I sent.

I hate to sound like a broken record, but where I come into conflict with those who preach Scrum (being a big fan myself) is that they presuppose a "start state" that just does not exist in many cases. Some of the biggest mistakes I feel I've made with Scrum have been to cede authority to a team that was simply not equipped to handle such responsibility properly.

I liken it to giving a credit card to a college student away from home for the first time. This student should be responsible with his credit card. He should pay off the balance each month so there is no interest. He shouldn't make purchases he would not otherwise make if he didn't have the credit card. These are all absolutely reasonable expectations to have for the recipient of our credit card.

So, we give him the credit card. He's "empowered". What does he proceed to do? Maybe he does exactly those things. But for some, even though previously they might have been fiscally responsible, the notion of a credit card is intoxicating. Such a person suddenly finds himself racking up all sorts of purchases he can never pay off, and with no one watching over him he slides deeper and deeper into a debt from which he can never extricate himself.

So, is the answer then never to give this person a credit card? How do you teach the person to use the card responsibly without giving him a card with a $20,000 limit right off the bat?

Some of my failures with Scrum were due to placing *too* much trust in people who didn't really deserve it. Oh, they seemed to deserve it, but they didn't. And fundamentally that comes down to people. But "people" are always the hard part. As I think you guys mentioned, put a team of great people on a project and, irrespective of methodology, they'll figure it out.

If the prerequisite to make Scrum work is a team of highly motivated, competent people who value the things that Scrum values, I’d submit it’s not adding much value. They would succeed anyway. Sure, Scrum’s a great way to get to that destination, but I’d submit plenty of other approaches that would work just as well.

Just teaching people Scrum and repeating the tenets of the Agile manifesto to them doesn't make them *truly* value those things. Some percentage will really convert, but many will still value what they always valued. They will never apply Scrum with enough thought or criticality to be successful. And if they do, I would submit they'd probably succeed just as well without Scrum (or with a whole lot of Scrum-buts).

To me, the real promise of Scrum is for teams that *are* dysfunctional. What if everyone isn't motivated? What if everyone isn't competent? Because in more companies than not, this is the case. Sure, you can say fire the bad people, get only good ones, etc. (and I try to do this), but let's be frank about the reality of many people's lives. I would suggest Scrum still holds an awful lot of value there. And that's where the gap is. Making Scrum work with the dream team is easy. Making it work with reality is what is so tough, and what I think is sometimes neglected.

Wednesday, January 20, 2010

Looking Backwards

Recently, my director published a new "capacity sheet". It's an Excel sheet with a high-level summary of all the projects scheduled for the upcoming year, with breakdowns in hours. The hours themselves are based on some simple formulae. A salesperson assigns the project a t-shirt size (S, M, L, or XL) and this translates to a predefined number of hours.
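The translation the sheet performs is essentially a lookup table. The sizes and hour counts below are hypothetical (I'm not reproducing our actual sheet's numbers), but a minimal sketch of the idea looks like this:

```python
# Hypothetical t-shirt-size-to-hours lookup; the real sheet's numbers differ.
TSHIRT_HOURS = {"S": 200, "M": 500, "L": 1200, "XL": 2500}

def estimated_hours(projects):
    """Sum the boilerplate hour estimates for a list of (name, size) pairs."""
    return sum(TSHIRT_HOURS[size] for _, size in projects)

plan = [("Project A", "S"), ("Project B", "L"), ("Project C", "M")]
print(estimated_hours(plan))  # 1900
```

The point is that no property of the actual project enters the calculation; only the salesperson's gut-feel size does.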

The intention of the sheet is not to plan capacity of our systems (i.e. transactions per second, etc), but rather see if we have sufficient resources to do the work. Let me start off by saying, I like the idea...in principle. In fact, at my old company it seemed no thought was given to actually staffing for the work coming in, which meant all of us ended up working ridiculous hours. So, the idea is a good one.

The problem is the implementation. It's fine to assign t-shirt sizes to projects that are 9-10 months out and haven't even been defined (you can't really do anything else), but we use these boilerplate numbers even for projects that are nearly started. Further, the farther out in time you go, the more meaningless the exercise becomes, because invariably the client changes his wishes and his prioritization of work. Now, I made these points the last time my group went through the exercise, and that's not really the point of this blog.

The point of this blog is that I've not heard one person mention the need to look back at the exercise we went through last year. Having gone through it before and having voiced a number of concerns about its value, I decided to go back and look at the sheet from last year. What I found there was very telling and hopefully will persuade others to reevaluate the way we're handling things this year.

Almost 70% of the projects on our plan never even happened. And for those we did do, the boilerplate estimates we used were not even close to the actuals. Now, this didn't specifically cause us problems insofar as we delivered on our commitments, primarily because the formulae grossly overestimate the amount of resources needed for any given project. Of course, I'm not sure the actuals in our time-tracking system will prove this, because as we know from Parkinson's Law, work expands to fill the time available.
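The comparison itself is trivial to do once you dig out the old sheet. With made-up numbers (purely to illustrate the arithmetic, not our real figures):

```python
# Made-up numbers purely to illustrate the comparison; only the arithmetic matters.
plan = {"Project A": 1200, "Project B": 500, "Project C": 800}   # boilerplate estimates (hours)
actuals = {"Project B": 210}                                     # the one project that happened

dropped = [p for p in plan if p not in actuals]
dropped_pct = len(dropped) / len(plan)
print(round(dropped_pct * 100))            # 67 -- most of the plan never happened

for project, hours in actuals.items():
    print(project, round(plan[project] / hours, 1))   # overestimate factor, ~2.4x here
```

Fifteen minutes with last year's sheet tells you more about the method's accuracy than any amount of arguing in a planning meeting.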

The problem is that going through the exercise this time around, using these numbers uncritically, tells us we're short-staffed. Nothing could be further from the truth. I have many more developers than I need, and many of them end up billing huge amounts of time to "sustainment" activities or training. Since these guys are offshore, I really couldn't say definitively what the hell they're doing half the time. I'd say we're probably 25-40% overstaffed.

But again, that's somewhat incidental to my point for the blog (I do have one!), which is that when you do an exercise like this, you have to look back at how it went the last time. George Santayana made the famous statement that 'Those who do not remember the past are doomed to repeat it'. In general, I prefer Kurt Vonnegut's rebuttal: 'I've got news for Mr. Santayana: we're doomed to repeat the past no matter what. That's what it is to be human'.

In the case of software engineering (and most sciences), you absolutely *have* to go back and look at what went right and what went wrong in order to improve. That's one of the things I think Scrum and Agile absolutely nailed: the idea of iteration/sprint retrospectives. Unfortunately, things like that seem to be the exception rather than the rule. More often than not people don't do this, so we continue to tread a sea of mediocrity in which nothing ever really improves.

Friday, January 15, 2010

TFS Migration Or: There's Still No Silver Bullet

Recently my company started a migration project to move all of our internal tools over to Microsoft Team Foundation Server (TFS). If you're not familiar with TFS, Microsoft bills it as an Application Lifecycle Management (ALM) suite which handles everything from source code control, to bug tracking, to project planning.

Currently, tools and processes vary across teams. My company is the result of mergers, of growth and shrinkage, and the toolset reflects this. Many tools are shared across teams, but there are differences. For example, some teams use HP's QualityCenter for bug tracking, others use Bugzilla, while my group uses an in-house bug-tracking system written years ago and no longer actively maintained.

The promise of TFS is to standardize our processes and tools across all teams and to make life simpler by having a single toolset that can track any given item of work from inception through completion and after.

TFS has been the pet project of one of my fellow development managers within the organization. In fact, he was my boss when I first joined the company. He's the one who has pushed hard for TFS, including bringing a consulting agency onsite to evaluate our tools and see how TFS might help us (funded by Microsoft, it should be noted), putting together the proposal for the company's operating committee, and serving as evangelist for the tools.

He’s also made repeated (and somewhat forced) attempts to show how TFS will revolutionize our lives.

Don't get me wrong, some aspects of TFS seem intriguing. It has some very powerful features. One of the most interesting is its ability to recreate the environment in which a bug occurred: say a tester encounters a bug while testing. He can save the state of the servers on which it occurred, and the developer can, at some point later, actually spin that exact environment back up and step through the code as if it were executing at that moment. That's pretty cool.

As a source control system, TFS seems to be fairly decent. I think it misses the boat in its reliance on a centralized repository (versus the more recent developments in distributed revision control systems), but its notion of discrete changesets, and its management of branches seems to be significantly better than CVS.

But at its core, it's a Microsoft product. And that has some very serious implications. Joel Spolsky's keynote address at the recent Stack Overflow conference was about the tension between keeping things simple and giving people the power and flexibility they need to do what they want. He came to the conclusion that the problem isn't necessarily in giving people too many choices, it's in giving them choices that aren't meaningful to them.

To me, the TFS interface is typical Microsoft: incredibly powerful and flexible, but for your base cases, the stuff you want to be able to do intuitively and easily, it turns something that should be a click or two into a needlessly complex 10-step process.

One example is creating a new task in TFS. So, you have a story in the system and want to break it down into tasks. You’d think that’d be simple. So, you click ‘Create Link’, then you are taken to another menu, where you select the ‘Link type’, of which ‘Task’ is one choice. Then you make a selection of ‘affected by’, ‘affects’, etc.

I don't know about you, but to me this should be a one-click operation. All the abstraction and flexibility is just annoying 95% of the time I'm using it.

But I suspect TFS will be an improvement on what we have currently. The thing is, I don't think it will make that much of a difference. A good toolset (and one can argue over whether TFS is a "good" toolset) is all well and good. But the real problems in software engineering are not the tools but how they're used. Without a significant investment in making our processes simpler, and without people who apply those processes with critical thought rather than blindly following some set number of steps, no toolset will make things more efficient.

And even if TFS is the greatest toolset in the world, go back to Fred Brooks' seminal 'No Silver Bullet' article. Such systems only address the accidental complexity of software engineering, not the essential complexity. As such, they cannot produce order-of-magnitude increases in productivity.

To me, the money that has been spent on TFS (which is not insignificant) would have been far better spent bringing in experts to look at our processes and our team organization, and at reducing the bureaucratic processes that waste so much money and time, than on some shiny new toy.

Monday, January 4, 2010

Business Viability Or: Should I Open Source ZPlanner?


I've now been working on my little project-tracking application, ZPlanner, for nearly a year. It's certainly not been a full-time effort; rather, I work on it here and there when time allows and when my attention doesn't drift elsewhere (to blogging, the book I've been working on, and other random stuff), but I've made reasonable progress on it. It's actually getting reasonably close to being *usable*. Mind you, it's nothing revolutionary. It's just a simple web application, but nonetheless I'm fairly happy with where things are at.


I started working on ZPlanner because I thought I was going to be laid off from my job as a development manager around last March. Due to a reorg, my job was promised to someone they'd relocated from the East coast offices, which left me without a position. The job market at that time was so horrible that I felt I'd be lucky to find a position as a developer somewhere, so I figured I should brush up a bit on my programming skills, seeing how little programming I do for my job these days. Luckily, however, I was able to snag another position within the company--a better one, to boot. But I kept working on ZPlanner anyway, as I think there's a need for a simple project-tracking application.

I've toyed with the idea of actually trying to sell it--at an affordable price--as the only options for project tracking are either free (and kind of crappy) or fairly expensive. But to be honest, I'm not sure how viable it is as a product. There are a lot of entrants in the market, even more if you consider that many software stacks that label themselves "bug trackers", such as Jira, have also incorporated project-planning features.

But the idea of starting up my own company, even if it barely made any money, is incredibly alluring. Like many developers, I find the idea of being in charge of things and running a company how *I* think it should be run an exciting prospect. And in the absence of any better product, I figured, why not try selling ZPlanner?

But the thing is, over the winter break, I had what I think is a really good idea for a piece of software. I won't go much into the details, but from what I can tell from some brief Googling, it's a niche that has not yet been solved, has the potential to be sold to larger companies and deliver measurable increases in profit and efficiency, and is a problem which would not be trivial to solve. In fact, I thought of the idea because the lack of it caused me considerable frustration. I was dealing with a company I would have assumed had software like what I'm describing, but they did not, and as a result I ended up being one very pissed-off customer. I've also found some theoretical whitepapers on the subject, but very little in the way of concrete implementations. I think the potential is pretty large.

Which brings me back to ZPlanner. The product I'm describing above will take quite some effort to implement, and I'm anxious to get started. But I really want to polish off ZPlanner first, and I've decided I will. But then the question becomes: do I still try to sell it? Or should I just open source the thing? If there really is no market for "yet another project tracking tool", maybe it's better to open source it. ZPlanner had its genesis in the failings of XPlanner, another open-source project-tracking tool. Despite its failings, however, XPlanner has seen wide usage, and even though it seems fairly moribund, with the last release almost three years ago, there are still a ton of people using it...because it's free.

Even if it didn't profit me directly, the thought that ZPlanner might replace XPlanner as the open-source, free tool of choice for project tracking is pretty cool, and it might indirectly profit me in other areas. It's hard, given that I've sunk somewhere between 300 and 400 hours into it at this point, to just give it away. Then again, I certainly make use of a lot of open-source stuff, and maybe it's time to give back.

I don't know. I guess we'll see what I decide.

Thursday, December 10, 2009

ZPlanner Technology Brief: Maven2

A few weeks ago, I devoted a blog to talking about Mercurial, the distributed revision control system I decided to use for ZPlanner, my Agile project tracking tool. This week, I figured I'd briefly talk about one of the other choices I made, namely to use Maven2.

For most of my career as a hands-on developer, I was trapped in the world of Perl and Apache. Our proprietary system didn't require doing builds. You just created a new file, put it in an appropriate directory, and voila, it worked. All of the libraries we used were global to our servers, and installation of new packages (from CPAN) was a rare thing. When it did occur, it usually just meant shooting a request to one of our admins and asking him to install it. So, for all intents and purposes, we never really thought much about dependency management or builds or any of that stuff.

A bit later, I was assigned as technical lead to a new, large scale (for our company) project and we decided, for various reasons, to use Java. Immediately, we had to start thinking about how we'd do our builds. Given that I was the first person to start coding and was relatively inexperienced with Java at that time, I didn't think about it very much. I started coding in Eclipse, created a directory structure that I thought was reasonable and did builds solely via Eclipse. Dependencies were managed by plopping a jar in the lib directory, then clicking 'Add to classpath'. I didn't think about it much after that. It seemed to work.

When we brought other, more experienced Java developers on board, they immediately started talking about Ant. My boss (whom I still regard as the most brilliant programmer I've ever worked with) steadfastly maintained that dependencies and builds should be managed via Eclipse; that to use Ant (or Maven) was a duplication of what we'd be doing in Eclipse anyway. Why did we need to do builds via the command line? We should be able to build directly in Eclipse, which would produce an artifact (in the case of the service I was writing, a jar), deploy it any which way we wanted, and create simple scripts to set the classpath when we ran it. Why muck around with byzantine Ant files when Eclipse managed all of this for us? And I really do think he had a good point.

Eventually, though, we did end up with an Ant file. I can't recall if it was for any great reason. Thankfully, due primarily to the efforts of my boss, who whittled the initial file one of our contractors created down to about a third of its size, it wasn't too hard to understand. Most Ant files are not so nice. I've seen more than my share of horribly long, incomprehensible Ant files that copy files arbitrarily from one location to another, rename directories, and do all sorts of other crazy, random stuff.

And the thing is, when you start creating an Ant file, you almost invariably end up reinventing the wheel. Deploying a jar or a war generally consists of pretty much the same steps every time. But because someone decided to structure the project a little differently than the last one, or made some other random decision, there are all these one-off, unique things that the Ant file has to do.

And Ant *only* really manages builds--and maybe starting up your app. If you're doing a build in Eclipse, you'll still have to go into the lib directory and make sure all the jars you need are on the classpath. Some may be assumed to be provided by your web container at runtime, but Eclipse doesn't know about those, so make sure you go and add those to the classpath too. It just ends up being a pain in the ass.

And that's where Maven comes in. Instead of reinventing the wheel every time you start a project, you agree to some conventions. You'll always structure your directories in a certain, canonical way. This is facilitated by Maven's notion of archetypes. Archetypes are basically just templates for how projects are structured. They're available for most of the common (and even uncommon) types of applications you're likely to build.
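For instance, generating a new project skeleton from the standard quickstart archetype is a one-liner (the groupId and artifactId here are placeholders, not ZPlanner's actual coordinates):

```shell
# Generates the canonical layout: pom.xml, src/main/java, src/test/java
mvn archetype:generate \
    -DgroupId=com.example.zplanner \
    -DartifactId=zplanner \
    -DarchetypeArtifactId=maven-archetype-quickstart \
    -DinteractiveMode=false
```

From that point on, `mvn compile`, `mvn test`, and `mvn package` all just work, because the layout is what Maven expects.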

In return for this compliance with convention (with which you may or may not totally agree), Maven provides the basic things you always need--build, deploy, run tests--without your having to hand-code them. So, in my mind, it's already got Ant beat at this point. You don't have to tell it to copy all the jars in your lib into some other dir, or copy some random config files you shoved in the 'etc' directory into other directories. But then Maven adds something much, much more useful: dependency management.

Too often I've checked out some Java project, added all the jars in the lib path to the classpath, then tried to do a build...and stacktrace.

Crap, okay, so this lib I added, has a dependency on another lib. Time to go out on Google and search around. Then you download that, add it to the classpath, then blam.

Dammit, there's some other dependency. And through trial and error (and a bunch of time), you eventually figure out what you were missing to start with. Often, these are libraries that are only needed for compilation, or they're assumed to be provided by your web container, so they haven't been included in lib. But you need them to do anything. And then you have the problem that quite likely none of your jars are versioned.

Okay, I have hibernate.jar from two years ago that has no version information. What the hell version is it? Can I upgrade it or will that break everything? These are all lovely questions to wrestle with.

Instead, Maven manages this stuff for you. The POM (Project Object Model) file you write, instead of Ant's build.xml, is not a procedural set of steps. Maven knows about doing builds via plugins (which I won't go into here), so really all you end up needing to declare is which plugins you use (i.e. what types of activities you're performing, rather than the explicit steps you'd have to spell out in Ant) and which dependencies your project relies on. You can even notate that a certain dependency is only necessary for the build step or the test step. If one of the libraries you're using requires some other jars of which you were unaware, Maven will figure this out and download them without you ever having to muck around.
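As a sketch (the version numbers are illustrative, not a recommendation), declaring a dependency in the POM is just a few lines, and scoping one to tests keeps it off your runtime classpath:

```xml
<dependencies>
  <!-- Fetched from the repository along with its own transitive dependencies -->
  <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>3.3.2.GA</version>
  </dependency>
  <!-- scope=test: only on the classpath when compiling and running tests -->
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.5</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```

Compare that to hand-copying jars into lib and hoping you guessed the transitive dependencies right.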

And you can specify that *only* version 1.1.2 of a library should be used. You explicitly tell it what version to use, so later on down the line you can easily decide whether it's safe to move to a new version. If you use the Maven2 Eclipse plugin, you can even reuse the POM to set your classpath, so that whole laborious process of adding jar after jar to your classpath...you don't need to do that. Just check out the code and bam, ready to run.

Of course, a lot of this happens by magic. But magic, to paraphrase Arthur C. Clarke, just means something is going on that is more advanced than you understand. And recently I've seen some complaints that when you have to delve into the magic, things can get messy.

But while using it on ZPlanner, I've not encountered any real problems. A few little hiccups here and there, but honestly, I can't imagine going back to futzing around with 200-line build.xmls for Ant, or not having the dependency information readily available so I can make intelligent decisions about what my project should or shouldn't be relying on.

If you haven't played with Maven before, I highly recommend taking a look at the excellent online Maven book, available freely from Sonatype. It clearly explains the basic concepts of Maven, and it let me get my project up and running with Maven in a matter of hours.

Thursday, December 3, 2009

Timetracking, Funny Math, and the Accountants

I joined my current company about 2.5 years ago, and since then the one constant has been my frustration with how inefficient the company is at times. Examples include an overreliance on obscenely expensive contractors (the first team of five developers I managed cost somewhere around one million dollars a year and contained no full-time employees), endless meetings in which nothing is resolved, and various hopelessly complicated processes that amount to nothing more than busywork.

Of course, when the company was bought out a year ago and a new executive team was brought onboard, they immediately began trying to reduce costs. They jettisoned the 70%-contractor workforce (something a few of us had been complaining about for years), moved most of the real development offshore, and made cuts in various departments across the board.

But they're not done. And now, apparently, some of them have turned their gaze on our time tracking system.

Now, I've always seen corporate time tracking as a curious thing. It is almost always mandated by the accountants and managers so they can get a handle on cost. Like many things, it's an abstraction, and in every implementation I've seen, it's an abstraction meant to give the financial people the information they want.

For the people doing the real work, meaning development and QA, the abstraction is generally far less meaningful. They don't see it as something valuable, because financials and project codes aren't how they think about their lives. They think about the project or task they're doing. Sometimes these overlap, but just as often they don't.

Of course, I firmly believe in tracking estimates, how close actuals are to estimates, and so on. That's a big part of what I'm trying to tackle with ZPlanner. The difference, though, is that in tools like ZPlanner, the numbers come from the actual tasks and work that people have to do. Entering time in such a system gives them value, because they can look at a burndown and see how much work remains for their team, and they can look at how many hours and tasks are assigned to them and see what's left.
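That feedback loop is trivial to compute once time entries hang off real tasks. A toy sketch (the field names here are my invention for illustration, not ZPlanner's actual schema):

```python
# Toy model: the remaining work comes straight from the tasks people update daily.
tasks = [
    {"name": "Write login page", "estimate": 8, "remaining": 3},
    {"name": "Wire up iteration view", "estimate": 5, "remaining": 5},
    {"name": "Review CSS", "estimate": 2, "remaining": 0},
]

remaining = sum(t["remaining"] for t in tasks)        # the number a burndown plots each day
open_tasks = [t["name"] for t in tasks if t["remaining"] > 0]
print(remaining)   # 8
print(open_tasks)  # ['Write login page', 'Wire up iteration view']
```

The developer updating "remaining" gets an immediate answer to "what's left for me?", which is exactly what a corporate timecard never tells him.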

Time-tracking systems don't give any of this valuable feedback to a developer, so people enter their time because they have to. As such, its validity is very questionable. I have only to open up the manager's interface in our time-tracking system and see timecard after timecard filled in with exactly 8 hours every day to know this is not "real" data. It's constructed for my benefit and the benefit of the accountants. My reports fill it out because if they don't, I send them an email chiding them for not having entered their time--not because they really *care* about it.

When people are doing something just because they have to, more often than not they do a half-assed job. That's just human nature. If you have a problem with that, take it up with evolution.

And the more complicated you make the rules, and the more difficult you make the task, the more carelessly it will be performed. Which brings us back to the attempts of our executives to make our company more efficient.

For 2010, we now have a series of new rules for entering time. The two most impactful changes are:
  • Every time card must have 8 hours entered per day
  • Buckets for management will be eliminated, which means all time must be charged against a specific project

Currently, when I have to file a timecard, I really try my utmost to charge the time I spent to the appropriate project. But frequently there will be a big chunk of time on any given day for which I can't really account. The time was usually spent in random conversations about our technology or other ongoing projects, answering email, and so on. It comes in such small, discrete portions, though, that it's impossible to recollect exactly how it was spent. Currently, I log this to the "management" bucket. Now I will have to charge it against a project. Most likely this means I'll spread it evenly across all the projects I oversee, which is everything in my division. Trying to count it accurately would take 50% of my overall time and would be an exercise in futility.

Even the time I spend with my reports in one-on-ones needs to be charged to a project, I'm told. Since I no longer have a 'management' code against which to charge it, it will likely go to whatever project that particular person is assigned to. The fact that I probably spent the time discussing stuff having nothing to do with the project is immaterial. As flawed and inaccurate as the data is now, I'm pretty convinced these rules will cause it to be even more skewed.

My initial reaction was that this was a terrible thing. I thought of it in an absolute sense: time logged should correspond to reality, and that's the only way the data can lead to sane decisions. I thought the intention was to log time as accurately as possible, but I now understand that's not really the case.

Last night, I brought up my concerns about the new time-tracking rules to a VP. He explained that the intention was to either pass along the cost or use these tools to find inefficiency. I told him I thought it was dumb to mandate that everyone log a minimum of 8 hours per day. There are probably days they work less than that. And I guarantee you, no one is "productive" eight hours out of the day at this company. If they work 8 hours, they take a coffee break here, discuss some random thing with their colleagues that's completely non-work-related, and so on.

He countered that the new system would serve two goals: to make sure the cost is passed along to the customer, and to make sure people are being efficient. He said that right now buckets like "Management" allow people to hide things. If they try to just amortize this time across their projects, the people in charge of those projects will see through it.

But that presumes people know exactly (or even remotely) how long "managerial" stuff takes. I don't think the problem in our company is (primarily) inefficiency on the part of the developers or QA or anyone doing real work. It's the amount of managerial overhead. I can't imagine what requires my team to have six project managers for the number of projects we have. Often a project manager has only one or two assigned projects. How can there be *that* much to manage? I just don't buy it.

The VP's contention is that this will come out in the numbers. I suspect something different, however. People will learn other ways to game the system, and because the data will now be even less accurate (if that's possible) than it is today, the decisions the executives make based upon it will be ever more divorced from reality.

But maybe it all works out. I don't know. This VP did bring up some good points, insofar as "reality" doesn't matter if the cost is passed along appropriately to the customer and the margins are there. I guess my incredulity is because I don't have an MBA. Reality doesn't matter.

But I like to think it does. And the *real* way to answer these questions, to make sure that costs are calculated appropriately, is to start with how the estimates are generated. To track them in a real tool that captures how much real work is done and how much "management" is logged. To ask hard questions, based on empirical observation, about why so much *management* time is needed, and in some cases to cut the cord and see what happens.

Managers will always justify their existence. They create work for each other. It's not out of ill will; it's just their nature. There's a great quote from C. Northcote Parkinson, after whom Parkinson's Law is named, which goes:

"But the day came when the air vice marshal went on leave. Shortly afterwards, as it happened, the colonel fell sick. The wing commander was attending a course, and I found I was the group. And I also found that, while the work had lessened as each of my superiors had disappeared, by the time it came to me, there was nothing to do at all. There never had been anything to do. We'd been making work for each other."

So to me using a tool that most people don't care about, with an abstraction meaningful only to a few who aren't actually doing any of the "real" work, and which the smart people will find simple ways to game is not the way to fix these problems.

But what the hell do I know, I don't have an MBA.

Tuesday, November 17, 2009

ReviewBoard Woes Or: Build vs. Buy

A couple weeks ago I wrote a bit about ReviewBoard, an open-source web application for doing code reviews. I now have it up and running and have introduced it to my team. Hopefully, over the next weeks and months we'll find it a useful tool for reviewing code.

But it certainly took more than a little work to get it up and running on Windows. I had to install Apache, MySQL, Python, Django, and any number of Python packages. Add to that the time I spent fiddling with Apache. Of course, I got it working on my own desktop fairly easily, but for some reason when I put it on a dedicated box and started trying to run Apache as a service, the program was having none of it.

My first issue was that even though I'd configured the thing nearly identically to my successful desktop installation, it wasn't quite working: even though ReviewBoard said I'd successfully added my CVS repository (which is required to submit reviews), the dropdown displaying the repositories against which it would check submitted reviews would not populate.

Of course, I did the requisite Googling, but apparently no one else had had the problem.

Soon enough, I was looking at Apache logs, had started opening up the Python code (and mind you, I don't know Python), and eventually descended into adding print statements to the source that wrote to the Apache log.

Ultimately, I traced the problem to an included library. For each supported SCM (CVS, SVN, etc.), there is a file in ReviewBoard that manages the interactions. Before populating the dropdown, the appropriate library makes sure it can run the commands it needs to interact with the repository. To do this it uses an included Djblets library, which has a function called is_exe_on_path. This function is passed a string literal representing the command to be invoked, in my case 'cvs'. The function then appends '.exe' if the app is running on Windows and checks that the file is on the path. In my case, this resulted in a check for a file named 'cvs.exe'.
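To make the failure concrete, here's a minimal sketch of how a check like that behaves. This is my own reconstruction, not the actual Djblets code, but it captures the logic described above:

```python
import os

def is_exe_on_path(name):
    """Sketch of a PATH-based executable check like the one described
    above (not the actual Djblets implementation). On Windows, '.exe'
    is appended before the lookup -- which is exactly why a TortoiseCVS
    install without cvs.exe fails the check, even though typing 'cvs'
    at a command prompt works fine.
    """
    if os.name == 'nt':
        name += '.exe'
    for directory in os.environ.get('PATH', '').split(os.pathsep):
        if os.path.isfile(os.path.join(directory, name)):
            return True
    return False
```

The check looks only for a literal file on the PATH; any command that's resolvable by the shell through some other mechanism (a registry entry, an alias, a DLL shim) is invisible to it.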

The problem is that the newest version of TortoiseCVS NO LONGER HAS A FILE CALLED cvs.exe. Oh, it used to...and that's why I didn't have the problem on my desktop, where I'd used an older version. But on my new computer I'd installed the latest version of TortoiseCVS, which has replaced cvs.exe with a file called TortoiseAct.exe. How it's invoked on the command line when one types 'cvs' I don't know (I think via a registry entry and DLL), but ReviewBoard was having none of it.

So, I made a copy of the stupid TortoiseAct.exe, renamed it 'cvs.exe', and hey, presto, the dropdown was working. I left off for a while and started playing with some other applications for my new tools box, assuming everything had been resolved. In fact, I subsequently ended up installing an older version of TortoiseCVS, as the newer version didn't work with Hudson, the Java-based continuous integration server I also put on the box. So ultimately all of my effort would have been unnecessary had I installed Hudson first.

After I’d finished setting up Hudson and prior to rolling out ReviewBoard to the team, I decided I should do a quick run through. So, I made a file diff (which is how one submits a review) and clicked 'Upload' and waited.

And waited. The thing just sat there hanging frozen on the submission page.

I quickly descended back into opening Python source and adding print statements in various ReviewBoard libraries until I found the line it was hanging on: the point where ReviewBoard tried to invoke a 'cvs up' command.

Okay, it's CVS again. Great.

But here there was no apparent problem. I spent a few hours looking at environment variables and making sure all my batch files were readable by the default Windows user (which is who Windows services run as by default). But nothing worked. I could run the same cvs command the program was running just fine myself on the command line, though. It made no sense.

Finally, I set Apache to run not as the Default User in Windows but as *me*. Hey, presto, it worked.

I tried copying over my ssh.bat and ssh keys into the Default User home directory, tried all sorts of random stuff, but nothing worked. Ultimately, I created a new local user on Windows, gave him administrator rights, and logged into the box to start Apache while actually logged in as the user.

It still hung.

So, I tried running the cvs command myself while logged in as my new user. And a prompt popped up saying the host needed to be added to the 'known hosts' file. I clicked 'Yes' and suddenly the thing was working. Goddammit! It was because the known hosts file was local to my user account, and the Default User's hadn't had the IP of the CVS server added yet.
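In hindsight the hang is easy to reproduce in miniature: a child process that wants to prompt (as ssh does when a host isn't in known_hosts) blocks forever when run from a service with no console to answer it. A sketch of a more defensive way to invoke such commands, with a hypothetical 'cvs up' as the example (this is my own illustration, not ReviewBoard code):

```python
import subprocess

def run_scm_command(args, timeout=30):
    """Run an SCM command non-interactively -- a sketch, not ReviewBoard
    code. Closing stdin and setting a timeout turns 'the service hangs
    forever on a hidden prompt' into a quick, visible failure.
    """
    return subprocess.run(
        args,                        # e.g. ['cvs', 'up'] (hypothetical)
        stdin=subprocess.DEVNULL,    # nothing for a prompt to read from
        capture_output=True,
        text=True,
        timeout=timeout,             # raises TimeoutExpired instead of hanging
    )
```

Had the original invocation failed fast like this, the stderr output would have pointed at ssh and known_hosts in minutes instead of hours.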

Of course, you may be asking what the point of all this blathering is. Well, I got into a conversation with my co-worker, Jim, during the middle of all of this about the age old 'build vs. buy' question.

There was a very definite cost to all this mucking around. I probably spent the better part of two days trying to get the goddamn thing to work. I don't blame the ReviewBoard people for this. They used an existing library, and the people who wrote that library weren't expecting the application used for CVS access not to have a file called 'cvs.exe' (I still think it's dumb code). Why did TortoiseCVS get rid of the cvs.exe file? Seems stupid to me. But again, it's all random decisions by unrelated parties that make installing a piece of software a potential pain.

The argument could be made that I could have bought SmartBear's code review tool and been spared all this pain. I don't know that that's true. It probably is. Of course, I see value in the time I spent. Coming out the other end of all this, I'd learned quite a bit about setting up Apache, I'd learned a tiny bit of Python and Django, and I knew exactly how the software was set up. But again, maybe it would have been easier to just buy something that *worked*.

Jim mentioned that he'd had a real reversal of the opinion he held years ago: that often it's better to just shell out the money for something that works, so you don't waste time or end up using mediocre software...just because it's free. And he also mentioned that, as someone who writes software for a living, he believes in the value of software. That good software is worth paying for. I think that's a noble sentiment. Jim is one of the most principled people I know, meaning he really thinks about his beliefs and he acts in accordance with them at all times.

I'm much more of a hypocrite, full of contradictions, and while I write software that I hope to charge people for one day, I mutter, 'No goddamn way I'm paying for this!' But there is a cost to being cheap.

I think I do agree with Jim in that if at the end, you're using shitty software just because it's free, you've made a mistake. By the same token, I think it's a huge mistake to assume that because you've been charged for something that it has value.

Is JBoss that much better than Jetty? MyEclipse vs Eclipse? SmartBear CodeReview better than ReviewBoard? Maybe.

But I think you have to put some time in and choose where you want to spend money. Look at the cost/benefit carefully. And do realize that in cases such as my ReviewBoard installation, sure I did spend a bunch of my time. But I also learned a hell of a lot. And I see definite value in that. That said, if I tried CodeReview and it was miles and miles better than anything open source, I'd like to think I'd pony up the cash.

In my case, working for a company where any purchase requires a VP's approval just means such things will languish in purgatory forever, whereas I already have ReviewBoard up and running and am ready to roll it out to my team.

Of course, my company has no problem spending $20,000 on an installation of TFS (Team Foundation Server), which has been presented as the second coming. Yet I wonder if people have really examined all the inefficiencies in our processes, rather than assuming Microsoft will solve all our problems. It's all a very interesting subject, but that's all I have time for...for now.