Archive for the 'Software Development' Category

Pragmatic approach to programming

Monday, February 26th, 2007

I recently had to move my rather extensive library of tech books. In doing so, I marveled at how lopsided my library was. In ten-plus years of development experience, I've worked on a large number of projects: lots of Java, lots of Perl, lots of C++, some PHP, and currently Python and Ruby. My library does not reflect what I've worked on, nor does it really reflect the topics I worked on the longest. Why do I have four books on Ruby on Rails, yet only one on Python? I use Python in my day job, and Rails is for my personal projects. There's plenty of documentation out there explaining how to use Rails… why four books, then? One answer is that I discovered the Pragmatic Programmers. The books from this publisher have been some of the best and most rewarding technical books I have read. The writing style is light, but not overly so. The techniques are described in a logical order. Time isn't wasted explaining things over and over. Non-trivial examples are used to reinforce the subject matter and display the power of what is being taught.

All in all, the Pragmatic Programmers live up to their name. They teach pragmatic techniques for software development. They teach you what you need to know to build a solid foundation and develop good habits. From there, you learn enough to get the job done. By the end of each book you may not be an expert, but you have a solid understanding of what is going on and how to apply it to your projects. If you haven't discovered these fantastic books, I strongly suggest you put them on your list.


Leveraging Mindshare

Tuesday, November 7th, 2006

For quite some time, a good friend of mine has been pestering me to learn the programming language Haskell. He goes on and on about the virtues of functional programming and the various nifty things the type system will allow. He thoroughly enjoys using it and, enticed by his enthusiasm, I figured I'd give it a shot. After three weeks, I was as lost as ever. I consider myself a reasonably intelligent human being. I have been writing software for more than ten years, and I thought I knew what I was doing. But trying to learn Haskell on my own, with only the web and a few tips from my friend, left me scratching my head. I sort of understood the concepts the various tutorials were trying to teach, but the syntax of the language is so different that I just couldn't see the relation to anything familiar. Upon discussing this with my friend, he disclosed that it took him the better part of a decade to really master Haskell, though he could have done it in far less time had he been using it full time. Well, that may be my problem too. Poking at the language for an hour here and two hours there just isn't enough for me to really grasp the relationship between Haskell and the various software engineering concepts I've used in the course of my professional life. Until I can put in that kind of time, all the power and beauty of Haskell will remain closed to me.

Another experience I had was when our company was looking to move off of CVS as our version control system. Someone decided that TLA, an implementation of the GNU arch VCS, would make a good replacement. It provided all the theoretical solutions to our problems with CVS. Sure, the interface was a bit raw, but we could work around that, right? Our needs are interesting. We have several hundred projects, all of which were written by someone else. Most of these projects need to have patches applied, and we need a system to track which patches go with which versions of which projects. In CVS land, it was a branch/merge nightmare. In TLA, it was less so. However, there are a few projects that are written and maintained by us, one of which is mine. I have a small team of three senior engineers, and getting us all to use TLA instead of CVS was about twice as painful as a trip to the dentist the day after he ran out of nitrous. The base concepts may have been the same, but the entire interface abandoned anything remotely akin to CVS. In fact, this was a deliberate decision by the developers of TLA: they wanted a fresh start and threw away all convention. Once again, until we could redesign our development process, the beauty of TLA was closed to us. We just couldn't apply our old processes to the new system.

Both of these experiences highlighted something very important for me. No matter how cool or revolutionary something is, if a person can't leverage their existing experience and skills when using it, then that thing is closed to them. It remains useless until they can understand it. Plenty of successes show this kind of leveraging at work: email, C#, Java, cell phones, and countless others. This isn't to say that a revolutionary thing is useless, but it will fail to gain mass adoption if it requires people to abandon all or most of their previous concepts and start from scratch. Given that, it is important to put yourself in the user's mind when looking at something new that you've created. Does it have some familiar concept that will let the user grasp what you're doing, or will it appear completely foreign? Mindshare is a valuable thing. It is how we stand on the shoulders of giants. Starting over is expensive.

Personal Webtops

Wednesday, February 15th, 2006

Recently I became interested in really customizing my Google homepage. Not just having a bunch of news feeds and the weather, but really making a page that does things. So I looked at some of the tasks I do frequently. Well, I do check the news… ok, so keep one of the news feeds. I read and write email… ok, so I need to find a way to track all my email accounts, not just my Gmail account. I chat online… hmm, I wonder if there's something I can do about that. I also write software… not sure if that can be put on my homepage, yet.

Ok, so there are a few things that I do regularly. Well, it turns out that the Google personalized homepage is quite similar to the other personalized start pages out there. They all use AJAX techniques to make rich web UIs. So if there is a web service, or a simple UI task that can be done, then a custom JavaScript widget can be written to accomplish it. It can be anything from a game, to a calculator, to eventually a full office suite. (There are already a couple of word processors, such as Zoho Writer.)

So why is this interesting? Well, for one… imagine not losing your files when your computer crashes. Imagine not having to worry about computer viruses taking over your PC. Imagine not having to drag that dang laptop with you when you go to the in-laws' house just so you can check email. If you can get to the web, you can get to your files. You can get to your desktop. Your PC doesn't need to run anything but a browser. It doesn't need a hard drive to hold any files. No hard drive, no viruses (well, almost).

So what’s my part in all of this? Well, I’m starting to code my own webtop apps. I’ve got a calculator already. I’m looking into creating a service for using AIM via my webtop. And in the future, I’d like to have an IDE for my development work.

So what will all this mean? I dunno, but by the looks of it, we’ll be able to have our desktop no matter where we are. Perhaps we can have a music player that ties into iTunes or another music library. Maybe we’ll find a use for that stupid “Active Desktop” feature that nobody has actually turned on for the last 10 years. In any case, we’ll see new and possibly useful ways of interacting with the web.

The Difficulty of High Volume Servers

Wednesday, October 5th, 2005

Well, it has been far too long since I've posted anything. My apologies. For the last several months, I've been working on some interesting projects in the IPTV (Internet Protocol Television) world. In particular, I've been working on a Real-Time Encryption Server (RTES) for broadcast-quality MPEG streams. A typical broadcast stream runs about 3.5-4.0 Mbps. That's a good chunk of data, and to encrypt it in real time you need a fairly lean pipeline. No surprises there. The trick is getting a server to handle several of these streams concurrently. When I started on this project, the current RTES was capable of handling 20 streams, or channels. For a typical TV service, such as your local cable company, this means that between three and ten servers are needed at the head-end to encrypt all of the channels. That becomes very expensive once you take into account the need for redundant server hardware, gigabit routers to manage everything, and so forth. Besides, when you look at the numbers, 20 channels at 4 Mbps is really only 80 Mbps of data. A lot, certainly, but nowhere near the limit for gigabit Ethernet, and only approaching the limit for fast Ethernet.

So why could we only handle 20 channels? Well, it turns out that the previous server was starved for CPU because between two and four threads were being allocated for each channel. At 20 channels, that's 40 to 80 threads, and the server was falling under the crushing burden of context switches between all of them.

Well, here was a nice chance to dive in and see what we could do. There happens to be a nice little system call for querying multiple sockets for data: select(). Sure, it has its problems, but for the most part select() is quite good at polling a group of sockets to see if there's data waiting on them.
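
To make the idea concrete, here is a minimal sketch in Python (the production server isn't written in Python, and the ports and buffer size are purely illustrative) of polling a handful of UDP sockets with select() and reading from whichever ones are ready:

    import select
    import socket

    # One UDP socket per channel; the ports are made up for illustration.
    sockets = []
    for port in (5000, 5001, 5002):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", port))
        sockets.append(s)

    while True:
        # Block until at least one socket has data waiting.
        readable, _, _ = select.select(sockets, [], [])
        for s in readable:
            data, addr = s.recvfrom(65536)
            # ...hand `data` to the encryption pipeline here...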

After a bit of rewriting, we got RTES to use select() to query each socket and notify an object when a given socket was ready for reading. At that point, a small thread pool was used to execute the encryption pipeline on that socket. This tripled our capacity: we can now handle 60 channels. An impressive gain, but the story doesn't end there. With this new design, RTES is very sensitive to buffer overruns in the network driver. When one happens, flow control is triggered and all threads block on IO (or return a "wouldblock" error, in our case). Once flow control is triggered, no sockets can be read until the single thread currently handling the flow finishes all of its IO. By the time that's done, everything has fallen so hopelessly behind that flow control is triggered on the next socket read, and the next, and so on. I describe it as the kid on a skateboard holding on to the bumper of a bus: once he lets go, he can't ever catch up, and he falls on his face. But I digress.
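
Continuing the sketch above, the dispatch half of the design might look something like this. The encrypt_pipeline function and the pool size are stand-ins, and a real implementation would also have to stop polling a socket while a worker is still draining it:

    import select
    from concurrent.futures import ThreadPoolExecutor

    pool = ThreadPoolExecutor(max_workers=8)  # small, fixed-size pool

    def encrypt_pipeline(sock):
        """Stand-in for the real encryption pipeline."""
        data, _ = sock.recvfrom(65536)
        # ...encrypt and forward `data`...

    def serve(sockets):
        while True:
            readable, _, _ = select.select(sockets, [], [])
            for s in readable:
                # Notify a worker that this socket is ready; the poller
                # goes straight back to select() instead of doing the
                # heavy lifting itself.
                pool.submit(encrypt_pipeline, s)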

How do we avoid the flow control problem? Since this is all functionality in the network driver, we can't really tweak much, but we can drastically increase buffer sizes so that flow control is triggered far less often. This helps quite a bit, but it really just pushes the problem farther away; it isn't a real resolution.
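
At the socket level, that tuning looks roughly like this; the 4 MB figure is made up, and on Linux the kernel caps what you can request (see net.core.rmem_max):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Request a much larger kernel receive buffer so traffic bursts are
    # absorbed instead of overflowing and triggering flow control.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
    # The kernel may clamp the request; check what we actually got.
    print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))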

Well, I went digging for other designs that could be used. A buddy here at work pointed me at a paper written by Matt Welsh, David Culler, and Eric Brewer at UC Berkeley. They describe a modular architecture that they call SEDA (Staged Event-Driven Architecture). It's a fairly interesting read, and strangely enough, right around the time they came up with it back in 2001, a few friends of mine and I were working on a generic data processing engine, for an outfit called Create-A-Check, using a very similar architecture.

So what is SEDA? Basically, it is a collection of objects, called stages, each of which handles a small component of processing. A web server's stage list might look like this:

  1. Listen for connections
  2. Read request
  3. Return cached static page
  4. Fetch static page and add to cache
  5. Fetch dynamic page
Each stage has a small queue at its front that holds incoming events. The stage acts on each event, then forwards it to the next stage. Execution looks like a tree traversal: stage 1 to stage 2 to stage 3 for a cached page, or stage 1 to stage 2 to stage 5 for a dynamic page, and so on.
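
Here's a toy version of a stage, sticking with Python for the sketch; the handlers and wiring are invented for illustration, not taken from the SEDA paper:

    import queue
    import threading
    import time

    class Stage:
        """A SEDA-style stage: an incoming event queue plus a handler
        thread that processes events and forwards them downstream."""
        def __init__(self, name, handler):
            self.name = name
            self.events = queue.Queue()
            self.handler = handler
            threading.Thread(target=self._run, daemon=True).start()

        def _run(self):
            while True:
                self.handler(self.events.get())

        def enqueue(self, event):
            self.events.put(event)

    # Wire up a tiny two-stage pipeline.
    def read_request(event):
        event["request"] = event["raw"].decode()
        respond.enqueue(event)  # forward to the next stage

    def send_response(event):
        print("responding to", event["request"])

    respond = Stage("respond", send_response)
    reader = Stage("read", read_request)
    reader.enqueue({"raw": b"GET /index.html"})
    time.sleep(0.1)  # give the daemon worker threads a moment to run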

So what are the advantages of this modular architecture? For one, you can dynamically adjust your pipeline based on things like server load. For example, if a server is getting slammed, it can start rejecting dynamic page requests and just handle static ones. Perhaps it can redirect requests that would require a cache refresh, or insert a new stage that returns a global "Service is Down" page for all requests. Business rules could be applied in a much more fine-grained fashion, and they would be easier to change.
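
That load-shedding idea can be sketched as a simple check at a stage boundary; the threshold and names here are made up:

    import queue

    OVERLOAD_DEPTH = 100  # illustrative threshold

    def enqueue_or_shed(stage_queue, event, fallback):
        """Admission control between stages: if the next stage's queue
        is too deep, short-circuit with a cheap response instead of
        letting work pile up."""
        if stage_queue.qsize() >= OVERLOAD_DEPTH:
            fallback(event)  # e.g. return a "Service is Down" page
        else:
            stage_queue.put(event)

    def service_down(event):
        print("503: Service is Down for", event["path"])

    dynamic_queue = queue.Queue()
    enqueue_or_shed(dynamic_queue, {"path": "/report"}, service_down)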

So now we’re looking at ways that SEDA could possibly help us solve our problems with high-volume multicast problems that RTES is facing. We are also looking at applying this model to our key request/certificate generation/verification servers. SEDA seems like a perfect fit for them.

In all of this, I've learned a couple of things. When it comes to high-performance servers, it's not the first 90% of capacity that's the hard part; it's the last 10%. Everything gets a bit blurry, and things can degrade very quickly and unexpectedly. Handling load gracefully is a fun trick, and there's no real silver bullet. As computing progresses and high-volume servers become more and more prevalent, new techniques will surely develop. It should be a fun ride.