We code so you don't have to

Peeking into the minds of users.

September 29th, 2014

No matter where you start, users will always interpret your UI differently than you intended. Even the most intuitive interface in the world gets misused. Often the confusion leads to frustration, though occasionally it leads to increased utility as the user goes about using your tool for something you hadn’t considered before.

At the end of the day, you need to peer into the minds of your users to see what they are doing, so you can shape your interface to provide better value. One tool that I use from time to time is “Peek”, offered by the folks over at UserTesting.com. They provide a recorded session of a random tester interacting with whatever web page you specify.

Since Peek is a free service, it only gives you a teaser of the real value that UserTesting can offer. That’s OK though, because this free service lets you perform a quick litmus test on your messaging and UI. The tester will spend five minutes answering general, but useful, questions about the page they are looking at. If anything confuses them, it will get called out. Additionally, they will provide feedback on their initial impressions of your messaging. They won’t read every word. Instead, like a real user is apt to do, they will scan the text and graphics, then share what they think the page is trying to convey.

Getting a peek inside the mind of a random user has its uses. Seeing as the only cost is time, it is probably worth doing every once in a while just to sanity check and make sure you are on the right track.


Want to host your site for pennies per month?

July 30th, 2014

Here you are, once again. You just got pulled into a new volunteer organization, and they don’t have a website. Since you’re a web guy, of course you can help solve the problem for them. They don’t have much of a budget, so they ask if there’s any way you could just cover it. Well, GoDaddy or Bluehost are cheap, right? Sure… if you pay for a couple of years at a time, you only have to pay a few dollars per month. There’s a better way though. You may have heard that Amazon added hosting capabilities to its S3 service a while back. It turns out that with a bit of fiddling and configuration, you can host a (static) website for pennies per month.

First things first, let’s talk about pros and cons. Cons first:

  • Only static sites can be hosted on S3. This means you can’t run PHP or Rails or anything server-side. In today’s API-prolific world, this isn’t that big of a deal, but it is something to keep in mind.
  • You can’t easily set up SSL. Amazon says it will address this, but as of this writing, there aren’t any generally good solutions.
  • You may have to do some fiddling to get your DNS set up correctly. We’ll outline the most straightforward approach, however if you get off the beaten path, be prepared for a bit of a challenge.

Now for the pros:

  • You can host a website for pennies per month. S3 is priced on two factors: the amount of storage you use and the amount of bandwidth you use. Since this is a website, the amount of storage will likely be pretty small (unless you are hosting a lot of video). Unless your site is crazy popular, the amount of bandwidth will be minuscule as well. In our experience, a four-page site that gets 5-50 visitors per day costs between $0.05 and $0.25 per month.
  • You can host a website for pennies per month. See above for description.
  • Read the above one more time.
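To see why the bill stays in the pennies range, here is a quick back-of-the-envelope estimate. The prices here are my assumptions based on 2014-era S3 rates (roughly $0.03 per GB-month of storage and $0.12 per GB of outbound transfer); they vary by region and change over time:

```ruby
# Rough S3 hosting cost model. Prices are assumptions, not quotes.
STORAGE_PRICE_PER_GB  = 0.03  # per GB-month of storage
TRANSFER_PRICE_PER_GB = 0.12  # per GB of outbound transfer

def monthly_cost(site_size_mb, visitors_per_day, avg_page_weight_kb)
  storage_gb  = site_size_mb / 1024.0
  transfer_gb = visitors_per_day * 30 * avg_page_weight_kb / (1024.0 * 1024.0)
  storage_gb * STORAGE_PRICE_PER_GB + transfer_gb * TRANSFER_PRICE_PER_GB
end

# A small four-page site: 10 MB of files, 25 visitors/day, ~200 KB per visit
printf("$%.2f per month\n", monthly_cost(10, 25, 200))
```

Even generous numbers land comfortably under a quarter a month, which matches what we have seen in practice.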

Now it is time to get down to brass tacks. How do you set up a website on Amazon S3?


Step 1. Create an account with Amazon Web Services.

The link to sign up is right there at the top of the page. This process is even more streamlined if you have an Amazon.com account. Once your account is created, go to the Amazon Web Services console and click your username in the upper right corner. On the drop-down list you should find a “Security Credentials” link. Navigate to the ‘Users’ screen using the link on the left and select your user name from the list. Select the ‘Security Credentials’ tab for your user name and click the ‘Manage Access Keys’ button.
User Access Key Management
This should open a dialog window where you can create a new access key.

Access Key Dialog 1

After clicking ‘Create Access Key’ you should be presented with another dialog where you can download your credentials.

Save Access Key

Download and save your credentials as you will not have access to the secret access key again after you close this dialog box. 


Step 2. Ensure site address and bucket names are available.

In order to map a domain name to an Amazon S3 bucket, the bucket name must be the same as the domain name so that Amazon S3 can properly resolve the host headers sent by web browsers. Additionally, Amazon S3 requires that bucket names be unique across all of AWS, so to associate the domain example.com with a website hosted on Amazon S3, you must be able to create a bucket named “example.com”. If another AWS user has already created a bucket named “example.com”, you won’t be able to associate the domain example.com with your website hosted on Amazon S3.


Step 3. Create a bucket for your site.

Create three buckets for your site, all in the same AWS region: one named with just the domain name (i.e. example.com), one with the www subdomain (i.e. www.example.com), and one with a log subdomain to store your site’s log files (i.e. log.example.com). Once your buckets are created, you can upload your files from the bucket management pages. Alternatively, for those who are comfortable using a command line application, there is a tool available to streamline your upload process.

If you’re going to upload straight through the S3 web console, skip down to step 6.


Step 4. Obtain s3cmd to quickly upload your files to the Amazon S3 service.

S3cmd is an excellent command line tool for accessing your Amazon S3 account on Linux/Mac. Once configured with your Amazon S3 credentials, it will enable you to create/destroy buckets, upload/download files, and perform rsync-style functions right from your command line.

S3cmd can be installed on a Mac very quickly if you have Homebrew installed through the command: brew install s3cmd

If you need to install s3cmd under Linux, they have instructions on how to add the s3tools repository to your distribution and install s3cmd here.

The first thing you need to do once s3cmd is installed is link it to your Amazon S3 account. Remember those credentials we downloaded earlier? We will need those now.

Run the command s3cmd --configure 

You will be prompted for your access key and your secret key. Open up the CSV file you downloaded with your credentials and copy your access key and secret key over when prompted.

Next it will ask you for an encryption password. This is not your AWS password; it is a passphrase of your choosing that s3cmd uses when encrypting files with GPG. After entering a password you will be prompted to set up GPG and/or HTTPS encryption. I recommend at least enabling HTTPS to protect your files while they’re in transit to S3.

Lastly you will be prompted to test your connection to Amazon’s services. Say yes, and it will verify that all the information you entered is correct. If s3cmd hits an error at this step, it will give you the option to discard your configuration changes and start over. Provided all went well, you should be able to run the command s3cmd ls and see the buckets we made before.

From this point s3cmd has three commands we’re really concerned with: put, get, and sync. Put and get perform unconditional transfers: all matching files are uploaded to or downloaded from S3. Sync, on the other hand, performs a conditional transfer: only files that are missing at the destination, or that exist there in a different version, are transferred. By default files are compared using an MD5 checksum and their file size.
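The conditional check that sync performs can be sketched in Ruby. This just illustrates the idea (not s3cmd’s actual implementation), and remote_index is a hypothetical stand-in for the bucket listing:

```ruby
require 'digest'

# Transfer a file only when the destination is missing it, or holds a
# version with a different size or MD5 checksum.
def needs_transfer?(local_path, remote_index)
  remote = remote_index[File.basename(local_path)]
  return true if remote.nil?                           # not at destination
  return true if remote[:size] != File.size(local_path)
  remote[:md5] != Digest::MD5.file(local_path).hexdigest
end
```

An unconditional put or get would skip all three checks and transfer everything that matches.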


Step 5. Upload your site’s content.

Now that we’ve got s3cmd set up and our buckets ready to go, we can begin uploading our content to S3. Navigate your console to the folder containing your index page, error page and static assets, then run this command, replacing ‘example.com’ with your primary bucket’s name: s3cmd sync -P ./ s3://example.com

This command will sync all of the files in your site folder with S3, and set them all to be publicly readable (that’s what the -P flag does).

We will only be putting files into the root domain’s bucket. The subdomain will be configured to redirect to the root domain later and the log bucket will just be storage for your log files.


Step 6. Configure buckets.

Log into your Amazon Web Services console and navigate to the S3 section. Right-click your root domain bucket’s name and select the Properties option. Click the Static Website Hosting drop-down and you will be presented with three options regarding website hosting.

Select “Enable Website Hosting” and ensure that the Index Document option has the same name as the HTML document you would like for your home page. Likewise, ensure that the Error Document reflects the name of the error HTML page you uploaded.

Next, go to the Logging menu and enable logging. With the drop-down list, select your log subdomain as your target bucket. Change the target prefix to “root/”; this will cause the logged data to be stored in a folder named root in the log bucket.


Next, right-click your www subdomain bucket and select Properties from the drop-down list. Under Static Website Hosting click “Redirect all requests to another host name”. In the prompt, enter your root domain address (i.e. example.com). Now any requests for the www subdomain will be redirected to your root domain address.


Step 7. Configure your domain information with AWS.

We will now configure Amazon Route 53 as your Domain Name System (DNS) provider. Go to the Amazon Route 53 console and create a hosted zone for your domain. You will be prompted for a domain name; use your root domain address. Once the new hosted zone is set up, view its details. You should be provided with some name server addresses under “Delegation Set”. Write these down; we will need them in a little while.

While your hosted zone is still selected, click the “Go to Record Sets” link and create a record set. Do not change its name, set its type to IPv4 address, and set Alias to yes. The alias target will be your root domain bucket; if you click the field you will be provided with a drop-down where you can select it.

Create another record set, this time for the www subdomain: enter www in the name field and change the type to CNAME. Leave Alias set to ‘no’, point the value at your www bucket’s S3 website endpoint (shown in that bucket’s Static Website Hosting properties), and save your new CNAME entry.

Now we will need the four name server addresses from the Delegation Set that we copied down earlier. Return to the registrar you used to secure your domain and replace the name servers in the registrar’s records with those four. Save your changes with your registrar. It can take 48 hours or more for the changes to take effect.


That’s it. Once all the DNS servers are updated (2-48 hours on most ISPs), you should have a shiny new site up and running. Grab a celebratory cookie, and enjoy a job well done.

The 3 main weaknesses of today’s shopping carts

July 7th, 2014

Over the last couple of years, we have had several clients come to us requesting some sort of ecommerce site. My response has consistently been to recommend one of the existing services like Shopify or Amazon Webstore. These guys have all been around for a while, and provide a decent service. Invariably, there would be a response that went something like this: “I’ve tried XYZ, but they paste their branding information all over the checkout page.” or “I can’t display all of my product options.” or “I can’t integrate it with this tool I use.” After some discussion, they would then pay us to build a custom site.

Now, I’m one of the first to admit that the ecommerce space is pretty busy; however, it is not a “solved” problem, nor is there a clear winner among the existing solutions. Given that we’ve had several people come to us with requests to spend thousands of dollars building a custom solution, there are definitely still underserved needs in this area.

After looking at the various solutions available, we have found three “deal killers” that shopping systems tend to suffer from.

1. Brand ownership. Right or wrong, when someone sets up an online store, they want their brand to be the only one present. PayPal has made adding “buy now” buttons super easy, but when you are checking out, it is clear that you are checking out on PayPal’s site, not the original store’s site. Large online stores don’t have this issue, so smaller stores don’t want to have it either. Small stores all want to give the impression that they are big stores. Having someone else’s brand all over your checkout process comes off as “cheap” and somehow less legitimate.

2. Monthly fees. Depending on the solution that you use for your online store, you may have significant monthly fees – and this is before you even sell anything. Add up the fees you pay for web hosting, a merchant account, a credit card processing gateway, and a shopping cart service, and you are likely paying well over $100 per month. For small-volume stores that are just starting out, this can add up over the course of a few months to something that exceeds the annual gross revenue of the store. Some store owners are unwilling or unable to have those recurring fees consistently eating into their revenue.

3. Ease of customization. There seem to be two extremes: you either get ease of use, or customization and flexibility. Services like Amazon Webstore, PayPal, or Shopify give ease of use, but have some pretty big limits on how you can customize your store’s shopping experience. On the other end of the scale you have systems like Magento and BigCommerce. These systems need highly skilled software programmers to get anything set up. You can customize them, but a lot of effort is involved. Very few systems live in the middle ground, where ideally easy things would be easy and hard things would be possible. You may need some knowledge of HTML and maybe even Javascript, but you shouldn’t need a degree in software engineering and 10 years of coding experience.

After searching for a solution to these three areas of weakness, we came to the conclusion that there was room in the market for yet another shopping cart system.

Cartshingle, our latest service, is designed to address each of the above weaknesses. It integrates into your new or existing website and provides you with a straightforward way to sell stuff online. Costs are transaction-fee based. We use stripe.com for credit card processing; their simple fee structure (2.9% + $0.30, no monthly fee, no merchant account required) allows you to accept all major credit cards. All you need is a bank account to deposit funds into. Cartshingle then charges a 2.1% fee, for a total transaction fee of 5% + $0.30. Finally, Cartshingle provides all the API hooks necessary to integrate order processing with a third-party system.
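Using the numbers above, the fee math on a single sale works out like this (a quick sketch of the arithmetic, not Cartshingle’s billing code):

```ruby
STRIPE_PERCENT      = 0.029  # Stripe's 2.9%
CARTSHINGLE_PERCENT = 0.021  # Cartshingle's 2.1%
FIXED_FEE           = 0.30   # Stripe's fixed per-transaction fee

# Net amount deposited to the seller after all transaction fees.
def seller_net(sale_amount)
  total_fee = sale_amount * (STRIPE_PERCENT + CARTSHINGLE_PERCENT) + FIXED_FEE
  (sale_amount - total_fee).round(2)
end

# On a $20.00 sale the total fee is 5% + $0.30 = $1.30
puts seller_net(20.00)  # → 18.7
```

No monthly fees means the store owner pays nothing at all in a month with no sales.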

We are running an invitation-only beta right now. This will allow us to scale our system up in a manageable way, and give each of our early customers some extra attention and assistance in getting set up. We are looking for a variety of businesses, and will issue invites as our capacity and capabilities allow. During this phase, there may be some stores that aren’t a great match. In these cases, we will let the store owners know and recommend some other options. The owners that do come into the beta program will have a hand in helping us shape new features and capabilities.

Getting started with Cartshingle is easy. Head over to cartshingle.com and request beta access. You will be asked a few questions about your expected volume, type of products, and current store if any. We will then review your beta request and send you an invite to sign up. Once you sign up for an account, you can create as many stores as you want. When you create a store, you then sign up for a Stripe account (or select an existing Stripe account) and then add products and shipping options. From that point, it is as simple as cutting and pasting HTML into your site.

Head on over and get started selling things on your website today!

Cucumber, what’s the point?

June 22nd, 2012

Last Tuesday, FedEx dropped off my latest Amazon impulse buy: The Cucumber Book: Behaviour-Driven Development for Testers and Developers, one of the recent books published by the Pragmatic Programmers. It covers a popular Ruby-based testing framework called Cucumber. Now Cucumber makes a few bold claims on its site. It claims that tests can be written in plain English and actually test real code. It also claims that with minimal training, even a non-QA person, like a product manager, could write these tests, which would double as specifications.

Now, needless to say, I was rather skeptical. So I spent some time on their site and saw how the magic works. Let me just say this: while I’m not completely sold on the concept for every situation, they really do have some impressive stuff there. With minimal effort, I was able to cover a website in basic positive functional testing. I was then able to point a relative newcomer at my project and have them contribute more tests as a way to familiarize themselves with it. From the first hour of joining the team, they were able to produce usable and useful tests, while also learning how the project is supposed to work — and finding new and interesting ways to make it do things it shouldn't.

Is cucumber right for every website I build? Probably not, but so far it is good at making me think about what I’m coding in a bigger picture sort of way. In other words, it makes me think about the code I’ll be writing in a different, more consumer oriented way. This helps to pull me out of my head, and prevents building write-only code that only makes sense as I’m building it, with all the context and thoughts that I have at that moment. Not only this, Cucumber makes it easy to whip out several test scenarios. More test scenarios tend to lead to better task definition, which leads to better code.

So for at least the next while, I am on the Cucumber bandwagon, spitting out tests left and right before putting finger to keyboard on a single line of code. Give it a shot and see if Behavior Driven Development makes sense for your projects too.

TextDrive now available in the Android Marketplace

July 6th, 2011

<Insert apologetic comment about not writing for a long time here>

I’d like to announce the first Android app produced and developed by CodeNoise. It’s called TextDrive. If you find yourself in situations where responding to SMS text messages is difficult or even impossible, this is the app for you. After you install it, you can select one of the built-in responses. Additionally, in the full version, you can add your own. Once you’ve done that, any time someone sends you an SMS message, TextDrive will automatically reply with the response. The app will also show a list of the messages that you have received so you can quickly glance at them and see if any are urgent. With a quick tap TextDrive will even read the message to you. This comes in handy if you happen to be driving and don’t want to take your eyes off the road. If you get a lot of texts, or want to unclutter the list, just long-tap any message and it will be removed from the list. Don’t worry, the message is still in your normal Message inbox.

Another great feature available in the full version is an auto-off timer. That way you can turn TextDrive on at the start of your commute, then have it automatically turn off after a specified amount of time. If the timer is enabled, you can even choose to include the time until you’re available in your auto-reply.

Go here to install TextDrive lite.

Go here to install TextDrive full version.

By the way, changes are coming to codenoise.com. Stay tuned!

The tech industry still fails to understand basic business.

September 4th, 2009

Jason Fried, founder of 37signals, recently posted a reality-check for the tech industry.

This pattern — “success” based on forecasted future success instead of current success — shows up all over the tech-business press.

He goes on to question the false measures of success that so many companies use. Things like page views or new customers only matter if there is a clear, deliberate way to gather a return on investment. He has a very good take on the phenomenon.

It got me thinking in all kinds of tangential areas as well. How many companies have made it their model to come up with a clever or novel solution, spend a chunk of capital figuring out how to identify or create the problem that it fills, then miss that last step where they actually profit from their efforts? Oh sure, they have some nebulous future bullet-point that will magically turn all of the wasted capital into shiny new profit. However, that seems to be more for the employees’ and investors’ benefit than actually guiding business decisions. The profit from this type of endeavor isn’t in the business itself, or even for all but the earliest shareholders. The profit is in creating the smoke and mirrors, spinning up a good story, selling it off, and moving on to the next venture. Back in the late 1800s and early 1900s, these types were called snake-oil salesmen. The practice has been legitimized and refined such that the first groups into any business — as long as an interesting enough story, pitch, and demo can be created — stand a reasonable chance of getting 2 to 10 times their initial investment regardless of the actual viability of the business.

Fortunately with the economic turmoil, some of these “opportunities” are likely to fall flat, and the practice can become at least a little bit less popular.

An interesting side-effect of the current turmoil is the likelihood that a larger percentage of new companies are going to be required to not just bullet-point the profit stage, but actually plan out to that stage and beyond. In fact, since a lot of venture funds and angels are sitting tight on their current projects, these new companies will have to plan much farther out than they otherwise would. This gives reinforcement to self-funded ventures: a novelty in the tech field, though by no means unheard of.

Zembly makes social apps simple

March 19th, 2009

Widgets have definitely become “the next new thing”. These small snippets of functionality can be plastered just about anywhere on the web: your homepage, Facebook profile, blog, etc. Widgets range from completely frivolous decorations to dead-useful mini-apps to games… the list goes on.

As widgets have gained in popularity, various companies have created tools to assist in their creation. Google has a tool that allows you to either create a widget from an existing template or build your own from scratch. The template-based widgets don’t even require any programming knowledge: just fill out the form and presto. Yahoo has a similar offering. But the most interesting so far has to be an offering from Sun called zembly.

Zembly provides an all-in-one widget creation solution. You can choose what kind of widget you want to build: a normal web widget, Facebook application, OpenSocial application, or even a Meebo application. They then provide a wizard to walk you through configuring your application, along with an interface to most of the major data services (Google Maps, Google Translate, Amazon retail search, and many others) as well as a way to add any other public service you may need.

You don’t have to worry about having a server to host the widget or application on. Zembly provides free hosting with any account. Not only that, but they do allow you to fetch the source code for your application should you choose to host it elsewhere.

One last interesting thing that zembly offers: anybody can see, and contribute to, anybody’s widget. Think of this as socially generated software. Anybody can collaborate on any widget or app, or use any app as a starting point for their own idea, and so forth. Zembly does provide a way to disable this and keep your app private if you so choose, but it appears that this feature may cost money in the future.

So there you go. Zembly provides the tools, the hosting, and the collaboration to ride the widget wave as far as it can go. They have the best tools I have seen, fantastic integration with Facebook, hosting for those who need it, and source code for those who don’t. Now go build your widget!


Should web developers also design?

March 5th, 2009

A couple of months ago, The Pragmatic Programmers announced a new book. Web Design for Developers is described as,

how to make your web-based application look professionally designed. We’ll help you learn how to pick the right colors and fonts, avoid costly interface and accessibility mistakes—your application will really come alive. We’ll also walk you through some common Photoshop and CSS techniques and work through a web site redesign, taking a new design from concept all the way to implementation.

My question is this. Should a developer be trusted with design? It is definitely a profitable skill to have. I believe that developers should at least have a basic idea of what goes into design. I also believe that designers should have a basic understanding of web development. However, if someone is at one end of the spectrum or the other, how do they acquire the necessary skills?

There are several books written for each audience, but very few that are targeted at both. To my knowledge, this is the first book attempting to make a developer more capable as a designer. It appears to approach design in much the way a “logically driven” coding brain works: it breaks down the fundamental components of design (layout, color theory, spacing, mockups, etc.) and lays each out with a logical process. In some ways it only scratches the surface; entire books are written on color theory or typography alone.

I know it’s a tough job market but…

February 25th, 2009

It seems like every day brings a new round of layoff announcements. Companies, large and small, are having to cut back, save cash, and stop growing. In this environment, any new hires need to be as effective as possible. It’s all about the bang for the buck. This is a time where generalists may have an advantage – wearing many hats makes you more cost effective.

Given this, I’m seeing several job postings that are asking for an awful lot. It is to be expected, and makes sense from the company’s point of view. There are, however, times where this “hire overloading” borders on the absurd. Take the following:

*Senior Linux/Unix Developer/Test Engineer*
Installs and configures clusters of Linux based application and database servers. Drafts and executes test plans of Linux related software on clusters. Experience with Linux application cluster design, administration and tuning (including san) required. Experience with virtualization technologies required.
Strong software development skills in multi-tiered and distributed environments using iterative development process, including 5+ years of advanced programming experience

* Application performance testing plan drafting and execution.
* Experience with usage and customization of open source application performance test tools.
* Multiple Programming Languages: C, C++, Perl, Python, Cold Fusion, JAVA
* 4 year technical degree or higher at an accredited institution.
* Linux Cluster and cluster storage design, configuration and tuning.
* Linux Kernel customization and compilation
* Databases: Mysql, Postgresql, Oracle
* Multiple Operating Systems: Linux, FreeBSD, Windows, etc.
* Experience with multiple virtualization technologies: Xen, VMWare, KVM,
* Excellent analytical and problem solving skills; with the ability to analyze business processes and create application models utilizing project-management standards
* Strong verbal and written communication skills and ability to work effectively both independently and as part of a cross functional team.
* Telecom and/or internet domain knowledge and experience with solid experience with eBusiness processes and/or back-end applications

So, here’s what I read into this request: this company is looking for a QA engineer who has also been a mid/high-level software engineer, with Linux administration experience, fairly advanced enterprise hosting experience, and project management experience. The hiring manager is asking for a chocolate cake that has just the right amount of fish sauce and onions.

This is at least two different career tracks. Any software engineer who also has professional experience setting up virtualization clusters and SANs is likely to be weak in one or both areas.

Hiring managers that cram so many diverse requirements into a single job posting really bug me. If you want a generalist, then ask for a generalist. Asking for such specific and wildly different requirements does nothing to increase your chances of finding a quality candidate. It just turns away those quality generalists who are capable of learning your specific system. If you need someone who can set up test environments using a virtualization system, then either train one of your test engineers – if muddling through is acceptable – or hire a sysadmin who has experience with virtualization. If you need someone who can write test harnesses, then hire someone who knows how to write test harnesses and exercise programming interfaces. If the harnesses require an obscure language, then asking for familiarity with that language makes sense. Remember, a good engineer can learn whatever elements are specific to your system. If you want someone who can analyze your current business processes and help to reorganize them, don’t expect that person to also be down in the code – those are two totally different disciplines. “Big picture” people, who can give quality feedback on business processes, tend to miss the details necessary for writing a good testing suite, and vice versa.

I feel bad for any technical recruiter who sends out job requests like this because, though they are just the messenger, they look foolish. The hiring manager looks just plain silly. This was a request that was likely written by a team of engineers: they all looked at what they do and tried to “fill in the gaps”. Bad practice! Anybody who actually fits that request is either dishonest (resume padding) or probably doesn’t have the depth of knowledge that the team is hoping for. The team, or hiring manager, should request a generalist who has experience in the main area of the job request. Then, during the interview process, explore the candidate’s willingness and ability to learn. Perhaps mention some of the systems that are required to do the job. Focus on concepts and approaches, not on specific technologies or processes.

I know it’s a tough job market, but come on. A little sanity please.


Prospering with Ruby vs. Haskell

March 26th, 2008

As previously mentioned, I am learning Haskell. In that endeavor, I am trying to cross the chasm from “tutorial following” to actual real projects (albeit very small ones). My latest project is a simple simulator for my prosper.com account. For those who don’t know, prosper.com allows people to make smallish loans to each other, repaid over a three-year term. Amounts range from $50 to $25,000, and interest rates are negotiated in an auction. As a lender, I want to know what return on investment I am likely to receive given various scenarios.

Now on to the show

My first stab at the simulator was done in ruby. This gives me a working model, and the ability to compare and contrast some of the design requirements that functional programming, and specifically haskell, will impose.


First I needed a function to generate random rates, simulating the auction style rate negotiation.
def get_new_rate
  return MIN_RATE + rand(RATE_WINDOW)
end

where MIN_RATE is defined as the minimum rate I am willing to lend at (8.0%), and RATE_WINDOW is defined as the spread between my minimum rate, and the highest rate I am interested in lending at (20.0%).
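The constants themselves only live in the full source linked at the bottom, but based on the prose they would look something like this (values taken from the text, not from the linked file):

```ruby
# Constants as described in the prose: lend between 8% and 20%, $50 at a time.
MIN_RATE = 8.0                 # lowest rate I am willing to lend at (percent)
RATE_WINDOW = 20.0 - MIN_RATE  # spread up to the highest rate I want (20.0%)
INITIAL_PRINCIPLE = 50.0       # dollars lent out in each individual loan
```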

Second off, I needed a function to generate a number of loans given a certain account balance.
def add_loans(loans, account_balance)
  new_loan_count = account_balance / INITIAL_PRINCIPLE
  new_loan_count.to_i.times do
    rate = get_new_rate
    loans << {:principle => INITIAL_PRINCIPLE, :rate => rate, :min_payment => calc_minimum_payment(INITIAL_PRINCIPLE, rate)}
    account_balance -= INITIAL_PRINCIPLE
  end
  account_balance
end

where INITIAL_PRINCIPLE is set to the amount that I am willing to lend ($50) in each loan. (Read this for an explanation of why I only lend $50.)
This function calculates how many loans I can generate from the given account balance, then creates each one. The new loans are appended to the collection of loans that was passed in as an argument, and the remaining account balance is returned. The calc_minimum_payment function simply determines what the minimum payment will be each month.
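calc_minimum_payment is only in the full source, but a sketch using the standard amortized-loan payment formula (the same formula the haskell version uses below, with the three-year term from the prose) would look like:

```ruby
PERIODS = 36  # three-year loans, paid monthly

# Standard amortized-loan payment for principle p at annual rate i (percent).
def calc_minimum_payment(p, i)
  r = i / 12.0 / 100.0  # monthly rate as a fraction
  (r * p * (1 + r)**PERIODS) / ((1 + r)**PERIODS - 1)
end
```

For a $50 loan at 12%, this works out to a minimum payment of about $1.66 per month.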

I then needed a function that would calculate the payment on the loan – particularly at the end of the loan when the payment may be less than the minimum payment.
def calc_payment(loan, months=1)
  if loan[:principle] < loan[:min_payment]
    payment = loan[:principle]
    loan[:principle] = 0
    return payment
  end
  interest = loan[:principle] * loan[:rate] / 100.0 / 12
  loan[:principle] -= loan[:min_payment] - interest
  return loan[:min_payment]
end

Given these functions, I can now create the simulation
account_balance = ARGV[0].to_f if ARGV[0]
account_balance ||= 0.0
monthly_deposit = ARGV[1].to_f if ARGV[1]
monthly_deposit ||= 100.0
number_of_years = ARGV[2].to_i if ARGV[2]
number_of_years ||= 1

First grab the scenario parameters from the cmdline. monthly_deposit is how much money to add to the account balance each month (in addition to the payments from the outstanding loans)

loans = []
for i in (1..number_of_years*12)
  old_loans = loans.size
  account_balance = add_loans loans, account_balance
  print "Month #{i}\n"
  print "Number of loans: #{loans.size} (#{loans.size - old_loans})\n"
  print "Average Rate: #{calc_average(loans.collect {|i| i[:rate]})}\n"
  income = loans.inject(0) {|bal, i| bal + calc_payment(i)}
  print "Account balance: #{account_balance}\n"
  print "Income: #{income}\n"
  account_balance += income + monthly_deposit
  print "Account value: #{loans.collect {|i| i[:principle]}.inject(account_balance) {|sum, i| sum + i}}\n"
  print "\n"
  loans.delete_if {|item| item[:principle] == 0}
end

Then run the simulation, and print out various statistics for each month of the simulation.
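The loop above also leans on a calc_average helper that only appears in the full source; a minimal guess at what it does:

```ruby
# Hypothetical stand-in for the helper used in the simulation loop: the
# arithmetic mean of the collected rates, guarding against the case where
# no loans exist yet.
def calc_average(values)
  return 0.0 if values.empty?
  values.inject(0.0) { |sum, v| sum + v } / values.size
end
```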

So that’s the simulator in ruby. It is not “perfectly optimized” for ruby, because I wanted to keep it somewhat close to the structure that I would use for haskell. See the link below for full source.


I tried to keep the architecture of the haskell version as close to the ruby approach as was possible. As a consequence, many haskell people may look at this and balk. My apologies in advance.
First I needed some constants and a struct to keep the relevant loan data in
minRate = 8.0
maxRate = 20.0
initialPrinciple = 50.0
periods = 36
data Loan = Loan {principle :: Double, rate :: Double, minPayment :: Double}

Then comes the function used to simulate the rate auctions
-- Generate a random rate within the "rate window"
getNewRate :: IO Double
getNewRate = randomRIO (minRate, maxRate)

Where I calculate some random number between minRate and maxRate. Note the type – IO Double. For all you non-haskellites, that means the function runs inside the IO monad. randomRIO is an IO action, and running in IO is what allows it to hand back a different number each time it is called. Useful that!

Then I have the loan creation functions

-- figure out what the minimum payment will be on a given loan
calcMinimumPayment :: Double -> Double -> Double
calcMinimumPayment p i = (r * p *(1+r)^periods) / ((1+r)^periods - 1)
                         where r = i / 12.0 / 100
-- create a new loan
newLoan :: Double -> IO Loan
newLoan p = do
          i <- getNewRate
          let m = calcMinimumPayment p i
          return (Loan p i m)

As in the ruby version, create a new loan, then populate the structure with the rate and minimum payment. Note the type for calcMinimumPayment doesn’t specify IO… that means this is a “pure function” and can be called anywhere. newLoan, however, is monadic – because it calls getNewRate. Since newLoan uses a monadic function, it has to return an IO value itself.

Here’s where things had to deviate from how I did them in ruby. Since haskell has immutable values, I couldn’t modify the loans. I had to create new loans, and collect them into a new structure. Here is where the new loan is created, given the state of the provided loan.
-- Given a loan, make a payment and create a new loan with the remaining principle
calcPayment :: Loan -> Loan
calcPayment l = if principle l > minPayment l
      then Loan (principle l - p) (rate l) (minPayment l)
      else Loan 0 (rate l) (principle l) -- mark this as the last payment
    where
      i = (principle l) * (rate l / 100 / 12)
      p = minPayment l - i

Again, here’s a “pure function”. It can be called anywhere, and any time.

Now, given an account balance, create as many loans as I can, and return them as a collection of Loans.
-- Take the current account balance, and make as many loans as possible from it
makeLoans :: Double -> IO [Loan]
makeLoans bal = if bal >= initialPrinciple
      then do
        l <- newLoan initialPrinciple
        ls <- makeLoans (bal - initialPrinciple)
        return ([l] ++ ls)
      else return []

Note the recursive call to continue building the list. I am finding that functional programming relies on recursion a lot more than OOP.
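For contrast, here is what that same list-building recursion looks like transliterated back into ruby – just a sketch (the real add_loans above uses an imperative loop and random rates instead):

```ruby
INITIAL_PRINCIPLE = 50.0  # repeated here so the snippet stands alone

# Recursive flavor of makeLoans: peel off one loan at a time until the
# remaining balance can no longer fund a full loan.
def make_loans_recursive(balance)
  return [] if balance < INITIAL_PRINCIPLE
  loan = {:principle => INITIAL_PRINCIPLE}
  [loan] + make_loans_recursive(balance - INITIAL_PRINCIPLE)
end
```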

This is another portion of code where I had to deviate. Here is where I actually walk the passed-in loans, and return a new array of updated loans along with the total payments collected. This is probably the most un-haskellish function of the group, and definitely needs some work.
-- make payments on the given loans, and return the updated loans, and resulting total payments
collectPayments :: [Loan] -> ([Loan], Double)
collectPayments loans = (filteredLoans, payments)
    where
      clearStaleLoans = filter (\x -> minPayment x > 0) -- remove any loans that have been fully paid back
      filteredLoans = clearStaleLoans (map calcPayment loans)
      payments = sum (map minPayment filteredLoans)

Then a function that runs through each iteration of the simulation – i.e. each month. This has to be its own function so that it can recursively call itself to continue the simulation.
-- run through a loan scenario, reinvesting returns for 'term' months. Print out various statistics on the account
run :: Double -> [Loan] -> Int -> Double -> IO Double
run startingBalance loans term monthlyDeposit = if term <= 0
        then return startingBalance
      else do
        l <- makeLoans startingBalance
        let (newLoans, newPayments) = collectPayments (loans ++ l)
        let newPrinciple = (initialPrinciple * fromIntegral (length l))
        let newBalance = (startingBalance - newPrinciple + newPayments)
        let loanValue = sum (map principle newLoans)
        let averageRate = (sum (map rate newLoans)) / fromIntegral (length newLoans)
        putStr $ unlines ["Term: " ++ show term, "Loan count: " ++ show (length newLoans), "Average Rate: " ++ show averageRate, "Loan Value: " ++ show loanValue, "New balance: " ++ show newBalance, "New Principle: " ++ show newPrinciple, "New Payments: " ++ show newPayments,"---------"]
        bal <- (run (newBalance + monthlyDeposit) newLoans (term-1) monthlyDeposit)
        return bal

Note how, although run is a monadic function, a majority of its processing is non-monadic. Those ‘let’ bindings are all pure, so in theory they could even be evaluated in parallel.

Finally, a “main” function to get the whole works rolling
main :: IO ()
main = do
        args <- getArgs
        let accountBalance = if(length args > 0) then read (args !! 0) :: Double else 300.0
        let monthlyDeposit = if(length args > 1) then read (args !! 1) :: Double else 100.0
        let term = if(length args > 2) then read (args !! 2) :: Int else 1
        putStr $ unlines ["Starting balance: " ++ show accountBalance, "Starting run", "----------"]
        endingBalance <- run accountBalance [] (term * 12) monthlyDeposit
        putStr $ unlines ["Ending Balance: " ++ show endingBalance]

Summing it up

Well, comparing and contrasting these two scripts is giving me a new appreciation for both languages. Each script could be refined to better match its underlying language, but the goal was to keep the code as close as possible to maximize comparability. If enough people ask, perhaps I’ll refine each script.
Hopefully some comparison of the two scripts will help another budding haskell developer wrap their head around this powerful, but oh so different language.

Here is the full ruby source code – prosper.rb (right click – “Save As”)
Here is the full haskell source code – prosper.hs (right click – “Save As”)