5 Technical Mistakes I Made Growing a SaaS to $40K MRR
I'm a fairly experienced developer at this point (20+ years of building), so I think I got quite a lot of things right while building Bannerbear. For example, I didn't lean on any shiny new technology; I just used the tech that I know best: Ruby on Rails.
That has enabled me to iterate quickly without having to think too hard about the tech. But it hasn't been entirely smooth sailing; there have been some technical hiccups along the way. Here they are, in no particular order.
Not Benchmarking Dependencies
Near the start of Bannerbear's journey I was using a gem from [insert famous company name here] as part of an integration I was building. I won't mention which, since I assume this is now a solved problem.
I finished the feature, deployed, and basked in my greatness. A few days later I started to experience memory-leak issues in production and managed to track them down to the above gem; many other users were experiencing the same thing. I had to rip out the gem and start over.
Lesson learned: at the very least, don't assume that every dependency is high quality and won't introduce new memory overhead into your app. Best practice is to benchmark performance before and after adding any new dependency.
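You can get a rough before/after picture with just the Ruby standard library. This is a minimal sketch: `do_work` is a hypothetical stand-in for whatever code path the new gem adds, and the numbers are indicative rather than a rigorous memory profile (a gem like memory_profiler gives you a much more detailed breakdown).

```ruby
require 'benchmark'

# Hypothetical stand-in for the code path a new gem adds;
# swap in the actual calls you are evaluating.
def do_work
  10_000.times.map { |i| "row-#{i}".upcase }
end

GC.start # start from a clean slate so allocation numbers are comparable
allocated_before = GC.stat(:total_allocated_objects)

elapsed = Benchmark.realtime { do_work }

allocated = GC.stat(:total_allocated_objects) - allocated_before
puts format('allocated %d objects in %.4fs', allocated, elapsed)
```

Run it once on a branch without the dependency and once with it; a dramatic jump in allocations is your early warning, long before production memory graphs start climbing.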
Under-Indexing the Database
This one is a pretty rookie mistake, but honestly it's easy to overlook. Most developers know you should add indexes on your database where appropriate. But as your users ask for more features and your app rapidly grows and evolves, you inevitably end up adding more tables. And that's where it's easy to forget an index here and there.
This isn't a problem until it is. One day you find your DB queries grinding to a halt and you start panicking. On the positive side, there is a certain gratification in discovering that the culprit is a missing index, running one simple command, and instantly fixing the problem.
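In Rails, that one simple command is usually a one-line migration. The table and column names here are illustrative, not Bannerbear's actual schema; the `algorithm: :concurrently` option (Postgres only) builds the index without locking the table for writes, which matters precisely in this scenario, when the table is already big and in production.

```ruby
class AddIndexToRequestsUserId < ActiveRecord::Migration[6.0]
  # CONCURRENTLY cannot run inside a transaction,
  # so the migration has to opt out of the default DDL transaction
  disable_ddl_transaction!

  def change
    add_index :requests, :user_id, algorithm: :concurrently
  end
end
```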
Expecting Full Table Counts to Scale Forever
I take full responsibility for my naivety here. I had never built an app where tables got to over a million rows, and I didn't realise that a count is a very expensive operation!
So as part of the app I would be doing things like:
Request.where(:user_id => some_id).all.size
Which seems innocuous, until the Request table gets very big. As with under-indexing, this eventually ground things to a halt. This one was harder to solve: I couldn't simply cache the number, as it needed to be a real-time count.
Slimming down the query to:
Request.select(:id, :user_id).where(:user_id => some_id).all.size
Didn't seem to help.
In the end I used a separate "count table" to keep track of counts. The Request table itself is never counted; I simply increment the number in the count table whenever a request is created. This is known as denormalization.
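Here's a minimal sketch of the idea in plain Ruby. In the real app the count table would be an ActiveRecord model and the increment would happen in the same transaction that creates the Request; the class and method names below are illustrative only.

```ruby
# Stand-in for a "request_counts" table keyed by user_id;
# a plain Hash plays the role of the database table here.
class RequestCounter
  def initialize
    @counts = Hash.new(0)
  end

  # Call this in the same transaction that inserts a Request row,
  # so the counter can never drift out of sync with the real data.
  def record_request(user_id)
    @counts[user_id] += 1
  end

  # O(1) lookup instead of a COUNT(*) over millions of rows
  def count_for(user_id)
    @counts[user_id]
  end
end

counter = RequestCounter.new
3.times { counter.record_request(42) }
puts counter.count_for(42) # => 3
```

The trade-off is classic denormalization: reads become trivially cheap, in exchange for a little extra bookkeeping on every write.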
Not Keeping an Eye on Postgres Limits
Here's another easy one to overlook, especially if you are using a tiered Postgres service like I am (Heroku Postgres).
At some point I simply ran out of space on my Postgres plan. In the early days I was on the Hobby Basic tier of Heroku Postgres, which has a limit of 10,000,000 rows. "I'll never hit that!" I thought, until one day I did.
You can imagine what happens when your SaaS app suddenly loses write access to the database. Everything breaks. Spectacularly.
This was probably the single most stressful event of my SaaS journey so far. It happened at about 9pm my local time on a Friday, and I had just come home from dinner a little bit tipsy. Everything was broken and I didn't know why - it took about 5 minutes to diagnose the issue as related to Postgres limits, but those 5 minutes felt like 5 hours.
I was then frantically reading the documentation on how to upgrade to a higher tier, which isn't a single command: you have to create a follower DB, wait for it to sync up, then switch over. I was doing all of this for the first time, under pressure, and after a few glasses of wine, so I was terrified I was going to screw things up and accidentally delete all my data.
I was also terrified that the follower sync was going to take forever since at that point the database was pretty big, over 10GB if memory serves me correctly.
Thankfully, the follower sync was really quick (less than a minute?) and switching over to the new Postgres (now at God-tier) instantly solved the problem. I am still slightly traumatized by this experience.
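These days a cheap safeguard is to check table sizes periodically, via a scheduled job or just as a console habit. Postgres keeps approximate live-row counts in its own statistics views, so the check is nearly free; the SQL below is standard Postgres, and the Ruby wrapper is a sketch that assumes an ActiveRecord connection (e.g. a Rails console).

```ruby
# The ten biggest tables by approximate live row count
rows = ActiveRecord::Base.connection.select_all(<<~SQL)
  SELECT relname, n_live_tup
  FROM pg_stat_user_tables
  ORDER BY n_live_tup DESC
  LIMIT 10
SQL
rows.each { |r| puts "#{r['relname']}: #{r['n_live_tup']} rows" }

# Total on-disk size of the current database
size = ActiveRecord::Base.connection.select_value(
  "SELECT pg_size_pretty(pg_database_size(current_database()))"
)
puts "database size: #{size}"
```

Compare the output against your plan's row and storage limits, and you get the "uh oh, getting close" warning weeks before the Friday-night version of it.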
Relying on a Dependency for a Core USP
Outsourcing some heavy lifting to a 3rd party makes a lot of sense in many scenarios. I mean, we are an API service so that's exactly what our customers are doing - outsourcing their dynamic image / banner generation needs to us, because it wouldn't make sense to build that tech in-house.
However, if something is your core USP or competency then you have to evaluate things a bit more.
If you're outsourcing a core competency, then you have to ask harder questions: does this tech meet all my requirements? If not, will it in the future? Can we modify it easily so that it does?
For Bannerbear, one of our core pieces of tech is the template editor. It's the drag and drop interface that designers use to set up templates, which the tech teams then grab the ID of and fire API requests at.
In the beginning we were using FabricJS for this, which is an excellent library for creating WYSIWYG editor interfaces using HTML5 Canvas.
The problem was, we had so many plans for enhancements to the editor. It was our core competency after all, it was one of the main ways we could differentiate from the growing number of copyca- I mean, competitors!
I spent two or three months writing messy monkey patches to try to twist Fabric into doing what we needed it to do, until I realised that a core competency / competitive USP is really something that should be built in-house. I then spent a couple of months migrating to an in-house solution, and I've been glad I did ever since: it's been so much easier to add new functionality to the editor.