Fred Wu's Blog

How I Built a Mostly Feature-Complete MVP in 3 Months Whilst Working Full-Time

9 August 2023, 17:28
A few weeks ago I soft launched an MVP - you are looking at it right now.

In this post I’ll talk about the features, the tech stack and the globally distributed infrastructure behind building this MVP, and of course, with a sprinkle of learnings too.

The “MVP”

Deciding what makes up an “MVP” is always interesting - I’ve heard it said that if you aren’t embarrassed by your first release, you’ve released it too late; and conversely, that if the product is embarrassing, you’re not going to make it.

For me, the idea has always been to build all the essential features as part of the MVP, plus one or two “hero” features that would differentiate the product from the competition, and then build out more premium features over time.

If you can’t already tell, Persumi is a content creation platform with some social networking features. It may sound bland, but what I believe makes it stand out is the desire to put the focus back onto the content, rather than the VC-fuelled, ever-increasing appetite for more ads and user-hostile features.

The Long Nights and Weekends

There are no magic beans for productivity - especially when you have a full-time job. Working on a side hustle means giving up almost all social and entertainment activities. It’s not for everyone, but I didn’t mind it too much. Being an introvert definitely helped - I was happy to watch, night by night, the MVP gradually take shape and become more and more real.

Looking back, I spent about three months building out most of the MVP, then another week or two on infrastructure, and another week or two on polish, all whilst holding down a full-time job.

It’s been a journey. I’m glad it “only” took me 3-4 months to get to this stage, as I had initially estimated a 6+ month MVP build.

The Features

With all that in mind, I’ve set out to build the essential features that make a blogging and social networking platform:

  • Short form content like a tweet
  • Long form content like a blog post or a book chapter
  • RSS feeds
  • Communities similar to forums and sub-reddits
  • Direct messaging between users
  • A voting (like/dislike) system
  • A bunch of CRUD glue pieces to make all these things work

The “Hero” Features

Beyond these seemingly unremarkable features, I also had in mind two key features that would differentiate the platform from the rest:

  • The “persona” concept, whereby each user can have multiple personas holding different content or topics of interest, e.g. a persona for professional stuff, a persona for gaming stuff, a persona for travel stuff, etc.
  • AI generated audio content for text (also known as Text-to-Speech)

These two “hero” features are what drove me to build Persumi in the first place. Together, they solve some very real pain points for me, namely:

  • Following specific topics of interest from people is difficult - with algorithms taking over people’s home feeds, there is simply way too much noise, thanks to VC-fuelled “user engagement” metrics
  • Content consumption on the go (e.g. during commutes or workouts) is becoming more and more prevalent, but the traditional platforms haven’t adapted to this new lifestyle other than by shoving short-form content down our throats

There’s also a third “hero” feature: the Aura system. Unlike the upvote/downvote or like/dislike buttons on many social platforms, which only serve the algorithm pushing more content to you, Persumi’s Aura system keeps track of users’ content quality over time, and punishes low-quality content and promotes high-quality content using visual cues - lower-quality content has much lower contrast, making it easy to ignore. In the age of social media, self-curating content is essential to keeping a platform healthy, engaging and usable.
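The post doesn’t go into Persumi’s actual scoring internals, but the visual-cue half of the idea can be sketched in a few lines of Elixir - mapping a hypothetical rolling quality score (0.0-1.0) to a Tailwind opacity class. The module name, thresholds and class names below are mine, not Persumi’s:

```elixir
defmodule Aura do
  @moduledoc """
  Illustrative sketch only: translate a rolling content-quality score
  into a Tailwind opacity class, so lower-quality content renders with
  lower contrast and is easier to skim past.
  """

  # Higher score -> full contrast; lower score -> progressively faded.
  def css_class(score) when score >= 0.75, do: "opacity-100"
  def css_class(score) when score >= 0.50, do: "opacity-80"
  def css_class(score) when score >= 0.25, do: "opacity-60"
  def css_class(_score), do: "opacity-40"
end
```

A template would then simply interpolate `Aura.css_class(post.aura_score)` into the post’s class list.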

The Non-MVP Features, a.k.a. The Future

There are many features that didn’t make the MVP cut; most of these are value-added features that will eventually make their way into paid subscriptions - if Persumi gains enough traction to attract users who don’t mind paying for premium features.

Prime examples of such paid features are ones that help users monetise their content, e.g. ad-revenue sharing and paid subscriptions (like Patreon).

I also have the ambition of building out Persumi’s features so it can eventually compete against the likes of LinkedIn and Tinder.

Wouldn’t it be better for the world to have a platform like Persumi that doesn’t focus on dark patterns and exploiting users? 😉

The Tech Stack

Over the past decade or so I’ve mainly worked with two tech stacks: Ruby and Elixir. So naturally, Persumi was going to be built using one of them.

After some consideration, I decided to go with Elixir; the main reasons were:

  • Elixir and Erlang/OTP support distributed systems out of the box
  • I’ve been writing more Elixir than Ruby lately, so I’m more productive in Elixir
  • I really wanted to try and use LiveView in production
  • I prefer Phoenix’s application architecture more than Rails’

Alongside Elixir, I decided early on a few other things to go with it:

  • Tailwind for CSS
  • Postgres for database, preferably a serverless option
  • A search engine
  • An easy to maintain infrastructure that doesn’t cost an arm and a leg

Elixir

I first discovered Elixir in 2014 while I was still actively involved in the Ruby and Rails communities, but it was two years later that I had the opportunity to really dive into it. I built a few open source libraries to help me learn Elixir and OTP:

  • Crawler - a high performance web scraper.
  • OPQ: One Pooled Queue - a simple, in-memory FIFO queue with back-pressure support, built for Crawler.
  • Simple Bayes - a Naive Bayes machine learning implementation. Hey, I was doing machine learning before it was mainstream! 😆
  • Stemmer - an English (Porter2) stemming implementation, built for Simple Bayes.

Despite a decent amount of experience building Phoenix web apps over the years, I unfortunately never had the opportunity to use LiveView…

I guess that’s finally changed now.

LiveView has been amazing: not only does it drastically reduce the amount of front-end code you have to write and make the entire web app feel super snappy, it also virtually eliminates front-end/back-end code and logic duplication. It’s an awesome piece of technology that improves both the user experience and the developer experience.
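To give a taste of why there’s so little front-end code to write, here’s a minimal, hypothetical LiveView (the `PersumiWeb` module name is my stand-in for a standard Phoenix app, not the real codebase). The state lives on the server, and the click event round-trips over the LiveView socket with no hand-written JavaScript:

```elixir
defmodule PersumiWeb.CounterLive do
  # Sketch only - assumes a conventional Phoenix 1.7-style app module.
  use PersumiWeb, :live_view

  def mount(_params, _session, socket) do
    {:ok, assign(socket, count: 0)}
  end

  # Invoked when the button below is clicked; the DOM patch is
  # computed server-side and pushed over the websocket.
  def handle_event("inc", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="inc">Clicked <%= @count %> times</button>
    """
  end
end
```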

Petal Pro

During my initial tech research, I came across Petal Pro, a boilerplate starter template built on top of Phoenix. It handles things like user authentication, which almost every web app needs but is somewhat tedious to build.

Petal Pro isn’t free, but it ended up saving me so much time. I also started contributing small bug fixes and features to it. If you are about to build something in Phoenix, check it out!

Tailwind CSS

As my career progressed, there were fewer and fewer opportunities for me to write front-end and CSS code. The last time I rebuilt my blog, in 2019, I used Bulma. Since then Tailwind has gained a lot more traction, so I wanted an excuse to finally give it a shot.

There is a debate over how many new things you should try when building an MVP - the more you have to learn, the slower your MVP progresses. That said, given CSS is reasonably straightforward, I figured Tailwind wouldn’t slow me down too much; if anything, its flexibility might eventually make up for any time lost in learning.

I’m happy to report that it is indeed true - with Tailwind, customising my components and elements became significantly easier. I can see why it became so popular. It’s not for everyone, but I like it.

Postgres

Choosing Postgres as the database was a no-brainer, given how popular and versatile it is. I did briefly consider NoSQL options like DynamoDB, but quickly wrote them off as I needed an RDBMS to get things off the ground quickly, and the DB is unlikely to be the bottleneck for a long time anyway.

In Elixir, the Ecto library works wonders for Postgres.
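As a sketch of what that looks like in practice - the schema and query below are illustrative stand-ins, not Persumi’s actual code:

```elixir
# Hypothetical schema for a post belonging to a persona.
defmodule Persumi.Content.Post do
  use Ecto.Schema

  schema "posts" do
    field :title, :string
    field :body, :string
    belongs_to :persona, Persumi.Accounts.Persona
    timestamps()
  end
end

defmodule Persumi.Content do
  import Ecto.Query

  # Composable query: the ten most recent posts for a given persona.
  def recent_posts(repo, persona_id) do
    from(p in Persumi.Content.Post,
      where: p.persona_id == ^persona_id,
      order_by: [desc: p.inserted_at],
      limit: 10
    )
    |> repo.all()
  end
end
```

Ecto’s query DSL compiles down to parameterised SQL, which is a big part of why it “works wonders” with Postgres.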

Later in the post I’ll touch on how I deploy and run Postgres in production.

Search Engine

For a search engine, my requirements were:

  • The ability to search across multiple fields of a schema
  • The ability to rank them
  • The ability to have typo tolerance, word stemming and other similar language features to make search more intuitive
  • The ability to search multiple languages, including CJK (Chinese/Japanese/Korean) characters
  • Simple to run
  • Cheap to run

Using Postgres’ full-text search would probably have been the simplest option, but it doesn’t offer all the functionality I need without a bunch of setup and manual SQL queries, so I didn’t pursue it.
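For context, multi-field weighted search in plain Postgres ends up as hand-rolled tsvector fragments along these lines (a sketch with assumed column names) - and even then there’s no typo tolerance, and CJK text would need a custom text search configuration:

```elixir
defmodule Persumi.NaiveSearch do
  @moduledoc """
  Sketch of what DIY Postgres full-text search looks like via Ecto
  fragments. Table and column names are assumptions for illustration.
  """
  import Ecto.Query

  def search(repo, term) do
    from(p in "posts",
      where:
        fragment(
          # Weight the title higher than the body, then match against
          # a websearch-style query (Postgres 11+).
          "(setweight(to_tsvector('english', ?), 'A') || setweight(to_tsvector('english', ?), 'B')) @@ websearch_to_tsquery('english', ?)",
          p.title,
          p.body,
          ^term
        ),
      select: %{id: p.id, title: p.title}
    )
    |> repo.all()
  end
end
```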

Elasticsearch, on the other hand, offers good search functionality, but it takes a bit of effort to set up and maintain, and can be costly to run.

After doing some more research, I found three options that would fit my needs: Meilisearch, Typesense and Algolia.

Both Meilisearch and Typesense are open source with commercial SaaS offerings, whilst Algolia is SaaS-only.

It’s been an interesting journey. I started with Typesense as I liked what I read, but quickly discovered that it doesn’t search Chinese characters properly.

I then turned to Meilisearch. I especially liked that they offered a generous free SaaS tier to get you off the ground. Spoiler: during my implementation they did a bait-and-switch and removed the free tier.

At the time the Elixir support for Meilisearch wasn’t up to date, so I ended up contributing to a community library to add the features I needed.

In a curious bit of timing, after Meilisearch removed their free tier, I discovered that even though they officially support searching Chinese characters, the implementation wasn’t perfect: I found edge cases where characters weren’t detected properly, making the search results unreliable.

So my last hope was Algolia. Despite being the most expensive option of the three, it does offer a free tier. It turns out their search results for Chinese characters were much better than Meilisearch’s. Luckily, re-implementing the search on Algolia didn’t take much effort - it was pretty much done in one night.
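Part of why the swap was quick: indexing into Algolia is a simple authenticated REST call per record. A sketch using the Req HTTP client - the app ID, API key and index name are placeholders, and a real integration would presumably batch updates in background jobs:

```elixir
defmodule Persumi.Search do
  @moduledoc """
  Sketch: push a record into an Algolia index via its REST API.
  Credentials and the index name are placeholders.
  """

  def index_post(app_id, api_key, post) do
    # PUT /1/indexes/{index}/{objectID} upserts a single record.
    Req.put!(
      "https://#{app_id}.algolia.net/1/indexes/posts/#{post.id}",
      headers: [
        {"x-algolia-application-id", app_id},
        {"x-algolia-api-key", api_key}
      ],
      json: %{title: post.title, body: post.body}
    )
  end
end
```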

Infrastructure

Early on during development I had already determined that I wanted to try Fly and Neon, for the web tier and the DB respectively.

I am in no way associated with either company; I was curious about Fly due to its ties to the Elixir community (Phoenix Framework’s author Chris McCord works there), and about Neon due to its serverless nature.

Globally Distributed Infra

With Fly, the infrastructure becomes globally distributed as soon as you provision servers in more than one region. As of the time of writing, Persumi is deployed to US West, Australia and the EU.
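A multi-region setup on Fly is mostly declarative. A sketch of the relevant fly.toml bits - the app name is illustrative, and the region codes are Fly’s (e.g. "sjc" for US West, "syd" for Sydney, "ams" for Amsterdam):

```toml
# Sketch of a multi-region Fly config; values are illustrative.
app = "persumi"
primary_region = "sjc"

[http_service]
  internal_port = 4000
  force_https = true
  auto_stop_machines = true   # idle secondary regions shut themselves down
  auto_start_machines = true  # and wake on incoming traffic
  min_machines_running = 1    # keep the primary-region machine always on
```

Scaling out is then a matter of `fly scale count` across the desired regions.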

Despite being simple to use, getting Fly to work initially took quite a bit of finessing, due to incomplete official documentation and general flakiness. Some of their services had issues during the course of my MVP development. Worse, they don’t report (or sometimes even acknowledge) issues unless they are region-wide outages. To this date, I believe their recently introduced blue/green deployment strategy is still buggy - I often have to fall back to their rolling deployment strategy instead. I provided deployment logs to Fly, but I think they’re too busy with other things…

Still, I’m sticking with them for now, thanks to the ease of use after the initial hurdles, and a globally distributed infrastructure that doesn’t ask for my kidney.

To augment Fly’s web servers, I also use Cloudflare’s CDN as well as R2 to serve asset files and audio files.

Funny tangent: initially I used Bunny for asset files and CDN, as I had misread Cloudflare’s terms and thought I couldn’t serve audio files from Cloudflare. Bunny worked okay, but for some reason their dashboard was painfully slow - not a good look for a CDN company. Like the search engine switch, it didn’t take me long to switch over to Cloudflare.

Serverless Postgres

There are a few options to run Postgres:

  1. Run on a standard server for maximum portability, but it requires more server maintenance overhead
  2. Run on AWS RDS/Aurora or a similar managed service, easy but can be costly
  3. Run on a serverless option such as Aurora Serverless or Neon

For my use case, options 2 and 3 were the better fits. As I mentioned earlier, I started the experiment with Neon.

Neon worked well initially, until I started deploying Fly instances in multiple regions. Because Neon is only available in one region (I chose US West) and I live in Australia, the round trips between Fly’s Australian instance and Neon’s US instance were a show stopper - especially when complex DB transactions were involved. Actions sometimes took seconds to complete. Yikes.

Despite Fly Postgres not being a managed service, I ended up trying it anyway due to its distributed nature. After incorporating Fly Postgres into the app, all DB operations immediately became more responsive. Paired with LiveView, it feels like running the application locally.

The current Persumi infra looks like:

  • 1 x Fly instance in US West, always on
  • 1 x Fly instance in Australia, auto-shutdown when there’s no traffic
  • 1 x Fly instance in Netherlands, auto-shutdown when there’s no traffic
  • 1 x Fly Postgres writer instance in US West, always on
  • 1 x Fly Postgres read replica instance in Australia, always on
  • 1 x Fly Postgres read replica instance in Netherlands, always on
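On the Elixir side, one common way to wire up this primary/replica split (a sketch of the general pattern, not Persumi’s actual code; the port-5433 "nearest replica" convention comes from Fly’s Postgres docs at the time, so treat it as an assumption) is a second, read-only Ecto repo:

```elixir
# Writes go through the primary; reads can go to the nearest replica.
defmodule Persumi.Repo do
  use Ecto.Repo, otp_app: :persumi, adapter: Ecto.Adapters.Postgres
end

defmodule Persumi.Repo.Replica do
  use Ecto.Repo,
    otp_app: :persumi,
    adapter: Ecto.Adapters.Postgres,
    read_only: true
end

# In config/runtime.exs (sketch):
#
#   config :persumi, Persumi.Repo,
#     url: System.get_env("DATABASE_URL")          # port 5432 -> primary
#
#   config :persumi, Persumi.Repo.Replica,
#     url:
#       System.get_env("DATABASE_URL")
#       |> String.replace(":5432", ":5433")        # port 5433 -> nearest replica
```

Libraries such as fly_postgres_elixir automate this pattern further, including replaying writes on the primary region.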

With this setup I’m quite happy with the cost/scalability balance - it costs ~$20/month to run, with the potential for easy vertical and horizontal scaling.

The Missteps

The search engine and CDN swaps mentioned earlier certainly took away some of my time, but they were nothing compared to a major misstep I encountered.

And that was: the choice of how machine learning is done.

Let me explain.

Machine Learning, and Inference

Even before I wrote the first line of code, I had already painted a picture in my head of the machine learning I needed: a TTS (text-to-speech) model I could run inference with locally on the instance.

The reason: I believed it was the more flexible approach, letting me gradually improve the inference - and therefore the end result - by training my own models over time.

Given I didn’t want to rent expensive GPU instances, I opted for fast TTS models that could do near real-time inference on CPUs. I used Coqui TTS.

The out-of-the-box audio wasn’t great, but I kept pressing on.

The show stopper came when it was time to deploy everything to Fly. Due to Fly’s architecture (they deploy small-ish Docker images, < 2GB each, onto their global network), I struggled to keep the Docker image small enough to deploy. With Coqui TTS, I needed Python and all its dependencies, which resulted in a Docker image of around 4-5GB.

With my tunnel vision, I then chose to offload the entire Python and Coqui TTS dependency tree onto Fly’s persistent volumes. I knew it wasn’t a great option, as it meant my infrastructure (other than the database) was no longer immutable.

Sometimes it’s necessary to take a step back, re-evaluate, and then press on in a different direction. Which thankfully I did.

The new direction is quite simple really: instead of performing inference locally, use an external service instead.

After a quick comparison of the offerings from AWS, Azure and GCP, I ended up using Google’s TTS. Honestly, I think I would’ve been happy with any of them - they all seem to have decent neural TTS.
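The synthesis call itself is a single REST request. A sketch using the Req client - the voice name is just an example, and in a real app the OAuth access token would come from a service account (e.g. via the Goth library):

```elixir
defmodule Persumi.TTS do
  @moduledoc """
  Sketch: Google Cloud Text-to-Speech over REST. Voice choice and
  token plumbing are illustrative assumptions.
  """

  @endpoint "https://texttospeech.googleapis.com/v1/text:synthesize"

  # Returns raw MP3 bytes for the given text.
  def synthesize(text, access_token) do
    %{body: %{"audioContent" => base64_audio}} =
      Req.post!(@endpoint,
        auth: {:bearer, access_token},
        json: %{
          input: %{text: text},
          voice: %{languageCode: "en-AU", name: "en-AU-Neural2-B"},
          audioConfig: %{audioEncoding: "MP3"}
        }
      )

    Base.decode64!(base64_audio)
  end
end
```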

In hindsight, these giant corporations have much more resources and expertise to train better models than I ever could on my own.

The end result:

  • The TTS sounds significantly better than before
  • It’s just as cheap to run (Google offers a certain amount of free TTS API calls per month)
  • It no longer needs complex Python and FFmpeg calls to make local TTS work
  • The Fly infrastructure is simple and immutable again

In hindsight, I should never have entertained the idea of running ML inference locally on CPUs, no matter how simple and efficient a model might be.

That said, with TTS it wasn’t as simple as calling the APIs and getting perfect audio back. Some pre- and post-processing was needed, but that’s a topic for another time.

More Machine Learning

The cherry on top: now that Google’s APIs were integrated into the app, I ended up also using Google’s PaLM 2 for text summarisation (initially also done locally), as well as for a ChatGPT-like AI prompt service to power Persumi’s AI writing assistance feature.

The Closing

If you’ve read this far, thank you! I hope you enjoyed reading (or listening to) this post. Please look around and kick the tyres - I would love your feedback on how to improve Persumi.

Sign up for an account if you haven’t already, and leave a comment if you have any questions. Until next time!


Tips for Writing a Good CV / Résumé

20 May 2020, 16:59
Due to COVID-19, not many companies are hiring at the moment. The company I work for therefore is in a very fortunate position to still be thinking about growth and hiring.

As a hiring manager for almost a decade now, I’ve personally reviewed thousands of job applications and CVs, and many hiring managers would probably agree: the vast majority of CVs are terrible. Let’s change that!

During COVID-19, when more and more people are either losing their jobs or having their hours cut, we are seeing an increased number of applicants to our job ads. On average I spend about 30 seconds per applicant due to my busy schedule - most hiring managers are busy people - so it is crucial for candidates to realise the importance of a CV that is clear, easy to read and, most importantly, sells yourself. And if you have a cover letter, which I highly encourage, congratulations - you just bought yourself another 30 seconds. ;)

I’m writing this post mostly from my own perspective - as a hiring manager at a tech company in a western culture (we’re based in Australia). Understandably, different cultural backgrounds and regions have their own conventions, but certainly in Australia and many similar western cultures, there are things you do and don’t do on a CV, and things that may help your CV stand out. Let’s talk about them.

At the end of this post I will also share a copy of my own CV to help illustrate my points.

30 Seconds? Surely It’s Unfair to the Candidates

Yes, I agree - to think that you are only given 30 seconds for your carefully crafted CV and cover letter is soul-crushing. But it is, unfortunately, the reality. I work for a company where I can still do the first round of vetting myself; many large corporations use algorithms and/or HR staff to reject applications based on keywords and other signals.

Knowing the reality and the constraints, there are a few things I’d like to address in the hope of improving your CV and your chance of scoring an interview, and in turn, helping myself and other hiring managers out there to have a better candidate CV screening experience.

Have a Pronounceable Name or Alias

This one will surely raise some eyebrows - you might think that your name is your identity and you should not change it for anyone. True; however, the reality is that a hard-to-pronounce name discourages your profile from being shared and spoken about. Why not add a pronounceable alias if it means an increased chance of getting an interview?

For clarity, I personally would never reject a candidate based on their name (or their cultural background for that matter), but I know some hiring managers might, and for some of them, they are NOT doing it on purpose. However, I have on several occasions had to ask a candidate how to correctly pronounce their name.

A Short Blurb on Who You Are

As a hiring manager, I care about who you are as a person - if you can summarise who you are as a professional in a sentence or two, it will help me determine whether you might be a good fit or not.

As an example, here’s a blurb about me:

A passionate and hands-on software executive with two decades of experience and an entrepreneurial mindset.

A long time open source developer who has created and contributed to a few dozen projects, including Ruby on Rails.

In two sentences, I explained my industry experience as well as my open source contributions - two things that help define who I am as a working professional. It also invites more questions from hiring managers: what kind of things have I done as an entrepreneur; what other open source projects have I contributed to?

Work Rights

Many companies have restrictions or policies around who they can hire based on their residency and visa status. If you are not a resident or are on a particular visa, make it clear in your job application so you don’t end up wasting time for the employer and for yourself.

List Keywords, But Don’t Overdo It

In the tech space it is important to have keywords visible to highlight your skills. If you are a software developer, your tech stack should be clearly stated in your CV. As a hiring manager, if I am hiring a PHP developer, I expect to see PHP mentioned in your CV. There are of course exceptions - for example, when we were hiring Elixir developers I did not expect to see Elixir as a keyword, simply due to the supply constraint.

It is a balancing act, however - I’ve seen CVs where candidates list 20-50 keywords. I’m sorry, but unless you are extremely gifted, you cannot possibly be good at all of those things. Do not put a keyword on your CV simply because you’ve read an article on the subject.

Oh, and unless you’re going for a data entry role, I honestly don’t care about your Excel skills…

Do Not Overstate Your Capability

Similarly, try to avoid overselling your capability. I once interviewed a candidate who claimed to be an “expert” on Ruby. We were actually hiring for a non-Ruby position, but given the candidate’s CV, I questioned him on some advanced Ruby subjects during our interview and he struggled all the way through and was sweating bullets. Suffice to say that he did not get the job.

Be confident, but also be honest and be humble. Lying on your CV to get an interview is a waste of everyone’s time.

Keep Things Short

As I mentioned at the beginning, I spend on average 30 seconds on each CV. Keep things short and easy to read! I really don’t care how awesome you were in your last dozen projects - that will get covered during interviews.

On a CV I expect short, concise blurbs on what you did in each role. Also take recency into account - if you’ve been working in the industry for a decade or two, what you did 20 years ago really doesn’t matter as much, so save yourself some time and cut things short.

For example, here’s the blurb for my current role:

Leading a department of 25+ engineers building great child care and education software. As part of the leadership team, reporting to the CEO, helping build the company into a market leader.

And here are the blurbs for my older roles:

 

Yes, the blurbs for my older roles are left empty intentionally.

Now, again there are exceptions. If something happened a while ago but is interesting and relevant, do tell! For example, here’s the blurb for my oldest “role”:

Built my first ever website using Microsoft FrontPage Express, on a Pentium 166Mhz computer, uploaded via a 33.6kbps modem.

Explain Over-Qualified Titles

There have been a few times when a “CTO” or even a “CEO” applied for a developer role. In most cases it wasn’t about over-qualification, but about what the candidate wanted to achieve professionally. So, either in the CV or the cover letter, explain what you are looking for in your next role; otherwise you run the risk of being dismissed as over-qualified.

Spare the Personal Details That Are Too Personal

This is predominantly a cultural thing, as I’ve seen it mostly from candidates of certain cultural backgrounds. I really don’t care about your age, gender, marital status or favourite sport. These things do not define who you are as a professional - we might talk about your favourite sport and food during the interview, but they are irrelevant on your CV.

Space Things Out

Look up the CRAP principles (contrast, repetition, alignment, proximity) - make sure your CV has enough white space and contrast, and uses readable fonts! Scrolling through walls of text is no fun and a sure way to get your CV dismissed.

2-4 Pages

This is not scientific, but I personally prefer CVs of 2-4 pages. Use the length as a constraint to cut things down. On several occasions I’ve run into CVs of 10+ pages. I guarantee you: unless a hiring manager is extremely bored, they do not have time to read your War and Peace.

PDF Over Word

When possible, submit your CV as a PDF instead of a Word document. (Sometimes, if you go through a recruiter, you’ll be asked for a Word version so they can add their branding - read: fuck it up.) A PDF ensures the correct formatting and layout always reach the hiring managers.

Cover Letter

Always attach a cover letter when possible, but keep it short too. Given the number of CVs a hiring manager has to go through, a well-crafted cover letter is another way to grab their attention and increase your chance of getting an interview.

Don’t repeat the same information in the cover letter, though. Your CV is about the facts of your experience; your cover letter should be about why the company should hire you. Focus on the value you can bring to the table.

Find A Referral

When possible, find someone who can refer you. A referral gets preferential treatment during the CV screening stage and does not suffer from the same 30-second fate.

Pleasing Design

This one is a “nice-to-have”: if your CV is really well designed, you would earn another 30 seconds of my attention. ;)

~

These are the main points - hopefully they’re helpful. To help illustrate, here is a copy of my own CV, with contact details removed.


If you enjoyed this article, check out my other tips articles:


Coding and Learning Should Never Stop, Open Sourcing is Caring

27 August 2017, 19:21
I’ve had a productive coding weekend, so I decided to share my experience. Now, many developers choose to treat their career as a series of 9-5 jobs, but if you’re reading this, I assume you’re like the rest of us who love continuous learning and self-improvement.

Preface

About a year ago I started learning Elixir. As part of the learning experience, I wrote two machine-learning-related libraries: Stemmer and Simple Bayes. It was a great, really enjoyable experience, and I learnt a lot about word stemming, naive Bayes classification and, of course, functional programming.

These topics have been interesting to learn about, but unless one uses them daily for a while, key concepts are unlikely to move from short-term memory to long-term memory. Given my day job is not about writing Elixir (yet), I needed to find other ways to keep my skill level up and to continue exploring new things.

So about a month ago, I picked up a project I had started a year earlier but gave up on shortly after: Crawler. At the time, GenStage had just been announced and I was keen to incorporate it into my project, as I thought it’d be a great fit. But for various reasons - mostly not having a firm grasp of GenStage’s concepts and implementation, as well as taking on a CTO role at a startup - I couldn’t find enough time and patience to make it work, so I had to let it go.

Until now.

Crawler, on Steroids

I realised that the way I had been trying to incorporate GenStage into my project was never going to work. Not because of GenStage itself, but because of how I approached learning it. At the time I was so eager to use GenStage, and coming off the back of a good streak of releasing the aforementioned machine learning libraries, I thought I could take shortcuts and things would all work out perfectly.

No.

So I licked my wounds, learnt from my mistakes and changed tactics. This time I scoped out and encapsulated my learnings (just as one would when designing a software system), and eventually came up with another library - OPQ: One Pooled Queue.

OPQ: One Pooled Queue

I knew Crawler could technically work without any queueing system, or GenStage. So I kept building Crawler until I felt productive writing Elixir again, and had touched enough areas and concepts to know exactly what I needed from GenStage.

Luckily for me, GenStage has matured over the past year and, more importantly, now has better documentation with more code examples. On closer investigation of those examples, I found that the GenEvent and RateLimiter ones were almost exactly what I needed. Reading and understanding them was an epiphany - all of a sudden, I “got it”.

If you take a look at the source code of OPQ you’ll notice that the heavy lifting logic was mostly inspired (or even copy-pasted) from those examples.
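To give a flavour of what OPQ does without reproducing its GenStage internals, here’s a toy version of the same idea (my own sketch, not OPQ’s code): a pooled FIFO queue where work only starts while a worker slot is free, so queued items naturally back up - the back-pressure part.

```elixir
defmodule MiniOPQ do
  @moduledoc """
  Toy sketch of a pooled FIFO queue with back-pressure, using only a
  GenServer and Erlang's :queue. The real OPQ is built on GenStage.
  """
  use GenServer

  def start_link(worker_fun, pool_size \\ 2) do
    GenServer.start_link(__MODULE__, {worker_fun, pool_size})
  end

  def enqueue(pid, item), do: GenServer.cast(pid, {:enqueue, item})

  @impl true
  def init({worker_fun, pool_size}) do
    {:ok, %{fun: worker_fun, free: pool_size, queue: :queue.new()}}
  end

  @impl true
  def handle_cast({:enqueue, item}, state) do
    {:noreply, dispatch(%{state | queue: :queue.in(item, state.queue)})}
  end

  @impl true
  def handle_info(:worker_done, state) do
    {:noreply, dispatch(%{state | free: state.free + 1})}
  end

  # Only start work while a pooled worker slot is free; anything else
  # waits in the queue.
  defp dispatch(%{free: free, queue: queue} = state) when free > 0 do
    case :queue.out(queue) do
      {{:value, item}, rest} ->
        parent = self()
        fun = state.fun

        spawn(fn ->
          fun.(item)
          send(parent, :worker_done)
        end)

        dispatch(%{state | free: free - 1, queue: rest})

      {:empty, _} ->
        state
    end
  end

  defp dispatch(state), do: state
end
```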

Open Sourcing is Contributing and Caring

Up until this point, as far as writing open source Elixir code goes, it had mostly been me writing my own code for my own projects. But open sourcing is much more than writing one’s own code and publishing it on GitHub.

If you’ve followed my work you’ll know that I’m a big fan of contributing to other projects, some of which are well-known ones like Rails and Slim.

It just so happens that over the weekend I ran into situations where I needed to contribute to other Elixir projects - and, spoiler alert, one of them is Elixir itself.

ElixirRetry

One of the features I set out to add to Crawler was allowing failed crawls to retry before giving up. Naturally, the first thing I did was try to find an existing package to support the retry functionality.

After some googling and digging into source code, I found and settled on ElixirRetry - a neat, cleverly built library.

I soon found out that, as of its latest release, it offered both retry/2 and retry/3, the latter supporting an extra argument specifying which exceptions to allow as part of the retry flow. It was a great addition, but it doesn’t affect me, as Crawler doesn’t need it.

As a developer who cares deeply about code quality and clarity, I immediately thought about how the retry/2 and retry/3 interfaces could be improved - by simply combining them and making retry/3’s extra argument an option.

So I did, and opened a pull request here: https://github.com/safwank/ElixirRetry/pull/12 You’ll notice I also made some improvements to the test suite while I was at it. ;)
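To illustrate the shape of that interface change - a plain-function sketch only, since the real ElixirRetry is macro-based, and the option names below are mine, not the library’s:

```elixir
defmodule RetrySketch do
  @moduledoc """
  Sketch: one retry/2 where the exception filter is an option,
  instead of a separate retry/3. Illustrative only.
  """

  def retry(fun, opts \\ []) do
    tries = Keyword.get(opts, :tries, 3)
    rescue_only = Keyword.get(opts, :rescue_only, [RuntimeError])
    attempt(fun, tries, rescue_only)
  end

  defp attempt(fun, tries_left, rescue_only) do
    fun.()
  rescue
    error ->
      # Retry only whitelisted exceptions, and only while tries remain.
      if tries_left > 1 and error.__struct__ in rescue_only do
        attempt(fun, tries_left - 1, rescue_only)
      else
        reraise error, __STACKTRACE__
      end
  end
end
```

Callers then write `RetrySketch.retry(fn -> flaky() end, tries: 5, rescue_only: [RuntimeError])` - one entry point, options for the rest.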

Elixir Typespec

One issue I ran into while building Crawler was with static analysis via Dialyzer - some code that ran correctly and passed its tests still failed type checking.

As someone less experienced in Elixir, I first opened an issue, phrasing it as a question to check whether it was a genuine problem. I then jumped on the Elixir Slack group and asked about it there.

Fortunately, Ben Wilson immediately came to my aid, verifying the problem and validating my suspicion that it was an issue in Elixir’s typespec documentation.

And so, a pull request was created and approved shortly after.

Sharing is Caring

The bulk of Crawler, as well as the entirety of OPQ, was built in the past month or so. I hope some people will benefit from having these libraries around. And I hope people enjoyed me sharing my experience - and are perhaps inspired to start sharing more too.

I will leave you all with a Chinese saying: 滴水石穿. The literal translation is “dripping water penetrates the stone”, and what it means is “constant perseverance yields success”.


History Text Analysis Over Spreadsheets - A Poker Player and Developer's Road to Agile Project Management

10 March 2015 05:50
Ever since I started transitioning into a team leadership role over three years ago, I had been trying to find ways to eliminate waste caused by repetitive work and to keep myself at the forefront of pushing the technical boundaries.

Four months ago I started my current role where my official job title is Delivery Lead. People don’t often know what a delivery lead is, but in my mind it is a role to ensure the success of the project delivery by identifying and closing the gaps in the team and in the organisation. And in order to do that, one of our responsibilities is to measure, understand and improve our team’s agile process.

It is very tempting to rely on the wonderful and powerful Excel formulas to help record and analyse data points and generate metrics such as cycle time. However, punching things into a spreadsheet is tedious, error-prone and time-consuming, and it violates the DRY principle.

The spreadsheet I used to use for tracking cards.

Introducing Amaze Hands

As someone who strives to keep writing code even in a non-technical role, I started building a tool called Amaze Hands to help reduce the amount of waste I accumulate as a delivery lead.

Amaze Hands’ simple Web UI.

Analyse Cards Like Poker Hands

I used to play a bit of online poker and one thing you do in online poker is to look at your hand history to understand the game and your opponents.

If you think about it, agile boards are just like poker games - there is history to what has happened in the past, and in order to optimise for future gains, we need to understand what went wrong and what to improve on a case-by-case basis.

One of the teams I am a part of uses LeanKit. Whilst it is a good tool, its reporting functionalities are very limited and its XML export function is completely broken. As a result, I started building Amaze Hands to parse the copy-pasted card history from LeanKit, and to eventually generate the metrics I care about.

The LeanKit strategy, which consists of a parser and a transformer, is able to parse the copy-pasted text from a card.

And below is the high level architecture of Amaze Hands:

    +---------------------+
    |        Text         | <- Raw text input.
    +----------+----------+
               |
+--------------v--------------+
|         Strategies          |
+-----------------------------+
|   +---------------------+   |
|   |       Parser        |   | <- Parses text into an AST.
|   +----------+----------+   |
|              |              |
|   +----------v----------+   |
|   |     Transformer     |   | <- Transforms the AST into a common AST.
|   +---------------------+   |
+--------------+--------------+
               |
    +----------v----------+
    |       Builder       | <- Builds the dataset from the common AST.
    +----------+----------+
               |
    +----------v----------+
    |       Reducer       | <- Filters the dataset.
    +----------+----------+
               |
    +----------v----------+
    |      Analyser       | <- Analyses the dataset for metrics.
    +----------+----------+
               |
    +----------v----------+
    |      Producer       | <- Produces metrics.
    +----------+----------+
               |
    +----------v----------+
    |      Presenter      | <- Presents metrics.
    +---------------------+
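To make the diagram concrete, here is a minimal, framework-free Ruby sketch of the first two stages - note that the class name, the card history format and the AST shape are all invented for illustration, and are not Amaze Hands' actual API:

```ruby
# Illustrative sketch of the strategy pipeline described above.
# None of these names or formats are from the actual Amaze Hands codebase.
class LeanKitStrategy
  # Parses raw card history text into a primitive AST:
  # one { action:, date: } hash per non-empty line.
  def parse(text)
    text.lines.map(&:strip).reject(&:empty?).map do |line|
      action, date = line.split(" on ")
      { action: action, date: date }
    end
  end

  # Transforms the primitive AST into a common AST shared by all strategies.
  def transform(ast)
    ast.map { |node| { event: node[:action].downcase, at: node[:date] } }
  end
end

# The builder would then assemble a dataset from the common AST.
def build_dataset(text, strategy)
  strategy.transform(strategy.parse(text))
end

history = "Card created on 01/03/2015\nMoved to Done on 10/03/2015"
dataset = build_dataset(history, LeanKitStrategy.new)
# dataset.first => { event: "card created", at: "01/03/2015" }
```

The value of the layering is that a different input source (say, a physical wall) could plug in its own parser and transformer while reusing everything downstream of the common AST.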

Incremental Analysis - Zero In On Metrics That Matter

No two projects are equal, and no two project teams are equal - the goal of Amaze Hands is to incrementally add intelligence to our agile process that matters to a particular project and its delivery team.

By incrementally adding and/or filtering data points for analysis, we will be able to zero in on the problematic areas of our agile process. The following is a list of potential areas we could perform analysis on:

  • cycle time
  • wait time
  • blocked time
  • knocked-back time
  • context switch (between different streams of work)
  • other factors such as meetings, attrition, etc
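As a concrete illustration of the first of these areas, here is a small framework-free Ruby sketch (the card structure is invented, not Amaze Hands' real data model) of computing mean and median cycle time:

```ruby
require 'date'

# Hypothetical card records: when work started and when it finished.
cards = [
  { started: Date.new(2015, 3, 1), finished: Date.new(2015, 3, 5) },
  { started: Date.new(2015, 3, 2), finished: Date.new(2015, 3, 10) },
  { started: Date.new(2015, 3, 3), finished: Date.new(2015, 3, 6) },
]

# Cycle time in days for each card.
cycle_times = cards.map { |c| (c[:finished] - c[:started]).to_i }

mean   = cycle_times.sum.to_f / cycle_times.size
sorted = cycle_times.sort
mid    = sorted.size / 2
median = sorted.size.odd? ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0

# cycle_times => [4, 8, 3]; mean => 5.0; median => 4
```

Wait time and blocked time follow the same shape - only the pair of timestamps being subtracted changes.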

As of the time of writing, Amaze Hands supports the following common metrics:

  • cycle time (mean and median)
  • cycle time rolling average (mean and median)
  • wait time (mean and median)
  • wait time rolling average (mean and median)
  • standard deviation rolling average
  • cycle time scatter

It’s Just the Beginning

Amaze Hands started off as an REA Hackday project. On the technical level (hey, I still see myself as a developer!), the tool was built in a way that isn't over-engineered (a.k.a. slow to get out the door and validate its usefulness), but at the same time it has multiple layers, as shown in the architecture diagram above, so I can refactor and optimise each layer independently when necessary.

It is still at an early stage, but I thought I'd share what I have right now to gather some feedback and perhaps inspire fellow project leaders to look into optimising their own workflows.

When I started Amaze Hands I was only leading one project team that uses LeanKit, but since last week I started leading another team that uses a physical wall - I can’t wait to adapt Amaze Hands to support the new input stream.

So, do you have any interesting tools or techniques to help you lead projects? If so, I would love to hear about them!


On Hiring: Trial Week - Yay or Nay?

29 October 2013 20:09
Today a blog post titled “Trial Week: Our Hiring Secret” made it to the Hacker News homepage. I naively tweeted my dislike and now I feel obligated to share my thoughts in a more meaningful and constructive way.

First of all, congratulations to the Weebly team, as this trial week strategy is clearly working very well for them.

I, on the other hand, am against using a trial week for vetting candidates, and I am going to share my thoughts.

Let this serve as a reminder to the rest of us: every organisation and team is different, so think carefully before committing to a given strategy.

One Week is a Major Commitment for the Candidate

In Australia, a full-time employee typically gets four weeks of annual leave, one or two weeks of which are used up during the Christmas / New Year down time. That leaves two to three weeks, so we are looking at asking candidates to spend 33-50% of their remaining vacation time on a trial week for one company - a terrible ROI (Return On Investment) from the candidate's perspective, if you ask me.

Candidates who are currently employed and have multiple offers from other organisations are more likely to skip the trial week - from experience, this is often the higher-quality candidate pool.

Side Effects

  • Increases the likelihood of burnout due to the reduced vacation time
  • Shrinks the candidate pool
  • Misses top talents who are unable to make the one-week commitment
  • As a result, the overall quality of the candidate pool drops
  • Paints an image of “not caring (enough) about the employee's well-being”

Of course, since the trial week is paid for, the candidate could always take unpaid leave from their current employer.

Side Effect

  • Raises alarm bells at current workplace since one week of unpaid leave is significant

One Week is a Major Commitment for the Team

Given the trial only lasts a week, we'd better make it count! That means one or more current developers need to be assigned to take care of the trial developer - pairing, walking through existing systems, etc. This is assuming we are going to act responsibly, and not simply direct the trial developers to their desks and ask them to “go for it”.

Side Effects

  • Higher pressure for the team
  • More difficult to act on other priority tasks

Developer Productivity Curve (One Week is Not Enough)

From my experience of on-boarding new developers, it typically takes 4-8 weeks for a developer to become productive and effective in a new work environment.

According to Weebly, candidates are assigned a project that is small enough to do in a week, but still resembles what the candidate would be doing if hired. It sounds great if it works, but for many organisations this is unfeasible, for instance:

  • There are no small projects to assign, unless they are invented
  • Navigating documentation and source code would take days, if not weeks

Either way, with one week of trial, the candidate is unlikely to have enough time to contribute as well as to be integrated into the team culture.

Side Effects

  • Higher chance of misjudging the candidate’s ability and productivity
  • Significantly higher chance of creating solutions misaligned with the team and/or the organisation
  • Higher maintenance cost should the team decide to keep the solutions created

66% Hire Rate Suggests Deeper Hiring Issue

At the end of their blog post, Weebly writes:

Our hire rate out of trial week is around 66%, which feels like the right level.

I respectfully disagree. A 66% hire rate from the trial week is a 34% failure rate on the pre-trial week recruitment process, and this is significant.

Which brings us to…

More Effective Ways to Vet a Candidate

Where I work, we have a simple, three-step recruitment process:

  1. A small and fun code challenge, completed in your own time and at your own pace. The code challenge usually takes 2-4 hours.
  2. A chat with our developers and founders at our office, optionally done via video chat. This usually takes an hour or so.
  3. A pairing session, which usually takes 30-60 minutes.

Steps 2 and 3 are sometimes swapped. We also check out the candidate's GitHub account if available, and their past projects if public.

In the code challenge we vet the candidate’s problem-solving ability, software design sense, code quality, code style and ethics (it’s easy to tell whether they cheated).

During the chat we vet the candidate's project experience, depth of knowledge, breadth of knowledge, communication skills and culture fit.

In the pairing session we vet the candidate’s development practice, thought process and the ability to articulate.

By the end of the three steps we are usually pretty confident about a +1 or -1 on hiring the candidate. If we aren't, it's a -1.

But hold on, didn’t I mention one week is not enough for a candidate to be productive and effective? Yes! And that’s why most places have a three-month probation.

The difference between the long probation period and the short trial period is not only in duration, but more importantly in commitment. In my opinion, only when both parties are committed can you achieve great results.

So, let’s hear your say, what do you think? :)

Poll: Trial Week, Yay or Nay?


Writing Sensible Tests for Happiness

26 August 2013 22:11
Writing good, sensible tests is hard. As a Rubyist, I feel lucky to be part of a community that embraces tests. Though at the same time, I have come across too many projects that suffered from not having sensible tests.

What are Sensible Tests?

There often isn't a silver bullet when it comes to software development. Technical stuff aside, many things contribute to the solution to a given problem - the team, the project and the business, to name a few. This article does not attempt to present any insights into the best practices for testing; rather, it collects a few tips I believe would benefit those who are not yet comfortable with writing tests.

To me, sensible tests often have the following characteristics:

  • they do not replicate implementation details;
  • they do not provide a false sense of security;
  • they run reasonably quickly;
  • they do not slow down development significantly;
  • they guide the programmer towards a better architecture;
  • and they do not make you sigh every time you want to modify them.

Art and Science, TDD or Not TDD

Just like writing production code, writing tests is also a combined form of art and science. It takes not only experience, but also intuition to write sensible tests. You have to remember that not all projects and programmers are equal - take what you get, practise, and reflect on your findings.

Many times I have come across seasoned programmers practising TDD, only to find themselves cornered into a bad design that ultimately had to be thrown away. TDD does not save you from writing bad code. This article is not about TDD though - it's about testing in general.

I am most comfortable with using RSpec, FactoryGirl, Capybara and Turnip, so I’m going to use these tools in the code. The principles however apply to any testing framework.

Test as Little as Possible to Reach a Given Level of Confidence

Kent Beck, the inventor (or more correctly, ‘rediscoverer’) of TDD once said:

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence.

I used to prefer testing almost everything, but over recent years I have found myself increasingly looking for the key areas of the system that need test coverage the most. Typically, our systems would have:

  • unit and functional tests for model behaviours
  • unit and functional tests for services
  • integration tests for controller actions
  • request tests for API endpoints
  • isolated JavaScript tests
  • high level integration/acceptance tests in Gherkin

Model and service level tests are arguably the most important ones so we make sure we have really good test coverage for those. For controller tests we rely heavily on reusable production and test code for maintainability and sanity. For API endpoints we mostly test presented data structure - as business logic and data integrity should have been covered in model, service and controller layers. Isolated JavaScript tests take care of both presentational business logic and tricky UI tasks. And finally, acceptance tests handle happy-path user interactions.

Do Not Test Framework and Library Code

Writing application-specific business logic is difficult enough, you really should not test functionalities provided by the framework or libraries. Below is an example of such bad tests:

describe ApprovalStakeholder do
  it { should belong_to(:approval) }
  it { should_not validate_presence_of(:approval) }
end

Similar to how you would write useful comments - i.e. describe why instead of what - these tests should be replaced by tests that cover actual functionality. For instance, the reason why an ApprovalStakeholder doesn't need an Approval to be present should be demonstrated in the tests:

shared_examples_for "non-approval specific stakeholder" do
  its(:action_that_does_not_care_about_approval) { should be_true }
end

describe ApprovalStakeholder do
  let(:approval) { create(:approval) }
  let(:user) { create(:user) }
  let(:role) { create(:role) }

  subject do
    build(:approval_stakeholder,
      :user_id => user.id,
      :role_id => role.id
    )
  end

  context "with an approval" do
    before { subject.approval = approval }

    it_behaves_like "non-approval specific stakeholder"

    its(:action_that_does_care_about_approval) { should be_true }
  end

  context "without an approval" do
    it_behaves_like "non-approval specific stakeholder"

    its(:action_that_does_care_about_approval) { should be_false }
  end
end

Ensure What You are Testing Makes Sense

The test case below showcases the original developer's lack of attention to designing a functional and secure system. It actually asserts that the reference keys for the ApprovalStakeholder object are mass assignable, which is a recipe for disaster.

describe ApprovalStakeholder do
  it { should allow_mass_assignment_of(:user_id) }
  it { should allow_mass_assignment_of(:role_id) }
end

De-Duplicate Test Cases

Looking at the example below, the first thing you’d notice is the amount of duplication.

describe ApprovalStakeholder do
  it "#traveller" do
    stakeholder = create(:approval_stakeholder,
      :approval => approval,
      :user_id => traveller.id
    )
    stakeholder.stub(:user).and_return(traveller)
    approval.stub(:stakeholders_as).and_return([stakeholder])

    approval.traveller.should == traveller
  end

  it "#authoriser" do
    stakeholder = create(:approval_stakeholder,
      :approval => approval,
      :user_id => authoriser.id
    )
    stakeholder.stub(:user).and_return(authoriser)
    approval.stub(:stakeholders_as).and_return([stakeholder])

    approval.authoriser.should == authoriser
  end
end

It's true that tests act as a form of specification and therefore should be optimised for clarity; in this case, however, we can still maintain the clarity with significantly reduced duplication:

describe ApprovalStakeholder do
  let(:stakeholder) do
    create(:approval_stakeholder,
      :approval => approval,
      :user_id => user.id
    )
  end

  subject { approval }

  before do
    stakeholder.stub(:user).and_return(user)
    approval.stub(:stakeholders_as).and_return([stakeholder])
  end

  describe "#traveller" do
    let(:user) { traveller }

    its(:traveller) { should == traveller }
  end

  describe "#authoriser" do
    let(:user) { authoriser }

    its(:authoriser) { should == authoriser }
  end
end

Do Not Replicate Implementation Details

I am often surprised to see many seasoned developers “enjoy” writing tests that essentially replicate the production code logic without much benefit. See below:

describe ApprovalStakeholder do
  it "references a user" do
    approval_stakeholder = build :approval_stakeholder, :user_id => 1
    User.should_receive(:find).with(1)
    approval_stakeholder.user
  end

  it "references a role" do
    approval_stakeholder = build :approval_stakeholder, :role_id => 1
    Role.should_receive(:find).with(1)
    approval_stakeholder.role
  end
end

Rather than creating noisy tests, tests with actual assertions seem much more meaningful and readable:

describe ApprovalStakeholder do
  subject do
    build(:approval_stakeholder,
      :approval => approval,
      :user_id => user.id,
      :role_id => role.id,
    )
  end

  its(:name) { should == "#{user.first_name} #{user.last_name}" }
  its(:role_name) { should == role.name }
end

Reduce the Reliance on Mocks and Stubs

This is a difficult and often-debated subject. In my experience, even though having too many mocks and stubs speeds up the test suite, it usually leaves too many holes in your tests and makes the suite less accurate and effective. Fortunately, by using more service objects (described below), mocking and stubbing become more manageable, as you use them mostly on external objects and interfaces.
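To illustrate the idea in plain, framework-free Ruby (the service and gateway below are invented examples, not from any real project): the only double is the external gateway, while the service under test runs its real code.

```ruby
# A service that does one thing: charge an order via an external gateway.
class ChargeOrder
  def initialize(gateway)
    @gateway = gateway
  end

  def call(order)
    return :nothing_to_charge if order[:total].zero?
    @gateway.charge(order[:total]) ? :charged : :declined
  end
end

# The only double is the external interface - the service logic stays real.
class FakeGateway
  attr_reader :charged_amounts

  def initialize(succeed: true)
    @succeed = succeed
    @charged_amounts = []
  end

  def charge(amount)
    @charged_amounts << amount
    @succeed
  end
end

gateway = FakeGateway.new
service = ChargeOrder.new(gateway)

service.call(total: 100) # => :charged
service.call(total: 0)   # => :nothing_to_charge
gateway.charged_amounts  # => [100]
```

Substituting only the boundary you cannot control keeps the test honest about the service's own behaviour, which is exactly the hole that over-mocking tends to open up.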

Take Apart the System, One Service at a Time

If you're a Rails developer, you are already familiar with MVC. But just relying on MVC to hold your application architecture is probably not going to be sufficient for an average modern-day web application. Many people like service-oriented architecture, and so do I.

Services are unassociated, loosely coupled units of functionality that are self-contained.

In my experience, as long as you are disciplined in having services do one and only one thing really well, testing becomes much easier.

For instance, we have a Bouncer service that is responsible for safeguarding resources - ensuring read-only attributes don’t get overridden.

module Services
  class Bouncer
    def self.guard(resource, options = {})
      if options[:existing_resource]
        resource.readonly_attributes.each do |attr_name|
          resource.send("#{attr_name}=", options[:existing_resource].send(attr_name))
        end
      end

      resource
    end
  end
end

The corresponding tests for this service are both fast and self-contained:

describe Services::Bouncer do
  class BouncerDude
    include Mos::Entity

    set_readonly_attributes :age, :gender

    attribute :name
    attribute :age
    attribute :gender
  end

  let(:resource) { BouncerDude.new(name: 'Penny', age: 28, gender: 'female') }
  let(:existing_resource) { BouncerDude.new(name: 'Sheldon Cooper', age: 34, gender: 'male') }
  subject { Services::Bouncer.guard(resource, existing_resource: existing_resource) }

  describe "#guard" do
    its(:name) { should == 'Penny' }
    its(:age) { should == 34 }
    its(:gender) { should == 'male' }
  end
end

Recognise Common Patterns and Refactor Them into Services

One of the reasons why service-oriented architecture is so popular is because things are broken down into smaller, more manageable and more testable pieces. It is especially helpful for TDD practitioners as it significantly reduces the amount of coupling between your production code and your tests due to having simpler internals per test subject.

Take a look at the below example, which is hard to read, hard to test and error-prone:

module ApplicationHelper
  def branch_logo_options(branch)
    BranchLogo.where(branch_id: branch.id).map { |logo| [logo.file, logo.id] }
  end

  def branch_options(agency)
    BranchRepository.find(agency_id: agency.id, archived: false).map do |b|
      [b.name, b.id]
    end
  end

  def agency_user_options(agency, filtered_users)
    filtered_user_ids = filtered_users.compact.map(&:id) || []
    AgencyUserRepository.find(agency_id: agency.id, archived: false).select do |u|
      !filtered_user_ids.include?(u.id)
    end.map { |u| [u.full_name, u.id] }
  end

  def current_agency_user_options(filtered_users = [])
    agency_user_options(current_agency, filtered_users)
  end

  def current_agency_trust_bank_account_options
    BankAccountRepository.find(
      agency_id: current_agency.id,
      archived: false,
      account_type: BankAccount::TRUST_ACCOUNT).map do |b|
      [b.account_name, b.id]
    end
  end

  def code_options_for(klass)
    klass.all.map { |cc| ["#{cc.code} - #{cc.name}", cc.id] }.sort
  end
end

Let’s refactor it into something more manageable, by introducing a service ShowGirl for fetching and presenting data collections:

module CollectionOptionsHelper
  def branch_logo_options(branch)
    Services::ShowGirl.present(branch, from: BranchLogo, show: :file)
  end

  def branch_options
    Services::ShowGirl.present(current_agency, from: BranchRepository)
  end

  def consultant_options(excluded_users = [])
    Services::ShowGirl.present(
      current_agency,
      from: AgencyUserRepository,
      show: :full_name
    ) do |collection|
      collection.reject { |user| user.id.in?(Array.wrap(excluded_users).map(&:id)) }
    end
  end

  def trust_bank_account_options
    Services::ShowGirl.present(
      current_agency,
      from: BankAccountRepository,
      show: :account_name,
      filters: { account_type: BankAccount::TRUST_ACCOUNT },
    )
  end

  def code_options_for(name)
    Services::ShowGirl.present(
      current_agency,
      from: Admin::Configurations::Essential.descendants.find { |d| d.name =~ /::#{name.to_s.classify}/ },
      show: -> (item) { "#{item.code} - #{item.name}" }
    )
  end
end

Better yet, we can clean it up even further by introducing another service, BusBoy, for just serving the data, leaving ShowGirl to only present the data:

module CollectionOptionsHelper
  def branch_logo_options(branch)
    Services::ShowGirl.present(
      Services::BusBoy.serve(:branch_logos, branch: branch)
    )
  end

  def branch_options
    Services::ShowGirl.present(
      Services::BusBoy.serve(:branches, agency: current_agency)
    )
  end

  def consultant_options(excluded_users = [])
    Services::ShowGirl.present(
      Services::BusBoy.serve(:consultants, agency: current_agency),
      show: :full_name
    ) do |collection|
      collection.reject { |user| user.id.in?(Array.wrap(excluded_users).map(&:id)) }
    end
  end

  def trust_bank_account_options
    Services::ShowGirl.present(
      Services::BusBoy.serve(:bank_accounts,
        agency: current_agency, account_type: BankAccount::TRUST_ACCOUNT
      ),
      show: :account_name
    )
  end

  def code_options_for(name, options = {})
    Services::ShowGirl.present(
      Services::BusBoy.serve(name, agency: current_agency), options
    )
  end
end

Basic Controller CRUD Actions

In one of our projects we have lots and lots of forms. Consequently we have lots and lots of CRUD actions. In order to keep our sanity as well as to make basic CRUD controllers maintainable, we have a custom DSL to make CRUD actions portable and testable:

module Profiles
  class TravellersController < BaseController
    authorize_resource class: Traveller

    datamappify_resources entity: Traveller,
                          repository: TravellerRepository,
                          filter_by: :agency_id,
                          filter_value: -> { current_user.agency_id }
  end
end

Most of our controller tests look like this:

require 'spec_helper'

describe Profiles::AccountsController do
  let(:existing_resources) { [] }
  let(:create_resource) { Mos::Data.create_account }
  let(:create_resources) { Mos::Data.create_accounts(2) }
  let(:a_resource) { assigns(:resource) }
  let(:invalid_param) { { name: '' } }
  let(:params_key) { :account }
  let(:redirect_path) { profiles_accounts_path }

  it_behaves_like 'datamappify resources controller'
  it_behaves_like 'searchable resources controller', :name,
                                                      :profile_id,
                                                      :branch_id,
                                                      :activated

  describe "permission" do
    context 'as a manager' do
      before do
        sign_in_as :manager
      end

      it_behaves_like 'with write access'
      it_behaves_like 'with read access'
      it_behaves_like 'with index access'
    end

    context 'as a consultant' do
      before do
        sign_in_as :consultant
      end

      it_behaves_like 'without write access'
      it_behaves_like 'with read access'
      it_behaves_like 'with index access'
    end
  end
end

API Endpoint Tests

One of our projects at work is an API service that is essential to our platform. Naturally, we not only need to test the models, services and controllers, we also need to ensure the API endpoints do what they are supposed to do - mostly exposing the correct data structure.

During the early stage of development, I came up with ApiTaster - a super useful gem for visually testing our Rails application's APIs. Later on, as we continued to grow our API endpoints, we started utilising ApiTaster for our automated test suite too.

In essence, we have one API spec file responsible for describing which endpoints are tested and which are missing, according to the information given by ApiTaster:

describe "API" do
  load 'db/seeds.rb'
  load 'spec/api_endpoints.rb'

  ApiTaster::Route.map_routes

  ApiTaster::Route.defined_definitions.each do |route|
    it "api endpoint #{route[:verb]} #{route[:path]}" do
      params = ApiTaster::Route.params_for(route).first
      expectation = ApiTaster::Route.metadata_for(route)[:expectation]
      setup = ApiTaster::Route.metadata_for(route)[:setup]
      verb = route[:verb].downcase
      path = parse_path_with_url_params(route[:path], params[:url_params])

      setup.call if setup

      send verb, path, params[:post_params]

      response.body.should match_json_expression(expectation)
    end
  end

  # warn about undefined definitions
  ApiTaster::Route.missing_definitions.each do |route|
    pending "api endpoint #{route[:verb]} #{route[:path]}"
  end
end

Then, we have a bunch of endpoint test files to do the actual testing, like this:

resource_response = ResponseHash[
  :response => {
    :id => Integer,
    :name => String,
    :token => String
  }
]

get '/:version/company', {}, {
  :expectation => resource_response
}

post '/:version/companies', {
  :model => FactoryGirl.attributes_for(:company)
}, {
  :expectation => resource_response
}

put '/:version/companies/:id', {
  :id => 1,
  :model => { :name => 'New Company' }
}, {
  :expectation => resource_response.with(:name => 'New Company')
}

delete '/:version/companies/:id', {
  :id => 1
}, {
  :expectation => resource_response
}

Notice that for API endpoint tests we don't test the business logic or data integrity - these should be tested in the models, services and controllers. What we do test is that the correct endpoints are exposed, the correct parameters are accepted and the correct data structures are returned.

Isolated JavaScript Tests

Many developers prefer to rely on their integration test suite to do JavaScript / UI testing. This approach is fine until you start making lots of front-end changes and constantly need to pinpoint the relevant feature spec.

Having an isolated JavaScript test suite (which should be run as part of your continuous integration process) is extremely beneficial and often saves debugging time.

I like Mocha so we use Konacha in our Rails app. Though Mocha with Chai is really not that different to Jasmine.

Custom JavaScript behaviour is obviously a good candidate for isolated testing:

#= require spec_helper

describe "form toggle", ->
  beforeEach ->
    $("body").append(JST["templates/form/toggle"])

  it "hides the collapsible field by default", ->
    $(".control-group.branch_deactivation_date").hasClass('in').should.be.false

  it "does not override if there is already a value", ->
    value = $("input#agency_deactivation_date").val()
    $("input#agency_activated").click()
    $("input#agency_deactivation_date").val().should.equal(value)

Sometimes it's also useful to ensure library code is initiated and triggered correctly, if you have other custom JS interacting with it:

#= require spec_helper
#= require bootstrap-datepicker

describe "form dates", ->
  beforeEach ->
    @dateFormat = 'DD/MM/YYYY'
    $("body").append(JST["templates/form/dates"](dateFormat: @dateFormat))

  it "has a placeholder", ->
    $("input").attr("placeholder").should.equal(@dateFormat)

  it "defaults to today's date", ->
    $("input#empty").focus()
    $("input#empty").focus()
    $("input#empty").val().should.equal(moment().format(@dateFormat))

  it "does not override if there is already a value", ->
    value = $("input#filled").val()
    $("input#filled").focus()
    $("input#filled").val().should.equal(value)

“Real” UI Tests

Isolated JavaScript tests are super fast and useful. However, there are times when having pure JavaScript tests simply isn’t enough, due to the complicated nature of DOM interaction and template rendering.

A while ago our calendar widget was broken due to a production and UAT environment issue that was not picked up by our JavaScript test suite. Since then we started adding dedicated UI tests in our acceptance test suite (we use Turnip):

@ui
Feature: UI
  Background:
    Given I am signed in
      And I go to agency consultants page
      And I click on "Add New Consultant"

  Scenario: Calendar
      When I click "#agency_user_start_date"
      And I click ".day.active" within ".datepicker"
      Then I should see today as part of the date field

Effective Acceptance Tests

Writing acceptance tests - also known to many Rubyists as “Cucumber tests”, is a double-edged sword - it’s extremely useful, but very few developers can write good, maintainable Gherkin-style acceptance tests.

Here’s an example of a badly written feature spec with too much implementation details and noise:

Feature: Session
  Background:
    Given I visit "/"
      And there is a user "admin" "password"

  Scenario: Sign in with valid credentials
      When I fill in "Username" with "admin"
      And I fill in "Password" with "password"
      And I click "Sign In"
      Then I should be on "/dashboard"

  Scenario: Sign in with invalid credentials
      When I fill in "Username" with "admin"
      And I fill in "Password" with "invalid_password"
      And I click "Sign In"
      Then I should not be on "/dashboard"

  Scenario: Sign out
      When I fill in "Username" with "admin"
      And I fill in "Password" with "password"
      And I click "Sign In"
      And I click "Sign Out"
      Then I should be on "/sign_in"

A much cleaner version with only high level, descriptive steps:

Feature: Session
  Background:
    Given I am on the homepage
      And there is a user "admin" with password "password"

  Scenario: Sign in with valid credentials
      When I sign in as "admin" with password "password"
      Then I should be signed in

  Scenario: Sign in with invalid credentials
      When I sign in as "admin" with password "invalid_password"
      Then I should not be signed in

  Scenario: Sign out
    Given I am signed in as "admin" with password "password"
      When I sign out
      Then I should be signed out
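The clean version works because each high-level step hides the low-level actions behind a single definition. The sketch below illustrates that idea in plain Ruby - FakeApp and SessionSteps are hypothetical stand-ins for illustration, not actual Turnip or Capybara code:

```ruby
# A toy "application" that knows which path you are on after signing in.
class FakeApp
  attr_reader :current_path

  def initialize(users)
    @users = users
    @current_path = "/sign_in"
  end

  def sign_in(username, password)
    @current_path = @users[username] == password ? "/dashboard" : "/sign_in"
  end
end

module SessionSteps
  # One high-level step wraps the low-level actions (fill in username,
  # fill in password, click "Sign In") - so a UI change only touches here.
  def sign_in_as(app, username, password)
    app.sign_in(username, password)
  end

  def signed_in?(app)
    app.current_path == "/dashboard"
  end
end

include SessionSteps
app = FakeApp.new("admin" => "password")
sign_in_as(app, "admin", "password")
signed_in?(app) #=> true
```

In a real suite the equivalent step definition would wrap the fill-in and click actions with Capybara, but the principle is the same: scenarios read at the level of intent, and implementation detail lives in one place.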

Final Thoughts

Writing good, sensible tests is hard. These examples and tips are by no means the silver bullet, and you might actually find some of them counter-intuitive in your particular situation. So again, take what you get, practise, and reflect on your findings. For Happiness! :)

Do you have any tips to share? If so, please feel free to leave a comment!


Gotchas in the Ruby Sequel Gem

21 August 2013 20:39
I haven’t really used Sequel much, so I am definitely a newbie. However, after days and nights of frustration, endless debugging and some search-fu during the development of Datamappify, I have finally arrived at the conclusion that Sequel is a capable library, as long as you are aware of the gotchas.

Gotcha 1: Always use “select”/“select_all”, or your data records will mysteriously have wrong IDs!

In ActiveRecord, joining an associated model couldn’t be simpler:

Post.joins(:author)

In Sequel, despite having a similar API for models to declare associations and their corresponding primary and foreign keys, you cannot do a join without specifying the keys:

Not good:

Post.join(:authors)
# or
Post.join(Author)

Better:

Post.join(:authors, :id => :author_id)

You would think the version above works - it doesn’t. Even worse, the above example will give you incorrect data - the IDs of the Post records will now contain the IDs from their corresponding Author records! This is because upon a join, Sequel merges attributes from both models into a single hash.
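The merge behaviour can be illustrated with plain Ruby hashes (no Sequel required; the rows and values below are made up for illustration):

```ruby
# When the joined rows' attributes are merged into a single hash,
# the author row's :id overwrites the post row's :id,
# because it is merged in later.
post_row   = { :id => 1, :title => "Hello world", :author_id => 42 }
author_row = { :id => 42, :name => "Fred" }

merged = post_row.merge(author_row)
merged[:id] #=> 42 - the author's ID, not the post's!
```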

The correct version:

Post.join(:authors, :id => :author_id).select(:posts__id, :posts__title, :posts__body)
# or
Post.join(:authors, :id => :author_id).select_all(:posts)

Gotcha 2: Always call “all” at the end of the chain, or the chain will present data in a different format.

In ActiveRecord, all of the below examples return an ActiveRecord::Relation collection:

Post.where(:title => 'Hello world')
Post.joins(:author)
Post.includes(:author)

And indeed, calling first on any of them returns an object of class Post (assuming the result collection is not empty).

Post.where(:title => 'Hello world').first.class #=> Post
Post.joins(:author).first.class #=> Post
Post.includes(:author).first.class #=> Post

In Sequel, the below examples all return a Sequel::Dataset collection:

Post.where(:title => 'Hello world')
Post.eager(:author)
Post.eager_graph(:author)

But let’s see what we get from calling first.class on them:

Post.where(:title => 'Hello world').first.class #=> Post
Post.eager(:author).first.class #=> Post
Post.eager_graph(:author).first.class #=> Hash

Huh? The last one is a Hash? It turns out that if you call all at the end of the chain to convert the result to an Array, the returned collections are consistent:

Post.where(:title => 'Hello world').all.first.class #=> Post
Post.eager(:author).all.first.class #=> Post
Post.eager_graph(:author).all.first.class #=> Post


The Future of Computing, The Future of Computer Programmers - An Interview with Yukihiro "Matz" Matsumoto

29 June 2013 20:04
A while ago I translated an interview with Matz done by a Chinese book publisher. The interview and the translation were well received, so this time I am translating another interview with Matz, done by Ito, the editor-in-chief from Japanese website Engineer Type. Since I don’t read Japanese, the translation is based on Turing Book’s Chinese translation.

The Chinese translator has done a great job translating the interview, but there are still many words and sentences that lack sufficient context and are therefore difficult to grasp. I have put in many hours translating the text as well as doing research to ensure the final article is readable. I hope you will enjoy it! :)

Ito: Thank you for doing an interview with us, Matz. I have just finished reading your latest book The Future of Computing, could you perhaps talk about the future of programming and software programmers in general?

Matz: Hmmm, this is difficult to answer… but thanks for reading my book!

Ito: In the book you’ve shared your thoughts on the past, present and future of different programming languages and software design patterns. Would you like to talk about the current state of the software industry? And is there going to be another paradigm shift in software development?

Matz: As discussed in my book, predicting the future of a high tech industry such as computing is not particularly difficult. I believe in the foreseeable future the computing industry is still going to advance based on Moore’s law. Although it is possible that in the next year or two quantum computers will become a practical reality - in that case it will change everything! *chuckles* On a serious note, according to Moore’s law, the cost of computing will decrease and the performance and capacity of computing will increase - this basic principle is unlikely to change. One thing I did notice in recent years is that due to the advancement in computer hardware, the software industry is subtly changing too.

Software Development in the Era of Multi-core and Cloud Computing

Matz: It was about twenty years ago (in 1993) I invented the Ruby programming language, yet it still runs surprisingly well on modern computers.

What this means is that in the past twenty years the computing environment which the software runs on did not see any fundamental changes. In recent years, we started seeing computing power being shifted from having higher CPU frequencies to being distributed over more CPU cores. And that means software needs to move in that direction too.

Matz: Software has not seen major changes for years.

Ito: And this is covered in the last chapter from the book, right?

Matz: Yes. Similarly to multi-core, cloud computing is advancing in the same direction. The future of computing is all about utilising multiple CPUs or computers effectively.

Ito: So, how does that change software development?

Matz: In the past ten years or so we have been seeing more and more things happen on the Internet, and the Internet is an amazing application platform for extension and distribution. Compared to software engineers working on mainframe computers, web developers are naturally more familiar with the concepts of multi-core and cloud computing.

Ito: After interviewing many web and mobile startups, we realised that the number of software engineers working in PaaS and cloud computing has been increasing rapidly.

Matz: Absolutely. And I do believe that “not needing to purchase and own dedicated hardware” is going to be the mainstream. The idea and thought process of “not owning” is not only important for software development, but also important for business development.

“Owning” Becoming a Liability, Not Asset

Matz: In the past, “owning” was seen as the source of vitality of a corporation - those who owned high performance mainframe computers were able to do business transactions in high volume, whereas those who did not were unable to compete.

These days the landscape is changing - corporations that do not “own” expensive hardware have more of a competitive edge. Let’s say it takes five years to break even on an expensive investment in servers; during that time those machines are put to use to realise their full potential and justify their cost. It may appear that the business has saved costs, but it has not, simply because the value of the hardware decreases as each day passes by.

To put it simply, we are now entering the era of “owning” being a liability rather than an asset. If you had the most advanced hardware, software engineers were able to develop efficiently. On the contrary, if you didn’t, then you might want to get used to the hours-long wait for the code to compile. *chuckles* The rise of cloud computing platforms like Heroku is making “owning” a thing of the past.

Also, “not owning” has several advantages on the development as well as the commercial front. For instance, it allows many startups to rise. In the past, in order to start a new business you would need capital for purchasing servers and/or renting servers in a data centre. These days, to get started on a platform like Heroku couldn’t be easier, for example on Heroku you could start with just one dyno for free. This new way of developing software significantly reduces costs and risks.

Years ago I read an essay called Ramen Profitable by Y Combinator’s founder Paul Graham - the flexibility and agility of “not owning” contribute a great deal to ramen profitability. And this trend has now grown beyond being relevant just to startups; in fact, in recent years many large corporations have begun adopting this approach too.

In the United States, corporations like Disney and Best Buy are indeed utilising Ruby, Rails and Heroku to rapidly grow their internal infrastructure in a cost-effective fashion. What were once considered competitive edges by venture capitalists, like “rapid development” and “development flexibility”, are now also possible for these giant corporations.

Ito: What about the giant corporations in Japan?

Matz: I have never worked in a big corporation so I can’t tell where they are heading. People have been optimistic, though as an observer I am concerned.

Real Benefits of Innovation in Cloud Computing Not to Be Overlooked

Ito: What makes you concerned about software development?

Matz: The traditional approach of developing software is still the norm. For example, some corporations, even though they use Amazon Web Services, still rely on system administrators to handle their infrastructure. It is too common to see a software development team consisting of over a dozen people.

This in my opinion defeats the purpose and forfeits the benefits of “not owning” servers. There are simply too many of these case studies whereby only the surface of cloud computing is explored and understood.

I have to say I am disappointed by some of the so-called “private clouds” owned by large corporations. The advantage of cloud computing is to utilise multiple computers in the cloud, but those private clouds are essentially their internal data centres. Isn’t that the same as owning a bunch of servers?

Matz: Many companies barely scratch the surface of emerging technologies.

Ito: Indeed, all too often the real benefits of emerging technologies are overlooked or misunderstood. Anything else that makes you concerned about the future?

Matz: Nowadays the speed of development is always a priority, from big B2B development projects to small projects in many startups. Yahoo! Japan even coined the term “爆速化” (explosively high speed) to indicate the importance of development speed in ever more competitive and engaging markets.

Looking at things this way, those so-called “system integrators” are becoming obsolete. Should they just give up what they do or continue? I don’t know, but I do know that the gap between them and engineers who have the capability and skills to create real value is increasing.

Career Longevity of Software Engineering

Ito: Who are those engineers who have the capability and skills to create real value?

Matz: The ones who would put in effort to create software or systems from a prototype to a final product. And this has nothing to do with whether they work in web or system integration, or whether it’s consumer oriented or corporate oriented.

Matz: Unbalanced skill combination leads to a gloomy future.

Ito: Do you mean the engineers who are capable from design to implementation?

Matz: Yes. Speaking of which, software developers have to know more than just system design - they cannot survive without knowing how to code. Just like in life, you cannot survive without being down to earth. *chuckles*

Despite the fact that it is pointless to have someone doing only the system design, and not the development as well, the System Integration industry is still going strong in Japan - and it is in fact an industry with a high profit margin.

Even if the system designers came up with questionable specifications, or if the programmers were sloppy so the software was terrible to use, users would still use it despite whining. Flaws are easily glossed over under high profit margins.

But just as discussed before, as development speed increases, profit margins would undoubtedly become smaller. Flaws are therefore harder to gloss over.

In my opinion, if things don’t change, those run-of-the-mill software engineers might not survive in five years. Worse, the junior to mid-level to senior programmer corporate ladder is going to collapse.

Say, you wouldn’t want to start a VHS rental shop when DVDs were on the rise, would you?

Difference Between Those Who Control Their Destiny to Those Who Don’t

Ito: Do you have any advice for those who do not wish to be in a “gloomy future”? What can they do?

Matz: To innovate and to create new things, I suppose.

It’s not all doom and gloom. Even though many ageing technologies have been or are being replaced by the web, jobs will not disappear overnight. I think many software developers will still be employed in those jobs.

Having said that, it is always good to create new things or even invent new programming languages.

Ito: What are these “new things”?

Matz: I see three types of new things.

First of all, new services. If you can create a new service, or a service that offers superior user experience - it would be an innovation.

Secondly, new technologies. To come up with technologies better than the existing ones - and this is what I have been doing.

mruby was released earlier this year on Github.

And thirdly, to invent new algorithms.

The three ways I mentioned differ in difficulty, but they share the same goal - to create something that doesn’t yet exist. Those who keep working on these kinds of challenges are the truly outstanding software engineers.

The ones who do not challenge themselves to create new things often fall behind - they learn a hip new language today and try a new web framework tomorrow, but still lack the foresight to invent and to improve.

Of course, it is important to learn and try new things, but if you see them as your ultimate goal then you will lose control of your destiny. I believe that the ones who do not get bogged down in every new trendy thing will ultimately be happier.

Software Development is a Punch to Deficiency

Ito: Here is a sharp question: being a follower rather than an inventor is always easier, and perhaps makes more money too. What makes you keep inventing?

Matz: My standard answer would be “because writing and running new programs make me happy”. But the real reason is because I don’t like deficiency.

Other people have different opinions and thought processes, so I would often come up with questions like “why was it done this way?” or “this will be too hard to use”.

Matz dislikes deficiency, so he invented the Ruby programming language.

Ito: True, but all products more or less reflect their producers’ preferences, right?

Matz: Absolutely, and I am not saying that this is bad or anything. I just hate to point fingers at other people’s preferences - if you don’t like something, make your own! This is a basic trait of a good software engineer, and is what makes open source sustainable.

In open source projects, all the source code is publicly available therefore it is very easy to see how a program is designed. As long as you have ideas on how to improve and optimise the design, you are welcome to do so.

Now it is an entirely different story for certain things in society. *chuckles* At least in software development, we can rely on our skills and knowledge to improve and to change. If it’s your own creation, it can be adjusted and adapted to suit ever changing needs.

This is the same for Ruby - I like programming languages, and more importantly I like improving programming languages myself, and that’s why I still work on Ruby to this day.

Software Development, One of the Rare Careers that Could Make a Change on Your Own

Matz talking about developer happiness, wearing his “Ruby City MATSUE“ polo shirt.

Matz: I think I have the right personality for developing software. Only the software industry can tolerate my carefreeness - am I too arrogant for saying that? *chuckles*

In all honesty, software development is one of the rare careers that could bring positive changes to the society on your own. It’s a wonderful occupation that brings happiness and fulfillment!

Ito: Many people predict the future of software based on theories, but Matz, you always talk about “happiness”.

Matz: That’s right. Because only you can control your own destiny. It doesn’t matter if you were told to do things in a certain way just because “Matz said so” - ultimately, I cannot be responsible for your destiny. You should make your own decisions.

I would still say things like “the future might look like this”, but these are just my personal opinions.

And this is the same even for today’s discussion - if someone does not agree with what Matz has said, he should follow his own decisions and the path he chooses.

Exploring the Future: “You” Are the Only Constant

Ito: Having read The Future of Computing, I remember you talked about the inception and development of various programming languages. But we all know that the IT industry is moving at a rapid pace, and it is difficult to rely on history to guide us through to the future. If multi-core and cloud computing are only just the beginning of a paradigm shift, why did you write about things that happened in the past?

Matz: Technologies progress just like a pendulum clock.

Matz: People see things differently - and I believe the IT industry is progressing in a manner similar to the swing motion of a pendulum clock.

As more and more new programming languages, techniques and frameworks pop up, software development related technologies are progressing whilst seeking balance.

So, how does “the most balanced from the past” become “the most balanced right now”? Think about how a pendulum clock swings and how technologies have emerged in the past - you could then roughly predict what would constitute “the most balanced in the future”.

Take “centralised computing vs distributed computing” as an example: in the past there was usually only one centralised mainframe computer; later on, to increase processing capability, commodity server farms were utilised; and now we are moving towards cloud computing.

There is no point in looking at one particular past event. If you want to predict a future technology, knowing what contributed to the balance behind a past technology’s rise and fall is going to help.

Human ability is one of the factors too - because we have limited capability as language designers, it is useful to look at what others have done to cater for our ability, and thereby improve and evolve the technology.

In the book I briefly talked about Dart and Go. As a programming language inventor I find it really fascinating to explore the thought processes behind those language designers. And it has helped me to gain a deeper understanding of human behaviour.

Ito: I was going to ask why it is so important to study the past, now I know.

Matz: I mentioned this in the beginning - computing has not seen major changes for years.

Programming languages invented over fifty years ago are still in use today, and Ruby has been around for twenty years now. This proves that computing is progressing more slowly than a lot of people believe.

On that note, there are many past cases whereby focus was put on what was cool and new without understanding why. Compared to those “follower” software developers, the ones who command and understand the principles and theories behind changes and progress have a much longer career longevity.

If you are a software developer who wants career longevity, please read The Future of Computing! *chuckles*

Ito: Thank you Matz for talking to us today!


SimpleCov: Test Coverage for Changed Files Only

13 November 2012 15:34
The other day a colleague asked whether or not it’s possible to have SimpleCov return a group that only contains uncommitted changes.

The answer is yes! After some digging around, we found the following way:

# in spec_helper.rb
SimpleCov.start 'rails' do
  add_group 'Changed' do |source_file|
    `git ls-files --exclude-standard --others \
      && git diff --name-only \
      && git diff --name-only --cached`.split("\n").detect do |filename|
      source_file.filename.ends_with?(filename)
    end
  end
end

Basically, use git ls-files --exclude-standard --others for untracked files, git diff --name-only for unstaged files, and git diff --name-only --cached for staged files.
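The matching logic itself can be sketched in plain Ruby, independent of SimpleCov and git (the file paths below are made up): each command emits one path per line, the combined output is split on newlines, and a source file counts as changed when its absolute path ends with one of those relative paths. (The original snippet uses ActiveSupport’s ends_with?; plain Ruby’s end_with? behaves the same.)

```ruby
# Simulated output of the three chained git commands (one path per line).
git_output = "app/models/user.rb\napp/services/billing.rb\n"
changed_files = git_output.split("\n")

# SimpleCov reports absolute paths; a file is "changed" when its
# absolute path ends with one of the git-reported relative paths.
source_path = "/home/dev/project/app/models/user.rb"
changed = changed_files.detect { |filename| source_path.end_with?(filename) }
changed #=> "app/models/user.rb"
```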


Fix OpenSSL Error on Mountain Lion (and RVM)

6 August 2012 21:25
Don’t you just hate it when you have a fresh install of Mountain Lion, RVM and some rubies - then all of a sudden you hit this OpenSSL::SSL::SSLError error message:

SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed

The fix is quite simple actually, all you need to do is to download a CA root certificate:

curl http://curl.haxx.se/ca/cacert.pem -o ~/.rvm/usr/ssl/cert.pem

And that’s it! Enjoy!


API Taster: Visually Test Rails Application API

2 July 2012 12:59
Like a lot of places, at Locomote we are building a platform that is API-based. As much as I like having comprehensive test suites, I often feel the need to manually test API endpoints to see exactly what the responses are.

Tools such as Postman solve part of the issue: they allow us to quickly test API endpoints without messing with cURL.

But as a lazy developer, I want more. ;)

I want something that:

  • automatically generates API endpoints from Rails routes definition
  • defines input params as easily as defining routes
  • has input params that can be shared with test factories

And so API Taster was born. Please check it out to see how you can use it.


[Rails Tip] Render views outside of Controllers or Views

20 June 2012 17:09
Ever wondered how you could utilise the render method outside the context of Rails controllers and views? If you wonder why anyone would do that - well, imagine you are building an awesome form builder, and you need to output and/or store rendered partials in the buffer. How do you do that?

For example, what if you want to do this in your view?

<%=raw Awesome::FormBuilder.new(some_options).html %>

You could do something like this:

module Awesome
  class FormBuilder < AbstractController::Base
    include AbstractController::Rendering
    include ActionView::Context
    include ActionView::Helpers::CaptureHelper

    # set the view paths from your engine or from your application root, i.e. Rails.root
    self.view_paths = Awesome::Engine.root.join('app/views')

    def initialize(params)
      flush_output_buffer
      @_buffer = ''
      add_to_buffer(params)
    end

    def html
      @_buffer
    end

    private

    def add_to_buffer(params)
      # some logic to add rendered content to @_buffer
    end
  end
end

The idea is to mix in the render method, whilst also ensuring the view buffer is correctly reset with flush_output_buffer.

Hope that helps. :)


On Hiring: How To Be a Non-Technical Co-Founder

28 January 2012 14:16
If you are looking at hiring developers, check out my article on this subject.

The goal or the dream of working on your own startup is always full of excitement. And apart from some rare cases such as Dropbox, you probably need one or more co-founders to work with you on The Next Big Thing ™.

Problem is, how do you (as a non-technical co-founder) find us? Or more specifically, how do you talk us into working with you instead of some other billion-dollar ideas?

To answer this question, we need to first ask, is there a billion-dollar idea? The short answer is: NO.

An idea is worthless.

Well, that’s not entirely true. I believe an idea, by itself, is worthless.

You will be surprised by the number of people contacting us and wanting to build a better Paypal or a better Amazon, without a concrete plan.

A more worthwhile idea should contain not only the end goal of the project, but also a plan to reach the goal. What should we ship in the Minimal Viable Product? What are our marketing channels? What metrics should we look at? How do we use social media to our advantage? etc, etc.

We Are Not Just Wozniak, Are You Like Steve Jobs?

Apple was pretty much started by Wozniak as the technical co-founder and Steve Jobs as the idea/business co-founder.

Let’s think about this for a second.

Steve Jobs did not just have ideas. Very early on, he persuaded Wozniak to produce and sell the Apple I so they would have some capital. Jobs was building the foundation. Without the foundation, there would have been no failure or success to come.

On the other hand, Wozniak had no intention to become an entrepreneur, he was happy to stay as an engineer even after the early Apple success. Nowadays though, most of us techies are much more ambitious than that.

Ideally, as the technical co-founder, I would be doing most of Wozniak’s work, and both you and I would be doing Steve Jobs’ work.

Drawing from my personal experience, as a technical person, there are a few key attributes I look for in a co-founder (technical or otherwise).

Technical Ability

“Excuse me? Aren’t you the technical co-founder? Why are you looking for my technical ability?” You ask.

That is right. Even if you are not a developer by trade, having a certain degree of understanding of technologies is still crucial to most modern, web-based projects.

There has never been a better time to start learning to code. Why not give CodeYear and Khan Academy a try?

We all learnt physics and chemistry in high school even though most of us don’t require the knowledge in our day-to-day lives. Let’s treat coding the same. Learning how to code will not only give you insights into how we solve problems, but will also close the communication gap between you and your technical co-founder.

Obsession

Wozniak is obsessed with electrical engineering and gadgets, Steve Jobs was obsessed with computer typefaces, good user experience and beautiful hardware.

What are you obsessed with?

Only when you are obsessed with something, can you answer questions like “what annoys you so much?”

As I wrote in an earlier article:

Inventions and innovations aren’t born out of happiness, they are born out of frustration, anger and sometimes, curiosity.

Curiosity

In web-based projects, it is surprisingly easy to have “what if …?” scenarios. Not sure which sign up form will have a higher conversion rate? Easy, just make two or more of them and run A/B tests.

Sometimes, as developers, we are so in the zone that we would keep on building stuff the way we envisioned. You will need to step in, pull us out, and say “hey, have you thought about …? What if …?”

Flickr as it is today would never have existed if the founders didn’t raise the question of “hey, how about doing just the photo uploading and sharing features?”

High Expectation

“This is shit!” “We can’t ship this!” If the product stinks, say so, and find ways to improve it. An MVP should always be half-polished, not half-arsed.

The original iPhone shipped without 3rd party native app support or multi-tasking - it wasn’t ideal, but they didn’t affect the core user experience. Now look at the PlayBook: it has the features most Android devices have, but the core user experience is so bad that the product never took off. If someone in RIM’s top management had the same obsession with user experience as Steve Jobs, the PlayBook would never have shipped in such a bad shape.

Passion

Are you in this for the money? Or for something else? Wealth is rarely a good motivation for creating great products.

“It can potentially generate massive revenue and profit” is a big red flag to me when someone pitches their projects.

These are the key attributes I look for. Things like people connections and experience are also important but not essential. What about you? Do you look for any particular attributes in your potential co-founder(s)?


The Lean Startup - The Book Every Entrepreneur Should Read

31 December 2011 16:18
The holiday period is the perfect time to gear up and learn a thing or two from the masters - and as it turned out, reading Eric Ries’ The Lean Startup was one of the most exciting and joyful things I did during the holidays.

In this book you won’t find long-winded and boring theories; instead, it is full of real-world use cases and practical advice.

If you are an entrepreneur, or if you are responsible for product development, I urge you to read this book if you haven’t already. :)


Blog Redesigned For 2012, And New Challenges Ahead

27 December 2011 02:28
After four days of sketching, designing and cutting up HTML/CSS, the new design (as you are seeing now) is finally live!

The new design is structurally similar to the old design, but with a fresh header and better use of space.

A new year warrants a fresh start. Apart from the redesign, I have also started heading up the development effort at SitePoint - dozens of interesting and challenging projects ahead!

2012 will be an awesome year! :)


Skype.com - A Quick Example of UX Failure

31 August 2011 16:46
Today I noticed that I don’t have Skype installed, so naturally, I went to Skype.com. Then I was presented with their homepage:

The problem? No “Download” action button above the fold, or below the fold for that matter. That is, quite frankly, shocking.

So I hovered on the “Get Skype” drop down menu and clicked on the one for Mac. On the new page I was presented with, I clicked on the “Download Skype” button. And then …

Oh snap! You’d have to either create an account or sign in before you could download Skype! Worse, by default it shows you the create-an-account section, and that section took a good 10 seconds to load for me the first time.

Is this some kind of prank from the Microsoft enterprise? *shakes head*
