Mar 08 2018

Lots of people talk about agile and scrum. I've heard many, many sides of what is now a huge, amorphous topic in software development: how disciplined is your practice of scrum?

I've heard the cons: the "Certified Scrum Master" (CSM) with no actual coding experience, who occupies a position on a team (or runs a team of developers) strictly to enforce the "rules" of scrum: obsessively forcing the process onto the team at each juncture, even when the team "doesn't want it."

The truth is, I've seen enough to have an opinion on this, and now feels like a good time to expound on it: "Agile" is a buzzword that once meant "changing as you go" but is now meaningless. I leave that word on the table and focus on scrum.

Why do I focus on scrum? Because it has a real lineage: Ken Schwaber and Jeff Sutherland pioneered the discipline, and great software developers like Martin Fowler and Kent Beck shaped the broader Agile movement around it. They did it for a reason.

"Scrum" is a funny word. It comes from rugby, where a scrum is a play that packs the team tightly together -- you can see rugby played to this day across parks in England.

I'm not sure what it is about this particular word, this particular software development practice, or the lineage between the two, but "scrum" is firmly where I, personally, have ended up professionally. That is, I support most of the concepts of "Agile," but I think the word is meaningless. "Scrum" is a software development practice that I know well, that has a lineage to some of the best minds in software development, and that consists of a set of practices that work. It's also not a buzzword.

Sure, the heart of it is working in short iterations, but I want to list a few key elements that are easily missed. This post is aimed at Product managers (or the "Product Owner," in scrum's more formal terminology).

First, I want to think about what scrum is really about, because I think a lot of the people who lose sight of it miss this central point: Scrum is about increasing the operational efficiency of your team. If you don't get that, or you aren't getting that from your practice of Scrum, you aren't doing it right.

Here are the three most significant anti-patterns, or problems, I see on software development teams. These are red flags that you aren't practicing scrum, or that you don't get it.

1. Single queued developers. (DON'T) Developers should focus on one story at a time. Period. If you are queuing up work for a single developer, you are doing it wrong. Each dev should finish their assignment and then come back to the team and take the next highest-priority item off the list. Ok, so I know lots of teams and devs work with a 'back burner' story in case they get 'blocked' on the first story. I get it, and I concede that for your team this may not be an absolute. Nonetheless, this principle gets at efficiency mechanics, and the blocker for the dev on the first story represents a cog in your wheel. (This is where the Kanban principle of pulling the 'STOP' chain on the assembly line comes in. Sorry, I threw in a little Kanban, but for now let's just pretend I didn't say that and focus on Scrum.)

2. Unclear definition of done (DOD). (BAD!) Ok, so I didn't make up "definition of done," but I like to emphasize it. Traditionally, "definition of done" means the criteria by which the team considers a story done. On teams I've been on and managed, the concept of "definition of done" expands to each step of the SDLC (software development lifecycle), in this order:

Design/Wireframe or Spec
QA or UAT (user acceptance)
Delivery to production

This is a specific order. I didn't make it up. I did combine some steps, so your team's process might look a little different. But the concept is the same: you work on each little piece until you're done, and then you start again. Wash-rinse-repeat, as they say.

However your team does it is fine: Scrum is not about rigid dictation of process. Scrum is about the concept that each player will hold themselves accountable (or be held accountable) for getting the ball (the story, in software development) to the next step. That's why DOD ("definition of done") can and should happen at each and every step. The business owner or CEO signs off on the concept; marketing and design sign off on the Design; you (the product manager) sign off on the Wireframe or Spec (with the developers' involvement, to make sure they can actually build what you want them to build). The QA person confirms the feature works, and you confirm it is acceptable to the customer or client, or will work for the end user. Finally, it is deployed to production. Get it? That's the SDLC. That's the whole thing. (Agile/scrum secret: scrum looks just like waterfall, except you do this whole five-step process in short, quick iterations. More on that in another blog post.)

Too much formal process? Ok, fine. Then invent an informal process. You can change the rules of scrum! That's fine. Nobody ever said scrum dictated a formal process, and whoever said that shouldn't be speaking about it. But please understand that this process works for a reason. If you're going to make your process more informal, know why the five steps are important and make sure your informal process at each step actually works.

Get everyone to agree to this, including the stakeholders, the designers, and of course the developers. Sometimes a designer or a stakeholder might say "nothing's ever done" or "everything is a moving target." Maybe it feels that way on your team. I can sympathize. But if the stakeholder or designer doesn't want a clear "sign-off" or "thumbs-up" or "it's ready now" step, that's a sign they haven't bought into the very heart of scrum. Go back to the beginning and start again.

(It's rare that you have to deal with a developer not wanting a clear definition of done -- good devs I've worked with are goal- and completion-oriented by nature. Sometimes jr. developers can drag their feet, let projects slip, and not finish things. As the PO, disciplining this kind of developer isn't your responsibility, but be on the lookout for it and understand that the DOD applies across the whole pipeline, at each and every step. That's why the two steps -- both of which you really can and should be involved with -- are so important.)

"Definition of done" (DOD) is such an important part of scrum. Really, I didn't make it up. Look it up; it's a thing. Lack of discipline around "done" is the single most significant team anti-pattern I see.

3. Developers complain about lack of prioritization. (BAD!) Ok, so this is very common on many software teams. You have lots and lots of ideas and stories. New ones come up all the time. You have meetings or stand-up, and developers say, "I'm looking at a list of unprioritized stories." If developers say this in your meetings, especially if they say it a lot, you're doing something wrong.

It is the Product owner/manager's job to represent the business's interests, or the customer's interests, in prioritizing the worklog, known in formal scrum as the "backlog." This is really, really important. Yes, developers do lots of things related to prioritization of code and code debt that you (the product manager) may not understand. And yes, sometimes developers welcome prioritization (or, as is often the case, re-prioritization), and other times they are very unwelcoming of it. That's software development. If you want to work with the best dev teams, these are the kinds of nuances you need to navigate as a successful Product manager.

Some other tips

Seven more tips for Product managers. Keep in mind these are written from a developer to a Product manager, so take them with a grain of salt. They are opinionated and based on my several years of working on scrum teams.

4. Don't skip user stories. This one seems obvious, but I'm amazed how many Product people are quick to skip formal User Stories. They're so easy! I'm pretty formal about them, preferring this exact style:

As a ___
When I ___
And ___
I should ___

There are a few variations on this style, and any of them is OK. But learn User Stories and use them. In strict scrum, you have only one User Story per ticket or "story." This often doesn't work if you have a 'ticket' or 'story' system, so I'll give you permission now: it's ok to have more than one User Story in a single ticket.

When I write them, I just string them together, one right after the other, sometimes lettering or numbering them.

As a User,
When I type the right username & password,
And click “Login”,
I should be successfully logged in

As a User,
When I type the wrong username & password,
And click “Login”,
I should see a message telling me my username & password are incorrect

That’s fine! So there’s a little repetition. For QA people, repetition is A-OK. By writing the stories up front you are setting up the QA step for success (more on that later).

Don't skip the user stories. Write them, get stakeholder buy-in on them, and believe in them.

5. Don’t skip wireframes. It’s amazing to me how many Product people try to just have ‘meetings’ with developers. I know some people are better verbal communicators, but your job as Product owner is to document, document, document those meetings.

Like User Stories, Wireframes drive the conversation about the software -- which is most of the work!

6. Do QA yourself. The best product people do QA themselves, even if there's a separate "QA specialist" on the team. Yes, Product & QA are distinct skillsets. Yes, on some small teams, they are the same person. Don't be one of those Product people who eschew doing QA. If you're not the best at it, learn to get better at it and QA the projects you are managing. It'll be better for the team and it'll make you a better Product manager.

7. Get exec, or 'stakeholder,' buy-in early, before the devs write a line of code. Really, don't skip this step. This is your primary job.

8. Think like a dev but don't think like a dev. This one is hard. Know what you think is possible with the technology you have, and separate that from the technology you think you could have. Understand there's a constant pay-it-now-or-pay-it-later tension in software development. Show just enough technical prowess without stepping on developers' toes.

9. Ask for what you need. Don't be afraid to ask the devs, but try to do so politely and without assuming you know everything. Just report the facts, ask for what you need, and offer any helpful information.

10. Don’t ever say “but it worked before” or “but it used to work” or anything sounding like that. This is never something a developer wants to hear. If you are in a position of software development management, don’t lean on this trope.

Yes, developers are responsible for regressions that happen on their code deploys. And yes, sometimes, when a developer deploys some code, it introduces a regression. But 95% of the time I hear these words from non-technical people, the issue is in fact not related whatsoever to anything any developer did. I also happen to think it is an unfortunate trope used by nontechnical people who fundamentally do not understand a concept we call software entropy. (Really, I'm not making this up!)

If you really think it "used to work" just before a code change, it's OK to report that in an evidence-based way when you report the bug to the developer. (As in, "The last successful one was just before the deploy; it may or may not be related.") That's totally cool. In fact, developers WANT you to give them as much information as possible. Just do it in a nice, evidence-based fashion. Use your words, use your screenshots, and throw in "may or may not be related." Go ahead, it'll work like magic. I promise.

(A really passive-aggressive version of this I've seen is to NOT tell the developer the key piece of information the Product manager has about the regression, thinking it will 'test' or 'challenge' the dev to find the bug. This, too, is disrespectful.)

What's not OK, and downright disrespectful, is: "This worked before developer X did Y." That's accusatory.

Let me tell you something that the developers you work with really want to scream at you: as a nontechnical person, you understand a fraction of what's actually going on under the hood. You already know this. I don't need to tell you, and the developers don't need to either.

Leaning on “it used to work” is an accusatory sign of an amateur manager who just doesn’t get it. If you find yourself doing this, put yourself in check and ask if this career is right for you.


Alright, end rant, as they say. Being a great Product manager, like all things in life, takes compassion. As a boss and Agile mentor of mine (Rob Rubin) once smartly told me: the Product owner is the most leveraged individual on a Scrum team. That is, if you're in a company and not on the Product team (say, you're the stakeholder), then to get what you want out of the Engineers, you should make friends with the Product people. (Thanks, Rob!)

You have a great road ahead of you should you heed the discipline's core principles. Fight them, and you may have a rocky time, especially in the areas of code debt, an incorrect estimation process, missed deliverables, and mismatches between what the developers are doing and what the client or company needs.

 Posted by at 9:49 am
Mar 07 2018

I am pleased to announce Version 1.3 of my gem nondestructive_migrations. With this update, nondestructive_migrations is ready for Rails 5.1.

The gem's GitHub page can be found here, and you'll find a sample app for Rails 5.1 here.

Version 1.3 is officially pushed to Rubygems as of this morning.

I know I'm slightly behind the Rails release schedule itself. (Rails 5.2 was officially released just last month.) Thanks to some help from Miklós Fazekas from Hungary, this gem should hopefully be ready for Rails 5.2 soon. Watch this space for updates!

Feb 10 2018

1. Create an ErrorsController in app/controllers

class ErrorsController < ApplicationController
 def not_found
  respond_to do |format|
   format.html { render template: "errors/not_found",
              layout: "layouts/application",
              status: 404 }
  end
 end

 def server_error
  respond_to do |format|
   format.html { render template: "errors/server_error",
              layout: "layouts/application",
              status: 500 }
  end
 end
end

2. Add these to your routes.rb file

match "/404", :to => "errors#not_found", :via => :all
match "/500", :to => "errors#server_error", :via => :all

3. Add this to your application.rb file

config.exceptions_app = self.routes
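That one line does real work: config.exceptions_app accepts any Rack app, and when an exception bubbles up, Rails rewrites the request path to "/404" or "/500" and re-dispatches it into that app. Here is a self-contained sketch of the mechanics, with a bare lambda standing in for your routes (illustration only, not the real router):

```ruby
# Stand-in for config.exceptions_app: a Rack app that receives the
# re-dispatched request. Rails sets PATH_INFO to the error code's path
# before calling it, which is why the routes above match "/404" and "/500".
exceptions_app = lambda do |env|
  case env["PATH_INFO"]
  when "/404"
    [404, { "Content-Type" => "text/html" }, ["rendered errors/not_found"]]
  else
    [500, { "Content-Type" => "text/html" }, ["rendered errors/server_error"]]
  end
end

status, _headers, body = exceptions_app.call("PATH_INFO" => "/404")
puts status     # => 404
puts body.first # => rendered errors/not_found
```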

4. Delete public/404.html, public/422.html, and public/500.html

5. Remember while developing you should change this to false in config/environments/development.rb

config.consider_all_requests_local = false

If you fail to perform this step, Rails will show you full stacktraces instead of your error page.

Sample app can be found here

Sep 03 2017

Today I’ll take a moment to expound on how web development has changed over the last two decades. Long ago, when we started back in the 90s, connections were slow and web pages didn’t change much.

Built into the design of the internet itself is something you should be familiar with if you are reading this post: browser caching.

Jul 17 2017

Sometimes in the life of a hybrid Rails-Javascript app you may want to do something unique: have a config file written in YAML available to you in your Javascript code.

A simple trick will make Sprockets, the Asset Pipeline in Rails 4+, do this automagically for you. This example comes from

I've created an example app; you can read the source or see the live demo here.

First, we’ll need to create a special hook for Sprockets called “depend on config”. Create a file at lib/process_depend_on_config.rb

Sprockets::DirectiveProcessor.class_eval do
 def process_depend_on_config_directive(file)
  path = File.expand_path(file, "#{Rails.root}/config")
  # register the config file as a dependency so the asset
  # recompiles when the file changes
  context.depend_on(path)
 end
end

Now, in your Sprockets-managed JavaScript, use this directive before you include Ruby evaluation inside the JavaScript:


//= depend_on_config 'my_configs_in.yml'

ExampleApp = {};

ExampleApp.MyConfigs = {
 getConfigs: function() {
  var mySettings = <%= YAML.load_file("config/my_configs_in.yml").to_json %>;
  return mySettings;
 }
};

Finally, for demonstration purposes, create a file at config/my_configs_in.yml where you actually have the YAML configuration you want to port from Ruby to JSON, something like:

world: 12345
country: 678
state: 90
city: 11
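Under the hood, the ERB interpolation above is just YAML-to-Hash-to-JSON. A standalone sketch of that conversion in plain Ruby (the heredoc stands in for the config file):

```ruby
require "yaml"
require "json"

# What <%= YAML.load_file(...).to_json %> boils down to:
# parse the YAML into a Ruby Hash, then serialize that Hash as JSON
# so it can be embedded directly into the JavaScript source.
yaml_source = <<~YML
  world: 12345
  country: 678
  state: 90
  city: 11
YML

settings = YAML.safe_load(yaml_source)
puts settings.to_json
# => {"world":12345,"country":678,"state":90,"city":11}
```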

The depend_on_config directive tells Sprockets to invalidate the cache for the resulting JS file when the YAML file changes, which is why it is needed here.

Voila! Your Ruby-based output (here, a YAML config but theoretically could be anything) is now included in each build during the Sprockets compilation phase.

Mar 06 2017

This is to fix build, slug, and caching problems related to asset compilation during slug compilation.

Everyone needs a little spring cleaning, right? No, really. Sometimes when I switch around buildpacks or change up assets, I run into a strange asset cache problem. Here are the five magic commands to purge your slug compile process, from least invasive to most invasive. When you have a nasty asset cache problem, usually #4 is the one you need.

To use the Heroku repo plugin, I think you’ll want to install it following these instructions.
Note that the Heroku plugins are installed LOCALLY on your machine not on your environment. They are simply shell scripts that perform a series of actions on a remote Heroku environment.

Use with caution. After each one, re-push your branch to Heroku.


1. heroku run rake tmp:cache:clear -a appname

(then push your branch to Heroku)


2. heroku run rake assets:clobber -a appname

(Rails 4+; for Rails 3, use heroku run rake assets:clean instead)
(then push your branch to Heroku)


3. heroku repo:gc -a appname

This will run git gc --aggressive against the application's repo.

(then push your branch to Heroku)


4. heroku repo:purge_cache -a appname

This will delete the contents of the build cache stored in the repository. This is done inside a run process on the application.
(then push your branch to Heroku)


5. heroku repo:reset -a appname

This will fully empty the remote repository.
(then push your branch to Heroku)

Feb 20 2017

Site speed is arguably the number one issue facing web developers today.

Whether it's this KISS Metrics blog post or another KISS Metrics blog post, study after study shows that delivering your content fast, fast, fast is a make-or-break factor in today's web economy. That's why it's so important that your images are optimized for the web.

Photoshop and other tools export notoriously large files -- well over 1 MB. This is unacceptable in today’s world, where 33% of mobile users in the US are on 3G connections.

If you're on Rails using Paperclip, I've got a great solution for you to explore today: image_optim. You can automagically compress all your images inside the Rails pipeline, and also the ones you upload with Paperclip. On Heroku, you'll need to use two special buildpacks to make this work. As well, because Heroku uses an ephemeral file system, Paperclip needs to be configured to use an AWS bucket as its storage.

First, refer to my blog post from last year about how to add the ImageMagick buildpack to your Cedar-14 Heroku build.

The instructions above will direct you to add this buildpack first:

heroku buildpacks:add -i 1

Then add another buildpack to your Heroku environment

heroku buildpacks:add -i 2

(You'll note that you are using the index flag to put this buildpack into position 2, because you should already have the ImageMagick buildpack at position 1.)

You should now have 3 buildpacks, which can be checked with heroku buildpacks, like so:

$ heroku buildpacks -a your-heroku-app
=== your-heroku-app Buildpack URLs
3. heroku/ruby

Then add these 4 gems to your Gemfile (for the sake of this post I will assume you already have gem 'paperclip' in your Gemfile):

gem 'paperclip-optimizer'
gem 'image_optim'
gem 'image_optim_rails'
gem 'image_optim_pack'

To get this working on Heroku, you’ll actually need to work through a few more steps: database setup, AWS. For the lazy, check out the example which you can find at the end of the blog post.

Here’s my has_attached_file. In this example, I’m creating only two styles: a thumbnail, and an optimized version.

Notice that I've turned off lossless-only compression; in other words, allow_lossy: true.

With this safeguard on (allow_lossy: false, which is the default), I'm usually only able to get an image down to about 75% of its original size.

A large 909 KB file was only reduced to 730 KB, whereas Optimizilla was able to get it down to a whopping 189 KB.

With the safety guard switched off (allow_lossy: true), I get much better results but worse quality.

1st Example
Here, I define a thumb and an optimized style.

has_attached_file :attachment, {
 styles: {
  :thumb => '125x100>',
  :optimized => '%'
 },
 processors: [:thumbnail, :paperclip_optimizer],
 paperclip_optimizer: {
  nice: 19,
  jpegoptim: { strip: :all, max_quality: 10, allow_lossy: true },
  jpegrecompress: { quality: 1 },
  jpegtran: { progressive: true },
  optipng: { level: 2 },
  pngout: { strategy: 1 }
 },
 convert_options: { :all => '-auto-orient +profile "exif"' },
 s3_headers: { 'Cache-Control' => 'max-age=31536000' }
}

2nd Example
Here, I define a thumb and a large.

Remember, when configured together the whole thing looks like this; see the "Per style setting" section of the paperclip-optimizer docs:

(this is an example that mimics the paperclip-optimizer docs)

has_attached_file :avatar,
 processors: [:thumbnail, :paperclip_optimizer],
 styles: {
  thumb: { geometry: '100x100>' },
  large: {
   geometry: '%',
   paperclip_optimizer: {
    jpegrecompress: { allow_lossy: true, quality: 4 },
    jpegoptim: { allow_lossy: true, strip: :all, max_quality: 75 }
   }
  }
 }

The Magic Sauce

The docs say you should leave allow_lossy at its default, which is false. With that setting, your images come out with no quality loss. In my tests, I've found that this setting should be turned on, overriding the default.

I recommend paying attention to two important settings:
jpegrecompress quality – 0 through 4, with 4 being best quality
jpegoptim max_quality – 0 through 100, with 100 being best quality

In my tests, I’ve found that the following are acceptable for production websites with high-quality images.

Option A
jpegrecompress quality: 4; jpegoptim max_quality: 80
This yields compressed images 20-40% the size of the uncompressed JPEGs.

Option B
jpegrecompress quality: 3; jpegoptim max_quality: 60
This yields compressed images 10-20% the size of the uncompressed JPEGs.

As far as I can tell, the jpegoptim max_quality setting appears to have very little effect on the file size, whereas the jpegrecompress quality setting has the most dramatic effect, especially on larger files. The values for jpegrecompress quality are 0-4, with 0 being the lowest quality (most savings) and 4 being the best quality. With a setting of 4, you can't perceive any quality loss, but you don't get the benefit of extremely optimized files. I recommend a setting of 3, which is barely noticeable in terms of quality loss but a significant savings in file size.

Test App

I threw together a test demo here. It lets you upload your own JPGs and see how they compress. It's important to examine your own files, weighing the quality loss against the file size gain (that is, the speed gain from smaller files).

You can read the source of this demo app on Github.

Please note this Heroku (production) app is configured with a few extra goodies:

AWS setup for a basic Amazon S3 bucket
Postgres setup for Heroku

This app is configured to use an Amazon S3 bucket called jasonfb-example1. Because I pay for this bucket, please do not abuse. This demo app is provided for developer testing purposes only; I reserve the right to delete any images uploaded for any reason, including copyright infringement or simply lack-of-space. Please do not upload any inappropriate photos or photos you do not own.

You can hit the “Destroy” button on any image you upload.

The jpegoptim max_quality and the jpegrecompress quality settings

You'll notice my example app creates 5 different versions, using the same jpegoptim setting (jpegoptim: { allow_lossy: true, strip: :all, max_quality: 75 }) but 5 different quality settings for jpegrecompress. (Be sure to note that jpegrecompress takes a quality parameter of 0-4; the jpegoptim setting takes a max_quality of 0-100.)

In my example app, I've split the settings for jpegrecompress and jpegoptim into a global setting and a per-style setting, so its setup differs from the examples above.

In my sample app, I’ve set the jpegoptim max_quality setting to 75 and created five different jpegrecompress settings: 0, 1, 2, 3, and 4, named:


(you’ll see these in the has_attached_file in app/models/asset.rb)

So go ahead, upload a color-rich, un-optimized image. In my experiments, I found that quality settings 4, 3, 2, and 1 yield approximately the same file size, with only a small dip in file size when you go down to 0.

However, noticeable loss in quality begins to happen even at quality setting 3, so it seems to me: why not use quality setting 4? You will be baking in an automatic guard against very large un-optimized images coming into your app. You'll need to play around with these two settings.

Important Addendum (2017-03-09)

I am adding an important addendum to this post. After switching around my buildpacks on Heroku, I ran into a strange Sprockets error:

undefined method `dependency_digest' for #<Sprockets::StaticAsset:0x007fefb93d0d28>

The only way I found to fix this was to purge my assets in slug compilation. This means your first push after purging will take an extra-long time to compile the slug.

If you run into that error, do this before you push to your environment:

heroku repo:purge_cache -a appname

Also see this Stack Overflow post. I corresponded with the maintainer of Sprockets regarding this issue, and he suggested later versions of Sprockets may have addressed it (we are on Rails 4.1 with Sprockets 2.12.4).

Feb 15 2017

In your Google Trusted Store set-up, there is a step where you need to use a special link to validate your GTS badge.

You'll find this special link in the popup behind the blue "Test" button where you see your store listed. There you'll see a panel called "Browsers to test" and an instruction to "Copy and paste this URL into your browser window."

Once you do this, you’ll see a beige bar like so:

This lets you preview your GTS integration so Google can certify your website in their program. Annoyingly, this bar does not appear to go away by itself, nor can I find a way to disable it in GTS.

To remove it, you must remove the cookies in your browser associated with


In Chrome, go to Advanced > Content Settings > All Cookies and Data and search for the specific domains above.

Then delete those cookies completely from your browser.

Feb 12 2017

My colleague Reid Cooper and I discovered a nice little trick with controller concerns, something we sometimes call "behaviors" in our app (typically implemented as modules). We found a trick at this link that lets us mix behavior into both a controller and a view helper, but first, a brief introduction to controller concerns.

Concerns were born in Rails 4 as a nod to the limitations, vis-à-vis the domain model, of a "strict" interpretation of MVC as implemented by Rails. Around 2012 or 2013, most experienced Rails developers would explain that the MVC structure created by default doesn't necessarily dictate a strict MVC paradigm. Thanks in part to DCI architecture -- which complements but does not replace MVC -- a more modern understanding of larger apps includes a domain layer, i.e., a place for the business domain logic that doesn't fit in the traditional Rails models.

There are various options, and in a small nod to the problem, the Rails core team added a blank, empty folder to default Rails installs. You might notice this folder at app/controllers/concerns. What, the Rails newbie asks, am I to do with a blank, empty folder?

Good question. You would do well to study the excellent work of Sandi Metz and James Coplien, who cover domain abstraction (and a specific pattern the latter calls "DCI," or data-context-interaction) in two excellent books (POODR and Lean Architecture, respectively). The scope of these is well beyond this blog post, but since they are such heroes of mine, I want to take the opportunity to plug these excellent books.

Reid and I wanted a behavior, a la "concern," that we could mix into a controller to give it instance methods. We also wanted a view helper automagically mixed into our views for the view to access while rendering. To the rescue: the obscure included hook, which gets called when a module is included into a controller, and in which you can access the controller class itself and add both helpers and actions (with callbacks, formerly known as filters).
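Before the Rails-specific code, here is the included hook in plain, standalone Ruby, with invented module and class names; in the real concern, base.helper and base.before_action play the role that base.extend plays here:

```ruby
# When a module is included, Ruby calls self.included with the including
# class, giving you a chance to wire extra behavior onto that class.
module Trackable
  def self.included(base)
    base.extend(ClassMethods) # analogous to base.helper / base.before_action
  end

  module ClassMethods
    def tracked?
      true
    end
  end

  def track
    "tracked!"
  end
end

class Widget
  include Trackable
end

puts Widget.tracked?   # => true
puts Widget.new.track  # => tracked!
```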


class AbcController < ApplicationController
 include FancyConcern

 def index
 end
end

And here's the magic, in app/controllers/concerns/fancy_concern.rb:

module FancyConcern
 def self.included(base)
  base.helper FancyConcernViewHelper
  base.before_action :set_my_instance_variable
 end

 def set_my_instance_variable
  @my_instance_variable = "_instance variable value_"
 end
end
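One piece not shown here is FancyConcernViewHelper itself. A minimal stand-in might look like this (the method body is invented for illustration):

```ruby
# app/helpers/fancy_concern_view_helper.rb
# Hypothetical helper module -- Rails mixes it into the rendering view via
# base.helper above, so any method defined here becomes callable from the
# templates.
module FancyConcernViewHelper
  def my_view_helper_method
    "_value from my_view_helper_method_"
  end
end
```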


Hello world!

<legend>An instance variable set in a before_filter</legend>
<%= @my_instance_variable %>

<legend>A call to the view helper</legend>
<%= my_view_helper_method %>

Check out the full test app.

I don't have a demo up and running, but it works (I took a screenshot below). If you want, you can pull it locally and run it yourself to see.
