We recently touched on some potential pitfalls of hardware manufacturing and the magnification of those issues at scale. In this vein, Andrew “bunnie” Huang posted a fascinating piece drawing on his wisdom and experience with the testing phase. Even though it’s not the focal point of the post, one of the most enlightening parts of bunnie’s commentary is the impact a test jig can have on manufacturing efficiency.
In case you don’t know, a test jig is a purpose-built fixture that lets you test many of a board’s physical and software components all at once. Adafruit has a guide about it if you want to learn more or build your own! But suffice it to say these are extremely important for large-scale manufacturing. Bunnie’s post shows just how much goes into the design of these jigs, and the many considerations that come into play.
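To make that concrete, here’s a minimal sketch of the kind of host script a test jig might step through. The individual checks and thresholds are hypothetical stand-ins, not anything from bunnie’s or Adafruit’s actual fixtures; a real jig would be talking to instruments through pogo pins or a debug header instead of returning canned values.

```python
# Hypothetical sketch of a test-jig host script. Each check covers one
# aspect of the board; the bodies are placeholders for real instrument I/O.

def check_power_rail():
    # e.g. read the 3.3 V rail through the jig's ADC and verify tolerance
    voltage = 3.31  # placeholder measurement
    return abs(voltage - 3.3) < 0.05

def check_firmware_version():
    # e.g. query the board over UART/SWD and compare against the expected build
    reported = "1.0.2"  # placeholder response
    return reported == "1.0.2"

def check_leds_and_buttons():
    # e.g. drive each LED and confirm it with a light sensor (or the operator)
    return True  # placeholder result

TESTS = [
    ("power rail", check_power_rail),
    ("firmware version", check_firmware_version),
    ("LEDs and buttons", check_leds_and_buttons),
]

def run_board():
    results = [(name, fn()) for name, fn in TESTS]
    for name, ok in results:
        print(f"  {name}: {'PASS' if ok else 'FAIL'}")
    passed = all(ok for _, ok in results)
    print("BOARD PASS" if passed else "BOARD FAIL")
    return passed

if __name__ == "__main__":
    run_board()
```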
Test jig complexity is correlated with product complexity, which is why I like to say the test jig is the “product behind the product”. In some cases, a product designer may spend even more time designing a test jig than they spend designing the product itself. There’s a very large space of problems to consider when implementing a test jig, ranging from test coverage to operator fatigue, and of course throughput and reliability.
It seems obvious when you read it, but if you’re a first-time product designer and you’re spending all your time on the product itself, you might forget just how much time and effort you’re going to have to spend setting up a proper test jig.
One of the more surprising concerns is the human element, like the operator fatigue mentioned above. Since humans are often the ones making the final call on whether a unit passes, the feedback a test jig gives them matters enormously. Bunnie recounts a production run he worked on that didn’t quite nail this part:
Furthermore, the lighting pattern of units that failed testing bore a resemblance to units that were still running the test, so even when the operator noticed a unit that finished testing, they would often overlook failed units, assuming they were still running the test. As a result, the actual throughput achieved on their first production run was about one unit every 5 minutes — driving up labor costs dramatically.
Once they refactored the UX to include an audible chime that would play when the test was finished, aggregate test cycle time dropped to a bit over a minute…
Thus, while one might think UX is just for users, I’ve found it pays to make wireframes and mock-ups for the tester itself, and to spend some developer cycles to create an operator-friendly test program.
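To illustrate what “operator-friendly” might look like in software, here’s a hedged sketch: an unmistakable pass/fail banner plus an audible cue, so a finished unit can’t be mistaken for one still under test. The terminal bell and the dummy run_board() below are stand-ins for whatever buzzer and real checks an actual jig would use; none of this is taken from bunnie’s code.

```python
import sys
import time

def run_board() -> bool:
    # Placeholder for the per-board checks (power rails, firmware, LEDs, ...).
    time.sleep(1.0)
    return True

def notify_operator(passed: bool) -> None:
    # Audible cue so the operator doesn't have to stare at the screen;
    # "\a" is the terminal bell, standing in for a real buzzer on the jig.
    sys.stdout.write("\a")
    sys.stdout.flush()
    # Unambiguous visual state: a failed unit should never look like one
    # that is still mid-test.
    banner = "PASS - bin as GOOD" if passed else "FAIL - bin as REJECT"
    print("=" * 40)
    print(banner)
    print("=" * 40)

if __name__ == "__main__":
    start = time.monotonic()
    print("Testing... (do not remove unit)")
    result = run_board()
    notify_operator(result)
    print(f"Cycle time: {time.monotonic() - start:.1f} s")
```

The specific cue doesn’t matter much; what matters is that the operator’s next action is obvious without them having to read or interpret anything.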
Bunnie goes so far as to recommend testing the testing process! It sounds a little comical, but if you can cut a 5-minute test down to a 1-minute test across thousands and thousands of units, the labor savings add up fast.
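Some rough back-of-the-envelope math shows why (the 10,000-unit run size is an illustrative assumption, not a figure from the post):

```python
# Savings from cutting test cycle time, using the post's 5-minute and
# 1-minute figures. The run size is an illustrative assumption.
units = 10_000
old_cycle_min, new_cycle_min = 5.0, 1.0
saved_hours = units * (old_cycle_min - new_cycle_min) / 60
print(f"Operator time saved: {saved_hours:.0f} hours")  # ~667 hours
```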
The post has a lot of other good tips and insight into testing. Read the whole thing over at bunnie’s blog.