At JustTalkTo, we realized early on that the idea is not deep tech. There isn’t really a good justification for machine learning, deep learning, a kernel driver, or low-level optimizations. What we do have is a kick-ass product and the tech to back it up.
One of the more interesting pieces of tech that we built early on is our text mechanism, coupled with the feature flags mechanism. I’ll cover both of them in this short article, as I think they can be very helpful to other people as well.
Feature flags are already a pretty familiar design pattern. The idea is that when you create new functionality, instead of throwing away the old behavior, you selectively enable or disable the new functionality based on a boolean flag that sits somewhere in your database.
This last bit is important – it must be in a place that is easily modifiable by someone outside the dev team, so just creating a constant in your header file will not cut it.
Another important decision is the scope of the feature flag: is it global to all of your users, or user-specific? Most of the time, user-specific is the right choice.
Most new features that change the behavior of your product should be gated behind a feature flag. That doesn't mean everything needs one – bug-fixes don't, and neither do all features. Essentially, if you think some users might want the old behavior, keep it around.
Once you have this in place, your approach to product development changes. It becomes much easier to say yes to user requests that would normally impact all users, because adding a feature now carries little risk to everyone else: you can enable it selectively. You can enable it only for users who pay for it, or based on the results of an A/B test. And the people making that call don't have to be developers; it's usually a combination of sales, marketing, and product, optimizing for business value for your customers.
Some products require translation (AKA internationalization and localization, or i18n and l10n). Usually this is done with static lookup tables. Early on, I decided that we need something similar, but more powerful. In JustTalkTo, every string in our product comes from a dynamic lookup table that is manageable by non-developers. This table has the following fields:
- language code, e.g. he_IL
- type, e.g. prompt, input, web-text
- name, e.g. "didnt_understand"
- text, e.g. "I didn't understand your message"
- description, e.g. "The response for sending a message that could not be parsed by our system"
The table is initialized from code using a bit of declarative Python based on a class I wrote:
class Prompts(Texts[Text]):
    didnt_understand = Text(
        "I didn't understand that",
        "The response for sending a message that could not be parsed by our system",
    )
After being initialized, the text value for a given language code, type and name is taken from the DB. This makes our product incredibly malleable, and with some clever editing it can be changed to fit many use cases. It took us just a few days to realize that the language code is not just about human languages such as English, Hebrew or Spanish, but rather, the languages our users need – so language codes now refer to particular use cases.
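For illustration, here is one way such a `Texts` base class might look. The in-memory `_db` dict stands in for the real database table, and the lookup key convention (language code, type derived from the class name, name) is my assumption based on the fields described above:

```python
from dataclasses import dataclass
from typing import Dict, Generic, Tuple, TypeVar

@dataclass
class Text:
    default: str
    description: str

T = TypeVar("T", bound=Text)

class Texts(Generic[T]):
    # Stand-in for the DB table: (language_code, type, name) -> text
    _db: Dict[Tuple[str, str, str], str] = {}

    @classmethod
    def get(cls, name: str, language_code: str = "en_US") -> str:
        # Per-language override from the "DB" wins; otherwise
        # fall back to the default declared in code.
        key = (language_code, cls.__name__.lower(), name)
        if key in cls._db:
            return cls._db[key]
        return getattr(cls, name).default

class Prompts(Texts[Text]):
    didnt_understand = Text(
        "I didn't understand that",
        "The response for sending a message that could not be parsed by our system",
    )

# A non-developer edits the table; the product picks it up.
Texts._db[("he_IL", "prompts", "didnt_understand")] = "לא הבנתי את ההודעה"
print(Prompts.get("didnt_understand"))           # the default English text
print(Prompts.get("didnt_understand", "he_IL"))  # the Hebrew override
```

The `description` field never reaches end users; it exists so that whoever edits the table understands where each string appears.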
In addition, the text displayed can also be a template, so that relevant values (e.g. “user_name”) can be interpolated inside, and in some rare cases, even more complex logic.
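A template like that can be rendered with Python's standard `string.Template`; the text below and the `user_name` variable are hypothetical examples, not strings from our actual table:

```python
from string import Template

# A templated text value: 'user_name' is interpolated at render time.
greeting = Template("Hi $user_name, I didn't understand your message")
print(greeting.safe_substitute(user_name="Dana"))
# Hi Dana, I didn't understand your message
```

`safe_substitute` leaves unknown placeholders in place instead of raising, which is forgiving when the table is edited by non-developers.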
Putting it all together
Right now at JustTalkTo, we consider ourselves to still be in the discovery phase – we are still exploring use cases and trying to find our product-market-fit. One approach we had early on was interviewing many users. Our CEO Shira did just that – she interviewed hundreds of them!
Another decision we made was to build what our users asked us to. If a user asked us to build some relevant feature – we’d do our best to build it.
BUT – we prioritized aggressively, and each new feature was built using our customizable texts and controlled by a feature flag. This meant that early on, when a new user joined, Shira would interview them and tailor the product to their needs with a combination of language and feature flags. Some users even got their own language code!
Now, as our offering starts to mature, we no longer interview every user, only some of them. We identify broader use-cases, configure a set of feature flags and languages to fit each one, and automatically sign new users up to a ready-made package. This approach let us discover these use-cases quickly, roll them out for new users to try, and collect feedback and statistics on their usability, attractiveness, and relevance.
If you think this might be relevant to you as well, do let me know in the comments – I'll be happy to discuss practical considerations and answer questions about our implementation. In a coming article I'll write a bit about automated testing and how it fits with this customizable approach.