Murmel had its first birthday a couple of months ago. It is the perfect time to gather our learnings and move closer to our ultimate mission: reducing information overload and helping people see the bigger picture.
Our team has long-standing experience with natural language processing and text and media analysis. Some of us have worked in the field since the early 2000s - before the current wave of AI research and development. Naturally, we were looking for ways to put that knowledge to good use.
While building Murmel, we put a touch of that expertise into a feature our users know and love - the worth-reading badge.
The heart of this feature is a machine-learning model that we trained with the help of a group of avid readers, including ourselves. Every link processed by Murmel receives a score that gives the reader a glimpse of whether the content behind it deserves more time and attention. Note that a high score does not mean that the text leans heavily in one direction or another. We tried to keep the training as unbiased as possible. Thus, it is not uncommon to see articles marked as worth reading that few people would agree with but that are nonetheless worth thinking about.
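To give a rough intuition for how such a score might work - this is an illustrative sketch, not Murmel's actual model, and the word weights below are made up by hand rather than learned from data - one can think of it as a weighted bag-of-words sum squashed into a 0-to-1 range:

```python
import math

# Hypothetical, hand-picked word weights standing in for a trained model.
# In a real system these weights would be learned from labeled examples.
WEIGHTS = {
    "analysis": 1.2,
    "research": 0.9,
    "in-depth": 1.5,
    "clickbait": -2.0,
    "shocking": -1.4,
}
BIAS = -0.3

def worth_reading_score(text: str) -> float:
    """Return a score in (0, 1): a sigmoid over a bag-of-words weighted sum."""
    z = BIAS + sum(WEIGHTS.get(token, 0.0) for token in text.lower().split())
    return 1.0 / (1.0 + math.exp(-z))
```

A product would then show the badge whenever the score clears some threshold, say 0.5.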
From a product feature to a standalone service
The worth-reading badge is a nice feature in and of itself, but we did not want to stop there. We knew from the beginning that the technology behind it would be even more helpful in the hands of the people who need it most - journalists, researchers, marketers, analysts, and anyone else dealing with large amounts of online content.
The real power of AI lies in supporting one's workflow by tailoring the automation to one's exact criteria. Professional knowledge workers need a simple tool that helps them train and fine-tune ML models to classify and label incoming content - a service where the user gets complete control over what drives the model's decision-making. This can be invaluable in scenarios we have probably not even thought of, but here are just a couple:
- Segmentation based on political stance, tone of voice, similarity to a topic, sensitivity, or just about any other criterion of one's choosing
- Fact-checking and fake-news detection
Combine this with the ability to pull live data from millions of sites and to integrate the results in various ways (triggering alerts, performing actions, connecting with third-party services).
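To make the "train your own labeler" idea concrete, here is a minimal sketch in pure Python - a tiny naive-Bayes-style text classifier with made-up example labels. It is an illustration of the concept only, not the service's implementation, which would use far more capable models:

```python
import math
from collections import Counter, defaultdict

class TinyLabeler:
    """A minimal naive-Bayes-style labeler: the user supplies their own
    labeled examples, and the model learns per-label word frequencies."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> example count

    def train(self, text: str, label: str) -> None:
        """Add one user-labeled example to the model."""
        self.label_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text: str) -> str:
        """Return the label with the highest (Laplace-smoothed) log-probability."""
        tokens = text.lower().split()
        total = sum(self.label_counts.values())
        vocab = len(set().union(*self.word_counts.values())) or 1
        best_label, best_lp = None, float("-inf")
        for label, n in self.label_counts.items():
            counts = self.word_counts[label]
            size = sum(counts.values())
            lp = math.log(n / total)
            for token in tokens:
                lp += math.log((counts[token] + 1) / (size + vocab))
            if lp > best_lp:
                best_label, best_lp = label, lp
        return best_label

# Hypothetical usage: segmentation by political stance.
model = TinyLabeler()
model.train("tax cuts free market deregulation", "right")
model.train("public healthcare workers union", "left")
label = model.classify("free market tax policy")
```

In a full pipeline, the predicted label would feed the integrations mentioned above - for instance, firing an alert or a webhook whenever incoming content matches a label the user cares about.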
Today, we are opening the doors to a new service, codenamed GROUPD. The service is so fresh that even the name may change depending on the feedback we get from early users. We want to create an end-to-end solution for analyzing and monitoring incoming information with the help of AI and for making informed decisions based on the output.
Are you interested in helping us out? Join the upcoming beta today.