Over 20 years ago, I first encountered neural networks, the foundation for modern large language models (LLMs) and AI technology.
Back then, I dreamt of algorithms tackling complex math problems. Since then, I’ve worked on systems using outlier detection, image classification, data extraction, and even content generation, all with varying degrees of success. This experience gives me a unique perspective on the current excitement surrounding AI.
What concerns me is how the spectacle created by recent advances in image and content generation hijacks our collective amygdala, capturing our attention and potentially overwhelming our ability to think critically.
Spectacle Is Good
Not all spectacle is created equal. Good spectacle inspires, introduces new ideas, challenges our existing solutions, and can even be entertaining. Used judiciously during events or presentations, it’s a great way to emphasize key points. However, spectacle for its own sake has limited value, except perhaps for companies solely focused on generating buzz.
In my day job as a machine learning principal engineer in cybersecurity, delivering outcomes is paramount; spectacle’s role there is limited to exploring ideas and running experiments.
Focusing on Impactful Outcomes
One of the most valuable perspective shifts I offer my teams is to view AI as a toolbox, not a magic hammer. When building a new product or project, it’s tempting to get caught up in magical thinking that ignores the real challenges.
A major challenge often arises after the proof of concept: proof-of-concept solutions frequently don’t scale without significant engineering effort, more data, or both. Experimentation is key to gathering the data needed to validate assumptions, and iterating on experiments until they reach real-world scale is crucial for collecting enough evidence to evaluate the solution.
AI as a Capability
When machine learning exploded a decade ago, it took a great deal of experimentation and engineering to build the processes and tooling needed to operate at scale. Let’s shift our perspective accordingly: we can view AI (large language models) as another capability with trade-offs, just like a new programming language, database feature, or piece of infrastructure.
Iterating on The Inaccuracy Problem
One of the lessons from the ML boom of a decade ago is how to handle inaccuracy. Consider this: my team could identify a critical piece of data for users with 99% accuracy. That sounds great, but suppose the algorithm runs 100 times per hour while only one evaluation per day is truly significant (the rest being noise). A 1% error rate across 2,400 daily runs produces roughly 24 false notifications per day, about one every hour, swamping the single genuine alert.
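To make that arithmetic concrete, here is a quick back-of-the-envelope calculation in Python. The numbers mirror the scenario above; they are illustrative, not measurements from a real system:

```python
# How a 1% error rate plays out at volume: 100 runs/hour, one real event/day.

runs_per_hour = 100
hours_per_day = 24
error_rate = 0.01          # 99% accuracy -> 1% of evaluations are wrong
true_events_per_day = 1

total_runs = runs_per_hour * hours_per_day        # 2,400 evaluations per day
expected_false_alerts = total_runs * error_rate   # ~24 false alerts per day

print(f"False alerts per day: {expected_false_alerts:.0f}")  # ~24, about 1/hour
print(f"Real alerts per day:  {true_events_per_day}")
# Roughly 24 false notifications for every genuine one: from the user's point
# of view, ~96% of alerts are wrong, despite the "99% accuracy" headline.
```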
Instead of viewing this as a failure, we can frame it as an opportunity for iteration. By running the algorithm’s output through a secondary logic stage, we might be able to eliminate false positives.
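As an illustration only, here is a minimal sketch of what such a secondary stage could look like. The `Candidate` fields, thresholds, and rules are hypothetical stand-ins I’ve invented for this post, not the logic we actually shipped:

```python
# A minimal sketch of the two-stage idea: the detector's hits are re-checked
# by a cheap secondary stage before anyone is notified. All names and
# thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    detector_score: float        # first-stage model confidence, 0..1
    corroborating_evidence: int  # e.g. independent signals that agree

def second_stage_filter(candidate: Candidate) -> bool:
    """Deterministic checks that a true positive should survive.

    Each rule trims false positives at the cost of some recall; the right
    thresholds come from running experiments, as described above.
    """
    if candidate.detector_score < 0.95:
        return False
    if candidate.corroborating_evidence < 2:
        return False
    return True

def notify_users(candidates: list[Candidate]) -> list[Candidate]:
    # Only candidates that pass both stages generate a user notification.
    return [c for c in candidates if second_stage_filter(c)]

if __name__ == "__main__":
    batch = [
        Candidate("a", detector_score=0.99, corroborating_evidence=3),  # real hit
        Candidate("b", detector_score=0.96, corroborating_evidence=0),  # likely noise
    ]
    print([c.item_id for c in notify_users(batch)])  # ['a']
```

Each added stage trades some recall for precision, which is exactly the kind of trade-off the experimentation loop above is meant to measure.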
Thank You
Thanks to Ben Weintraub for asking the question that inspired me to write this post.