My Current Approach to AI

I have been getting asked constantly, as I am sure many of you have, what my current approach to AI, Machine Learning (ML), and Automation is. Here are a few short and sweet thoughts.

My main goals for AI have been:

  1. Collection of wins from our teams (for the purpose of...)
  2. Advocating for efficiency wins and adoption of AI&ML (by way of...)
  3. Education and tackling the stigma and bias around AI&ML (ultimately for the purpose of...)
  4. Efficiency (one of the core mandates of my role): reducing time on repetitive tasks (and...)
  5. Aiding in building a culture of experimentation and innovation (another of my core mandates)

Currently, this overlaps significantly with my Product Operations work.

An example from a single week:

  • Seeking a new tool to help unblock teams so they can do more discovery and research
  • Seeking a way for us to stop repeating the same research over and over again (institutionalizing knowledge)
  • Finding ways to remove manual headaches for our product pods

What does this mean in practice? 10 points you can use yourself

Key points have been bolded for lazy skimmers.

  1. First and foremost, I’m always coming back to problems to solve and goals to hit, rather than starting with a solution in mind. This means that I, and by extension the teams I coach, cannot start with AI, ML, or automation as the primary goal. That’s putting the cart before the horse. This is how we get distracted by shiny objects and forget what’s essential: customer value and experience.
  2. Ensuring people know which tool to use at the right time has been a constant challenge. I am forever conflicted by this. If I could get everyone to avoid generative AI in its current form, I would. So many people keep using it for search and research, not realizing how much it hallucinates, making things up to appease the end-user. Maddening. I cannot stress enough that you must tackle this early by teaching your teams the differences between various AI/ML/Automation tools, which ones to use, when to use them, and when not to use them at all.
  3. Educating on the ethical implications of AI/ML/Automation. If people are going to be using new tools, they should understand the implications of doing so. Early on, I brought in guest speakers, such as my former colleague Rodica Ivan (highly recommended), to speak to our team about ethical AI frameworks. She helped the team explore responsible AI use, risks and concerns, and navigate how to solve them positively. We educate on and navigate these hard topics because I want to ensure the AI that teams build or use is in alignment with human values and the organization’s values. Exploring topics like this is also why I am so against the use of generative AI in any creative aspect, like illustration, creative writing, or other areas of the arts. I try to help teams do their homework, though, so they can come to their own conclusions and use the tools at their disposal reliably and responsibly.
  4. I’m always seeking reliable team-based data and insights (E.g. efficiency, outputs, time sucks). Much of what we seek to improve with AI/ML/Automation relies heavily on what’s input into it, as well as understanding what it is we’re trying to improve. This means constantly seeking more meaningful information to fuel us.
  5. Using automation to remove manual work (E.g. Zapier, N8N). I want our teams focused on meaningful, engaging work that’s going to help push on important outcomes. This means helping remove the slog. Utilizing data and insights (which may just mean talking to people), I find out where the manual time sucks are and work to help remove them. Sometimes that comes by way of AI or automation. Sometimes it doesn’t.
  6. Enhancing day-to-day operations should always be a goal of any organization. We can’t get stuck in the traps of “We’ve always done it this way,” or a fear of change. We must always challenge the status quo and push for better. Again, though, this must be viewed through a meaningful lens, reminding ourselves of what the problem we want to solve is or the goal we want to achieve, rather than focusing on the shiny, distracting solution we already want to use.
  7. Many teams are exploring various tools and solutions independently, and that’s okay. We need to seek meaningful overlap when teams are using a particular tool or approach, or creating a useful byproduct with one of those tools. This primarily translates to interdepartmental and cross-functional transparency and communication. An ops person is constantly seeking to smash silos, and this is no different. We must ensure that teams learn from one another and capitalize on each other’s successes, so we don’t constantly reinvent the wheel.
  8. Building reliable institutional knowledge (E.g. fixing outdated or inaccurate Confluence, Notion, etc.) has been increasingly important. Teams constantly input items from our internal knowledge bases like Confluence into AI & ML tools, seeking quick answers or synthesis. This can be a major issue when what they’re inputting is no longer accurate or never was. We’re slowly building up to cleaning up our internal knowledge bases, as well as putting in controls to aid employees making use of AI & ML tools. How do they know what’s okay to add as an input? Which tools are secure? Can we help the tools understand which information may be more trustworthy? How do we safely archive knowledge without losing it? These are all things I am helping the teams explore.
  9. The previous point leads to the need to build new data, insight, and knowledge paths versus cleaning the existing ones. This is where the wins from successful AI/ML/Automation can start to compound. If you know you have trustworthy inputs and you’re starting to achieve good outcomes, what if they talk to one another? What other data or insights could we add into the mix? Have we been collecting something we didn’t even realize was useful (a sad reality of many tech ecosystems)? We must explore all the data at our disposal, enhance its quality and reliability, and determine its usefulness and lifecycle.
  10. Better leveraging the tools we already have that streamline data and insight collection (E.g. Clari, Marvin, Stravito), as well as collecting and building smart repos from multiple sources, starts to have compounding effects. We don’t need every team using AI/ML/Automation tools, but we should make the benefits of those tools available to those who could use them. Tools like Stravito and Marvin automatically synthesize information from multiple sources so that other teams can make use of it.
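As a concrete illustration of point 5, here is a minimal sketch of the kind of manual triage step a Zapier- or N8N-style automation can replace: routing incoming requests to the right queue by keyword. The routing rules, queue names, and request subjects are entirely hypothetical, not our actual workflow.

```python
# Hypothetical keyword-based routing rules; a real workflow would live
# in an automation tool like Zapier or N8N rather than a script.
ROUTING_RULES = {
    "billing": ["invoice", "refund", "charge"],
    "onboarding": ["signup", "activation", "welcome"],
}

def route_request(subject: str) -> str:
    """Return the queue a request belongs to, defaulting to 'general'."""
    lowered = subject.lower()
    for queue, keywords in ROUTING_RULES.items():
        if any(word in lowered for word in keywords):
            return queue
    return "general"

# Illustrative inputs only: each subject is routed without a human
# reading and forwarding it by hand.
requests = [
    "Refund for duplicate charge",
    "Trouble with signup email",
    "Feature question",
]
routed = {subject: route_request(subject) for subject in requests}
```

The point isn’t the code itself; it’s that once the manual rule is written down, a tool can run it so the team doesn’t have to.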
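For point 8, one simple control is a staleness check: flag pages too old to trust as AI inputs. This sketch uses hypothetical page records and an arbitrary one-year threshold; a real audit would pull page metadata from the Confluence or Notion API instead.

```python
from datetime import date, timedelta

# Hypothetical page records standing in for a knowledge-base export.
pages = [
    {"title": "Pricing FAQ", "last_updated": date(2021, 3, 1)},
    {"title": "Release process", "last_updated": date(2024, 11, 5)},
]

def flag_stale(pages, today, max_age_days=365):
    """Return titles of pages that should be reviewed before being
    fed into an AI/ML tool, based on last-updated age alone."""
    cutoff = today - timedelta(days=max_age_days)
    return [p["title"] for p in pages if p["last_updated"] < cutoff]

stale = flag_stale(pages, today=date(2025, 1, 1))
```

Age is only a proxy for trustworthiness, but even a crude flag like this gives employees a cheap answer to “is this okay to use as an input?”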
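And for point 10, a rough sketch of pooling insights from multiple tools into one repo while preserving provenance, so other teams can search it without touching the source tools. The source names, tags, and record shapes below are illustrative only, not real exports from Clari or Marvin.

```python
# Hypothetical per-tool insight records keyed by source tool.
sources = {
    "clari": [{"tag": "pipeline", "note": "Q3 slippage in EMEA"}],
    "marvin": [{"tag": "research", "note": "Users confused by billing page"}],
}

def build_repo(sources):
    """Flatten per-tool records into one list, stamping each item
    with its source so provenance survives the merge."""
    repo = []
    for source_name, items in sources.items():
        for item in items:
            repo.append({**item, "source": source_name})
    return repo

def search(repo, tag):
    """Return every pooled insight carrying the given tag."""
    return [item for item in repo if item["tag"] == tag]

repo = build_repo(sources)
```

Keeping the source on each record is the design choice that matters: it lets a team judge how much to trust an insight without chasing down where it came from.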