
4 steps to practical AI implementation

Panintelligence
Publish date: 3rd April 2024

Fueled by the need to stay competitive, there seems to be a rush to implement AI. But this hype often prioritizes speed over substance, leading to solutions that don't actually add value.

For example, if you’re looking for enhanced Business Intelligence, you need to combine AI with human intuition. Together, they give you the clearest understanding of what’s really happening, so you reach the best possible outcome rather than a purely statistical one.

In a recent webinar with industry experts, we uncovered four key steps to successfully leveraging AI—which we'll dive into in this blog post.


AI implementation: why now?

Whether it’s chatbots, image detection, or predictive modeling like Causal AI—the media is rife with stories about AI application. AI has moved beyond the "Peak of Inflated Expectations" in the Gartner Hype Cycle and is now reshaping operations, decision-making, and customer interactions across industries.

[Figure: Gartner Hype Cycle. Source: Gartner.com]

But this pressure to adopt is causing issues.

"We're seeing a push for being first. So there's a lot of top-down initiatives being driven by boards, shareholders, and executive teams to understand the application of AI in their organizations and what those practical applications might be that could drive value."
Zandra Moore, Panintelligence

Zandra goes on to say: "However, these projects are driving limited value. They're not necessarily creating great experiences internally or externally, and that's not creating confidence or trust in AI. So there's a real friction between being first and doing something quickly, versus driving value”.

So, what should businesses do to operationalize AI?

4 steps to AI implementation

Step 1: Define business goals and objectives

Like most things, at the top of the list is defining clear business goals and objectives. What do you want AI to help you achieve? This is important because organizations can get dazzled by AI's capabilities and adopt it without a thoughtful vision for what they want to achieve. This approach often leads to AI projects that are technically impressive but fail to provide real business value.

So think about your use cases, the objectives, who would be using it, and how they need to use it.

"You've got to start by trying to define this clear objective, finding out from the people who know what actually matters… People at the front line that are going to be applying the model."
Ken Miller, Panintelligence

Otherwise, it's just AI for AI's sake rather than a means to achieve meaningful outcomes. Anchoring AI to strategic business priorities from the start is vital to operationalizing it in an impactful way.

But the objectives can’t be agreed upon by the front line alone.

"In order to get really effective implementation in this area, we need board-level engagement and buy-in. And I think that comes back to a few things. It’s really important to get a baseline of technical understanding of how this technology could be used within their organization so they get a genuine sense of what the benefits could be."
Katherine Holden, techUK

Katherine goes on to say: "If you don't have that kind of level of awareness and buy-in at the board level, you're losing strategic competitive advantage in the market. And it also really helps the company to be able to develop their risk appetites”.

Step 2: Identifying AI-ready use cases

Popular AI use cases include predictive maintenance, fraud detection, personalized marketing, and customer service chatbots. Then, once you’ve determined your need, the most important factor is assessing risk.

Why? Because the risk implications of AI significantly impact the potential use cases.

For instance, using AI for medical diagnoses is obviously much higher risk than using it for workplace task automation. That’s why organizations need to carefully consider the potential impact and public trust implications when deploying high-risk AI applications.

It’s fair to say trust is fragile when it comes to AI. Despite its potential to overcome human bias, the idea of it replacing humans is daunting. Plus, there’s the “black box” issue. If the technology seems unreliable, trust erodes quickly. In fact, according to Forbes Advisor, 42% of Brits are concerned about dependence on AI and the loss of human skills.

"One thing we definitely know when it comes to trust is it's incredibly hard to build, but incredibly easy to lose... And therefore, I think it goes back to the importance of use cases for some of those examples where we feel like there may be a higher risk or that we don't have the data quality that sometimes would be helpful to be able to determine some of those decisions."
Katherine Holden, techUK

Ultimately, the user experience of AI solutions also needs to fit the specific context.

Overall, organizations must account for the risk level, build in human oversight for high-stakes AI, and tailor the user experience appropriately. Carefully considering these factors allows you to leverage AI's capabilities, all while maintaining ethics, reliability, and public trust.

Along with priority and risk, you’ll want to consider these key points when assessing AI use cases:

  • Technological infrastructure: Does the current AI technology effectively tackle the specific challenge at hand?
  • ROI potential: What is the expected return on investment? Prioritize the projects that promise the greatest benefit.
  • Ethical considerations: Make sure the use case aligns with established ethical principles and does not compromise customer trust or go against privacy standards.
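To make those trade-offs concrete, here’s a minimal sketch of how a weighted scoring matrix over such criteria might look. The criteria, weights, and scores below are purely illustrative assumptions, not a prescribed framework:

```python
# Hypothetical weighted scoring of candidate AI use cases.
# Criteria, weights, and scores are illustrative only.
CRITERIA = {"tech_fit": 0.3, "roi": 0.4, "ethics_safety": 0.3}  # weights sum to 1

# Scores run 1-5; for ethics_safety, higher means lower risk.
use_cases = {
    "predictive maintenance": {"tech_fit": 4, "roi": 5, "ethics_safety": 4},
    "medical diagnosis":      {"tech_fit": 3, "roi": 4, "ethics_safety": 1},
}

def score(case):
    """Weighted sum of a use case's criterion scores."""
    return sum(case[c] * w for c, w in CRITERIA.items())

ranked = sorted(use_cases, key=lambda k: score(use_cases[k]), reverse=True)
print(ranked[0])  # the lower-risk, higher-ROI case ranks first here
```

Even a rough matrix like this forces the conversation about which criteria matter and how much risk the board is willing to accept.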

And data availability…

Step 3: Consider data

Once you’ve decided on your use case, your biggest hurdle then is making sure the data is ready to power the AI solutions effectively.

Data quality is crucial. It must be accurate, complete, consistent, and unbiased.

Some key questions to consider include:

  • Is our data in proper shape to drive the value we want from AI? For example, is it accurately labeled, structured, and comprehensive enough to effectively train AI models?
  • How can we verify the data's quality and lack of bias? For instance, is there a fair distribution of data? This is key because inherent biases could skew the AI’s outputs.
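As a purely illustrative sketch, the first pass at those checks might look something like this; the records and field names are hypothetical placeholders for your own data pipeline:

```python
from collections import Counter

# Hypothetical training records; in practice these come from your pipeline.
rows = [
    {"age": 34,   "region": "north", "label": "churn"},
    {"age": None, "region": "south", "label": "stay"},
    {"age": 51,   "region": "north", "label": "stay"},
    {"age": 29,   "region": "south", "label": "stay"},
]

def completeness(rows, field):
    """Share of rows where `field` is present and non-null."""
    present = sum(1 for r in rows if r.get(field) is not None)
    return present / len(rows)

def label_balance(rows, field="label"):
    """Label distribution -- a heavily skewed split can hint at bias."""
    counts = Counter(r[field] for r in rows)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

print(completeness(rows, "age"))  # 0.75 -- one missing age
print(label_balance(rows))        # {'churn': 0.25, 'stay': 0.75}
```

Checks like these won’t catch every form of bias, but they surface the obvious gaps before any model is trained.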

Fundamentally, if the data isn’t good enough, the outputs of AI won’t be valuable—and could even be damaging, as they create false insights. And that’s where proof of concept comes in.

Step 4: Run Proof of Concept

Let’s be frank: the first models that anyone creates might not be very predictive. In fact, you might even say they're going to be rubbish. But they'll tell you something. And it's the feedback loop and the speed of that feedback loop that becomes so important. Because that's how we create much, much better models.

You just need to start somewhere. The issue is, sometimes businesses feel like they need to solve the entire problem from the get-go.

Panintelligence’s Ken Miller says it best: “My experience has taught me this over the years: You don't know which characteristics you should be collecting and which ones you should pull down on to build a more effective model, until you get started”.

You need to run pilot processes to get a feel for what’s needed.
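That “start rough, iterate fast” feedback loop can be sketched in a few lines. Everything below (the data, the baseline, the metric) is an illustrative assumption, but it shows the point: even a rubbish first model gives you a yardstick to beat.

```python
from collections import Counter

def majority_baseline(train_labels):
    """The crudest possible model: always predict the most common label."""
    return Counter(train_labels).most_common(1)[0][0]

def accuracy(predictions, actuals):
    """Fraction of predictions that match the actual outcomes."""
    hits = sum(p == a for p, a in zip(predictions, actuals))
    return hits / len(actuals)

train_labels = ["stay", "stay", "churn", "stay"]  # hypothetical pilot data
test_labels  = ["stay", "churn", "stay"]

guess = majority_baseline(train_labels)
baseline_score = accuracy([guess] * len(test_labels), test_labels)
print(f"baseline accuracy: {baseline_score:.2f}")

# The loop: measure -> inspect the misses -> add or drop features ->
# retrain -> measure again. Each iteration only has to beat the last score.
```

The speed of that loop, not the quality of the first model, is what determines how quickly the pilot starts producing value.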

Simple proof of concept example: Amazon

When Amazon was much younger, you could place an order and, within an hour, someone would turn up on your doorstep with your shopping.

This was powered by someone sitting in a warehouse with a laptop and a queue of people in bike gear behind them. They would print out the order, the person on the bike would wheel over, someone would collect the items and put them in the backpack, and off they’d go.

This not only showed Amazon whether the service was something people wanted to use and whether it was effective, but it also helped them understand how to build the engine and the elements needed to scale it: key elements they probably wouldn’t have known they needed before starting the venture.

Proofs of concept are key to driving ROI because, let’s be honest, failed implementations carry real costs:

"What I'm seeing as a bit of a problem is people are spending quite a large chunk of money on quite a long-term project when actually I think what we really need to see is a bit of a return on investment or what that value add is in a shorter cycle, in a quicker time frame and iterating in the open and tweaking that as we go along… Rather than just waiting six months and then, it's not the result we were hoping for."
Katherine Holden, techUK

When drawing up your proof of concept, you can ask users more direct questions, like:

  • What's their experience like?
  • Do they understand it?
  • Are they trusting it? If not, why not?
  • What additional information do we need?

As Zandra says, this helps shape the roadmap for implementation: “So that observation of ‘what people do with things in practice’ is super important to driving the value of it”.

Now, we’ve covered the key steps, but there’s something else to address: regulation.

Preparing for AI regulations

Given AI’s popularity and the worries around reliability and ethics, regulators across many industries are questioning how it should be used.

So, how will AI change with regulation? How will AI be regulated? And what should we keep an eye on?

Katherine says it best: "It’s very much going to be down to individual regulators to regulate this technology, which I guess is a good thing… But I think over time, it's worth recognizing that for the slightly higher risk AI, we'll see greater scrutiny, so greater levels of auditing, and also greater levels of regulation as well.

Hopefully, though, with the approach that the government's taken, what I would probably classify as quite mundane AI technology operating at a very low risk level, I’d think you should be able to continue unobstructed, which is a brilliant thing. So it's an area to definitely keep an eye on but we'll probably see movement on this sort of at the latter end of this year”.

While regulating AI sounds a little vague at present, there is one key thing to know: The steps in this blog help prepare organizations for regulation. And here’s why:

"If we're piloting things in a kind of sandbox environment and getting the stakeholder groups around the table, I think we're in a much better position to be able to prove that we've gone through a rigorous and effective process of assessing the value and risks around AI projects—ready for when regulation comes."
Zandra Moore, Panintelligence

Now, we’ve covered quite a bit of ground in this blog, so let’s round up the key points.

Key takeaways: How to operationalize AI

Successfully leveraging AI starts with identifying the challenge you want to solve so that you then choose the right type of AI. From there, it’s about ensuring you’ve got quality data to power it.

Human oversight is critical to keep AI models transparent, ethical, and aligned to objectives. The "human-in-the-loop" approach ensures AI outputs can be explained, understood, and corrected if needed. This allows you to scrutinize AI to maintain integrity and value.

As AI use cases evolve, maintaining human involvement will guide adoption strategies and meet regulatory/societal expectations around AI. The goal is to enrich processes through explainable, auditable, and inclusive AI that serves society's interests and supports businesses in achieving their potential.

Keen to get started?
If you’re looking to run a quick pilot project around causal AI predictive analytics, get in touch. In just days, we can get you up and running with a pilot project so you can really start to prove value and test your use cases.
Contact us