Fisent adds confidence rating capability to boost generative AI process automation

Michael Hill | 08/14/2025

Fisent Technologies has added a confidence rating capability to the Fisent BizAI solution to advance generative artificial intelligence (AI) process automation. The new feature provides insights to businesses using generative AI to automate repetitive decision-making and review processes.

Unlike typical generative AI confidence metrics that focus on internal model certainty, Fisent’s approach leverages multiple techniques to assess confidence, prioritized for efficacy depending on the customer use case, according to the announcement.

It comes as global organizations continue to invest in and adopt generative AI tools to optimize processes. A recent forecast by Gartner predicted that worldwide generative AI spend would reach US$644 billion in 2025, marking a 76.4 percent increase from 2024.

Multi-model confidence ratings for generative AI process automation

The primary approach behind Fisent’s confidence rating capability uses multi-model similarity to determine a confidence rating for process outcomes, a press release stated.

If multiple large language models (LLMs) agree with the base model’s outcome, confidence is high. If there’s a lack of consensus among the models, the confidence rating is low, signaling that organizations should take additional measures, such as human review, to ensure decisions are accurate and complete.
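The consensus idea described above can be sketched in a few lines. This is an illustrative assumption of how multi-model agreement might translate into a confidence score and a review decision; the function names, agreement test, and threshold are hypothetical, not Fisent's actual implementation.

```python
def consensus_confidence(base_outcome: str, peer_outcomes: list[str]) -> float:
    """Return the fraction of peer-model outcomes that agree with the base model.

    A naive exact-match agreement test is used here for illustration; a real
    system would likely compare outcomes semantically.
    """
    if not peer_outcomes:
        return 0.0
    agreeing = sum(
        1 for outcome in peer_outcomes
        if outcome.strip().lower() == base_outcome.strip().lower()
    )
    return agreeing / len(peer_outcomes)


def needs_human_review(confidence: float, threshold: float = 0.67) -> bool:
    """Flag low-consensus outcomes for human intervention (threshold is illustrative)."""
    return confidence < threshold


# Two of three peer models agree with the base model's "approve" decision,
# so confidence is 2/3 and the item is routed for review.
conf = consensus_confidence("approve", ["approve", "approve", "reject"])
flagged = needs_human_review(conf)
```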

Based on the process being automated, the confidence rating is tuned to minimize the risk of false negatives without creating the productivity drag of reviewing excessive false positives, according to Fisent.

Key features of the Fisent BizAI confidence rating capability include:

  1. Multiple confidence assessment methodologies: Different techniques to assess confidence are leveraged based on a customer’s use case and priorities. Multi-model similarity compares the base model’s outcome against responses from multiple peer or superior LLMs to calculate a confidence rating.
  2. Predictive confidence for human review: Helps users determine if a generative AI outcome needs human intervention, effectively identifying items in an automated workflow that require additional scrutiny.
  3. Dynamic weighting: Allows confidence rating to be sensitized based on the criticality of the task.
  4. Semantic confidence ratings: Provides intuitive semantic assessments rather than purely numerical scores, which can be more challenging to interpret.
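Features 3 and 4 above can be illustrated together: a task-criticality setting shifts the threshold, and the numeric score is translated into a semantic label. The band boundaries and criticality thresholds below are illustrative assumptions, not Fisent's actual tuning.

```python
# Hypothetical thresholds: more critical tasks demand higher consensus
# before an outcome is labeled "High" confidence.
CRITICALITY_THRESHOLDS = {"low": 0.5, "medium": 0.7, "high": 0.9}


def semantic_rating(confidence: float, criticality: str = "medium") -> str:
    """Map a numeric confidence to a semantic label, weighted by task criticality."""
    threshold = CRITICALITY_THRESHOLDS[criticality]
    if confidence >= threshold:
        return "High"
    if confidence >= threshold - 0.2:
        return "Medium"
    return "Low"


# The same 0.75 score reads as "High" for a low-criticality task
# but only "Medium" for a high-criticality one.
routine = semantic_rating(0.75, criticality="low")
critical = semantic_rating(0.75, criticality="high")
```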



Generative AI models often lack reliability mechanisms

Generally, generative AI models lack a mechanism to indicate their own reliability, said Adrian Murray, founder and CEO of Fisent.

“Our confidence rating capability solves this by providing a clear, intuitive way for businesses to identify what decisions should be triaged for human intervention,” he added. “It’s about evaluating generative AI outcomes with the same rigor you’d apply to human work, but at a speed and scale not possible with traditional methods.”

Generative AI integration introduces significant security and compliance challenges. Content and prompts submitted to tools may be shared with training databases, raising privacy concerns. Almost 97 percent of surveyed organizations reported breaches linked to generative AI in the past year, while 13 percent of employees share sensitive information with generative AI apps.
