The approach to AI models must be specialized

AI is all the rage in 2023. Online, at conferences, and in articles like this one, you cannot ignore the subject. But AI has been around for a while. So, beyond the hype and headlines, what’s behind its sudden rise as a concern for businesses around the world?

We have reached a critical mass of global connectivity and the computing power now available is fueling the emergence of massive data sets. With extreme computing power, extreme networks, and large data sets (like those used to train large language models (LLMs)), AI has gone mainstream. It’s now both more available and more necessary, which is why there’s so much buzz around it.

And the buzz goes beyond the usual excitement that greets a new technology. AI seems poised to shape every aspect of the future: not only what it means to do business, but also what it means to be human.

These are the big, esoteric questions behind AI. But what does it all mean in everyday practice?

The basis of AI, as noted, is enormous amounts of data, and managing this constant deluge has become one of the biggest information challenges companies must overcome. While interacting with AI may seem simple from the user’s perspective, it involves many advanced technologies working together behind the scenes: big data, natural language processing (NLP), machine learning (ML), and more. Integrating these components – ethically and effectively – requires expertise, strategy and insight.

Mark Morley, Senior Director, Product Marketing, OpenText.

Specialized versus generalized: making the most of AI

The most talked-about AI tools, such as ChatGPT or Bard, are examples of generalized AI. These work by taking data sets from publicly available sources – that is, the entire internet – and processing that data to create output that seems plausible to humans.

But the problem with using generalized AI models in business is that they are subject to the same inaccuracies and biases that we have become accustomed to with the internet more broadly.

For maximum impact, therefore, companies should not rely on generalized AI models. Instead, deploying specialized AI models is the most effective way to manage the flood of data that comes with AI. Specialized AI tools are similar to general-purpose tools in that they are also LLMs, but the major difference is that they are trained on specialized data that is verified by subject matter experts before it enters the LLM.
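
To make this concrete, here is a minimal, hypothetical sketch of the kind of gate such an approach implies: only records signed off by a named subject matter expert are admitted to the fine-tuning corpus. The record structure and field names below are illustrative assumptions, not any particular product’s API.

```python
# Hypothetical sketch: admit only expert-verified records into the
# fine-tuning corpus for a specialized LLM. All names are illustrative.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class DomainRecord:
    text: str
    source: str
    verified_by: Optional[str] = None  # subject matter expert who signed off


def build_training_corpus(records: List[DomainRecord]) -> List[Dict[str, str]]:
    """Keep only records that a named expert has verified."""
    corpus = []
    for record in records:
        if not record.verified_by:
            continue  # unverified content never reaches the training set
        corpus.append({"text": record.text, "source": record.source})
    return corpus


if __name__ == "__main__":
    records = [
        DomainRecord("Verified description of an order-to-cash workflow", "internal wiki", "j.doe"),
        DomainRecord("Unreviewed forum post about invoicing", "public forum"),
    ]
    corpus = build_training_corpus(records)
    print(f"{len(corpus)} of {len(records)} records admitted to the corpus")
```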

Specialized AI algorithms can therefore analyze, understand and act on content that can be trusted for specialist accuracy. Such capabilities are crucial to avoiding the pitfalls we’ve seen so far with generalized AI, such as lawyers including inaccurate ChatGPT-provided information in legal filings. But the question remains: how can companies best deal with the massive amounts of data that arise when they take a specialized approach to AI?

Manage the data flood with specialized AI models

Any successful approach includes effective strategies for collecting, storing, processing and analyzing data. As with any technology project, defining clear objectives and governance policies is critical. But the quality of the data is perhaps even more important. The old saying ‘garbage in, garbage out’ applies here; the success of any specialized AI model depends on the quality of the data, so companies must implement data validation and cleansing processes.
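
As an illustration of what such a validation and cleansing step might look like, here is a minimal Python sketch. The field names and rules are assumptions made for the sake of example; a production pipeline would be considerably more elaborate.

```python
# Hypothetical validation-and-cleansing step, assuming records arrive as
# dictionaries. Field names and rules are illustrative only.
def clean_records(raw_records):
    seen = set()
    cleaned, rejected = [], []
    for rec in raw_records:
        text = (rec.get("text") or "").strip()
        source = (rec.get("source") or "").strip()
        # Validation: reject empty or unsourced records ("garbage in").
        if not text or not source:
            rejected.append(rec)
            continue
        # Cleansing: normalize whitespace and drop exact duplicates.
        text = " ".join(text.split())
        if text in seen:
            continue
        seen.add(text)
        cleaned.append({"text": text, "source": source})
    return cleaned, rejected


if __name__ == "__main__":
    good, bad = clean_records([
        {"text": "  Purchase orders  require approval ", "source": "policy doc"},
        {"text": "Purchase orders require approval", "source": "policy doc"},
        {"text": "", "source": "unknown"},
    ])
    print(f"{len(good)} kept, {len(bad)} rejected")
```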

Data storage infrastructure, lifecycle management, integration between systems and version control should also be considered and planned before deploying a specialized AI model. Ensuring all of this is in place will help companies better handle the large amounts of data generated at the other end, and continuous monitoring is also required to assess model performance.
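
That monitoring can be as simple as regularly scoring the model against an expert-labelled evaluation set and raising an alert when accuracy falls below an agreed threshold. The sketch below is hypothetical: the prediction function, the threshold and the alerting hook are all assumptions rather than any specific product’s interface.

```python
# Hypothetical monitoring check for a specialized model. The predict_fn,
# threshold and alerting behaviour are assumptions, not a real product API.
def evaluate_model(predict_fn, eval_set, accuracy_threshold=0.9):
    if not eval_set:
        return 0.0
    correct = sum(1 for item in eval_set if predict_fn(item["input"]) == item["expected"])
    accuracy = correct / len(eval_set)
    if accuracy < accuracy_threshold:
        # In practice this would page a team or open a ticket.
        print(f"ALERT: accuracy {accuracy:.2%} is below threshold {accuracy_threshold:.0%}")
    return accuracy


if __name__ == "__main__":
    eval_set = [
        {"input": "Is this invoice a duplicate?", "expected": "yes"},
        {"input": "Does this contract auto-renew?", "expected": "no"},
    ]
    evaluate_model(lambda prompt: "yes", eval_set)
```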

But companies must also consider the ethics of AI here, just as they would with general AI. Specialized AI models can be prone to domain-specific biases, and what is considered ethical in one sector may not be so in another, which requires judicious use of specialized AI output. Specialized LLMs may also find it difficult to understand nuanced or context-specific aspects of language, which can lead to misinterpretation of the input and the generation of inappropriate or inaccurate output.

This complexity obviously dictates that human input and continuous monitoring are critical. But it also reinforces the importance of collaboration between departments and across the sector to ensure that any use of AI is both ethical and effective. Sharing data and knowledge can be an important step in improving the quality of the underlying data and, if done well, can also help keep that data secure.

As AI becomes more integrated into our daily work and lives, we will ultimately need to develop processes to handle its results in a scalable and ethical way. Partnership and collaboration are at the heart of this, especially with a technology that affects so many of us at the same time.

This article was produced as part of Ny BreakingPro’s Expert Insights channel, where we profile the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of Ny BreakingPro or Future plc. If you are interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
