Could generative AI work without online data theft? Nvidia’s ChatRTX aims to prove that this is possible
Nvidia continues to invest in AI initiatives and its latest, ChatRTX, is no exception thanks to its most recent update.
According to the tech giant, ChatRTX is a “demo app that allows you to personalize a GPT Large Language Model (LLM) linked to your own content.” The app ingests your PC’s local documents, files, folders, and so on, and essentially builds a custom AI chatbot from that information.
Since it doesn’t require an internet connection, it gives users quick access to answers that might otherwise be buried somewhere in all those computer files. With the latest update, it has access to even more data and LLMs, including Google Gemma and ChatGLM3, an open, bilingual (English and Chinese) LLM. It can also search for photos locally and has Whisper support, allowing users to talk to ChatRTX through an AI automated speech recognition program.
Nvidia uses TensorRT-LLM software and RTX graphics cards to power ChatRTX’s AI. And because it’s local, it’s much more secure than online AI chatbots. You can download ChatRTX here to try it out for free.
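To make the "chat with your own files" idea concrete, here is a deliberately tiny sketch of local retrieval, the general pattern behind apps like ChatRTX. Everything in it is hypothetical illustration: a real system such as ChatRTX uses vector embeddings and a GPU-accelerated local LLM via TensorRT-LLM, whereas this toy stands in for both with simple word-overlap scoring over made-up document snippets.

```python
# Toy sketch of local retrieval: index snippets of text from the user's
# own files, then answer a question by surfacing the most relevant one.
# Word overlap is a crude stand-in for the embeddings + local LLM that a
# real app (e.g. ChatRTX with TensorRT-LLM) would use.

def tokenize(text: str) -> set[str]:
    """Lowercased word set -- good enough for toy overlap scoring."""
    return set(text.lower().split())

def best_snippet(question: str, snippets: list[str]) -> str:
    """Return the stored snippet sharing the most words with the question."""
    q = tokenize(question)
    return max(snippets, key=lambda s: len(q & tokenize(s)))

# Hypothetical stand-ins for documents found on the user's PC.
local_docs = [
    "The quarterly report is saved in Documents/reports/q3.pdf",
    "Vacation photos from 2022 live in Pictures/trips/italy",
    "Router admin password is stored in notes/network.txt",
]

print(best_snippet("where are the vacation photos", local_docs))
# Surfaces the snippet mentioning Pictures/trips/italy
```

The key property the sketch shares with the real thing is that nothing leaves the machine: the "knowledge" is entirely the user's own files, which is what makes the local approach both private and free of scraped training data.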
Can AI escape its ethical dilemma?
The concept of an AI chatbot that uses local data from your PC, rather than training (read: stealing) the online works of others, is quite intriguing. It appears to provide a solution to the ethical dilemma of using copyrighted works without permission and hoarding them. It also seems to provide a solution to another long-term problem that plagues many PC users: finding long-hidden files in your file explorer, or at least the information stuck there.
However, there is the obvious question of how such an extremely limited data pool could negatively impact the chatbot. Unless the user is particularly skilled at training AI, this could become a serious problem down the line. Of course, if it’s only used to locate information on your PC, that limitation is likely fine.
But the purpose of an AI chatbot is to have unique and meaningful conversations. Maybe there was a time when we could have achieved that without the rampant theft, but companies powered their AI with scraped words from other sites, and now the two seem irrevocably linked.
Considering how unethical it is for data theft to be the crucial ingredient that makes chatbots well-rounded enough to avoid feedback loops, Nvidia could represent a middle ground for generative AI. If fully developed, ChatRTX could prove that we don’t need that ethical violation to empower and shape these models, so let’s hope Nvidia gets it right.