Technology leader launches MAGIC, combining an AI incubator with research collaboration

Humberto Farias is closely following the explosion of generative AI.

Farias is co-founder and chairman of Concepta Technologies, a technology company specializing in software development and programming in the areas of mobile, web, digital transformation and artificial intelligence.

For example, he noted that Apple is making generative AI central to the lives of hundreds of millions of iPhone users. But with recent data breaches, patient privacy issues and other IT problems, he said he worries that healthcare IT teams are starting to view AI as a threat rather than a help.

The question is: How can healthcare systems protect valuable patient data while capitalizing on the benefits generative AI provides?

Farias launched the Concepta Machine Advancement and General Intelligence Center, or MAGIC, a collaborative research program, virtual incubator and service center for artificial intelligence and advanced technologies.

Healthcare IT News recently spoke with Farias to learn more about MAGIC and to understand the concerns he’s heard from healthcare CTOs about implementing artificial intelligence. He offered tips and real-world examples for implementing AI and learning securely, and described what he believes should be the primary focus for CIOs, CISOs, and other security leaders in hospitals and health systems as AI and machine learning continue to transform healthcare.

Q. Describe your new organization, MAGIC. What are your goals?

A. Our mission is to push the boundaries of AI research and development while providing practical applications and services that address real-world problems. At MAGIC, we strive to promote groundbreaking research for both fundamental technologies and applied solutions, support and nurture early-stage AI ventures, educate and train professionals in AI skills, provide advisory services, and build a network of collaboration.

Some of our first partnerships include healthcare companies that are focused on improving healthcare for patients, hospitals and clinical teams. They combine assessment, analytics and education, then measure it all to improve healthcare for everyone. Through our partnership, we are implementing AI to make their programs run even more efficiently and cost-effectively for their teams.

We are open to working with large healthcare systems on some of the key issues they face when it comes to AI implementation. We have worked with healthcare systems like AdventHealth on other software technology and are well-equipped to address the unique regulatory and patient safety issues that healthcare faces.

Q. What are some concerns you’ve heard firsthand from healthcare CTOs about implementing AI into their business structures?

A. I’ve heard from healthcare CTOs that their biggest concerns regarding implementing AI into their business structures are still data privacy and security. Healthcare executives want to ensure that the privacy and security of sensitive patient data is a top priority, given the strict regulations of HIPAA and other mandates.

There are also concerns about whether AI solutions are compatible with legacy systems and how they can be integrated, as well as about how AI deployments should be coordinated to ensure they comply with all relevant laws and guidelines.

There are also costs involved in implementing AI, and many healthcare CTOs are uncertain about the return on investment this technology can provide. I am always looking for ways to reduce these costs by collaborating with colleagues and ensuring we are not operating in isolation – learning from mistakes and building on the successes of other industry leaders.

Coupled with this is a lack of skilled personnel to develop, implement, and manage AI systems. Healthcare systems are already struggling with tight budgets and budget cuts, so partnering with an AI research program can fill this need and help advance the use of AI in their institutions.

We are educating healthcare systems on how AI can be used for simple things, like minimizing repetitive administrative tasks, as well as large-scale projects that can improve workflows for healthcare providers and the care of real patients.

Finally, there are always ethical concerns when it comes to AI. Healthcare CTOs want to ensure that AI is used ethically, especially in decisions that directly impact patient care. The biggest concerns in this area are informed consent and data bias.

Patients need to be aware that AI is part of their care, and health systems need to ensure that the data used to train AI algorithms does not lead to biased healthcare decisions that widen disparities in outcomes across demographic groups.

Q. What tips and best practices can you provide for deploying AI safely and reliably, especially with regard to sensitive medical data?

A. There are several ways healthcare leaders can safely and securely deploy AI. One of these is through data encryption. It is important to always encrypt sensitive medical data, both in transit between networks and at rest in systems of record, to protect against unauthorized access.
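As one concrete illustration of the in-transit half of that advice, here is a minimal sketch, using only Python's standard-library ssl module, of a TLS client configuration that verifies certificates and refuses outdated protocol versions. The function name and policy choices are illustrative assumptions, not something Farias prescribed; encryption at rest would typically rely on a vetted library or database-level encryption rather than hand-rolled code.

```python
import ssl

def make_transit_context() -> ssl.SSLContext:
    """Build a TLS context suitable for sending sensitive data in transit."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy TLS 1.0/1.1
    ctx.check_hostname = True                     # verify the server's identity
    ctx.verify_mode = ssl.CERT_REQUIRED           # require a valid certificate
    return ctx

ctx = make_transit_context()
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

The point of the sketch is that the safe settings are explicit and centralized, so every connection carrying patient data inherits the same policy instead of each integration choosing its own.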

Another tip is to implement robust access control mechanisms to ensure that only authorized personnel have access to sensitive data. Large healthcare centers should use multi-factor authentication, role-based access controls, and 24/7 monitoring. Performing regular security audits adds another layer of assurance, allowing teams to quickly detect and respond to potential threats.
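The role-based access control idea above can be sketched in a few lines of Python. The roles, permission names, and audit logging here are illustrative assumptions for the sake of the example, not a description of any particular hospital system; a real deployment would sit behind an identity provider and MFA.

```python
from enum import Enum
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("access-audit")

class Role(Enum):
    CLINICIAN = "clinician"
    BILLING = "billing"
    IT_ADMIN = "it_admin"

# Each role maps to the smallest permission set it needs (least privilege).
PERMISSIONS = {
    Role.CLINICIAN: {"read_phi", "write_clinical_notes"},
    Role.BILLING: {"read_billing"},
    Role.IT_ADMIN: {"manage_accounts"},
}

def can_access(user: str, role: Role, permission: str) -> bool:
    """Check a permission and record the decision for later audit review."""
    allowed = permission in PERMISSIONS.get(role, set())
    audit_log.info("user=%s role=%s perm=%s allowed=%s",
                   user, role.value, permission, allowed)
    return allowed

print(can_access("dr_lee", Role.CLINICIAN, "read_phi"))      # True
print(can_access("dr_lee", Role.CLINICIAN, "read_billing"))  # False
```

Logging every decision, allowed or denied, is what makes the regular security audits mentioned above practical: the audit trail already exists when reviewers ask who touched what.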

Regulatory compliance is another way to build trust: align AI implementations with frameworks such as HIPAA and GDPR. Another tip is to develop and adhere to ethical guidelines for AI use, with a focus on fairness, transparency and accountability.

For example, Stanford Health Care has an ethics committee that reviews AI projects for potential ethical issues.

Q. As AI becomes increasingly popular in healthcare, what do you see as the primary focus of CIOs, CISOs and other security leaders at hospitals and health systems?

A. The use of AI is inevitable in healthcare, so the primary focus for CIOs, CISOs and other security leaders should be to continue to ensure data privacy and security, and protect patient data from breaches. The highest priority should be to ensure that programs comply with regulations.

Healthcare leaders must also focus on developing a scalable and secure IT infrastructure that can support AI applications without sacrificing performance or security. To support this infrastructure, leaders must continuously train staff at every level – from front-line staff to providers to the C-suite – on the latest AI technologies and security practices to mitigate risks associated with human error.

To ensure a watertight plan, healthcare leaders must develop and maintain a comprehensive risk management strategy that includes regular assessments, incident response plans, and continuous improvement.

Collaboration is essential to create the best team ready to tackle the challenges of the world we live in. We encourage collaboration between IT, security, and clinical teams to ensure AI solutions meet the needs of all stakeholders and that security and compliance standards are maintained.

The HIMSS AI in Healthcare Forum will take place September 5-6 in Boston.

Follow Bill’s HIT reporting on LinkedIn: Bill Siwicki
Send him an email: bsiwicki@himss.org
Healthcare IT News is a publication of HIMSS Media.