Microsoft doesn’t want us to be scared of AI – but is it doing enough?
Microsoft has become one of the biggest names in artificial intelligence and brought us the quirky and sometimes strange Bing Chat AI. The company has invested heavily in AI, and has now set out three commitments to keep the company, and the technology, in check. Laws and regulations are rushing to catch up with AI, falling so far behind where we need them to be that OpenAI's CEO bounced around government institutions to beg for regulation.
In his statement to Congress earlier in the year, Sam Altman was clear that the dangers of unregulated AI and diminishing trust are a global issue, ending with the strong statement that “this is not the future we want.”
To help keep AI in check, Microsoft's "AI Customer Commitments" aim to act as both self-regulation and customer reassurance. The company plans to share what it's learning about developing and deploying AI responsibly and to assist users in doing the same.
Antony Cook, Microsoft Corporate Vice President and Deputy General Counsel, shared the following core commitments in a blog post:
"Share what we are learning about developing and deploying AI responsibly"
The company will share knowledge and publish key documents for consumers to learn from, including its internal Responsible AI Standard, AI Impact Assessment Template, Transparency Notes and more. It will also roll out the training curriculum it uses to train Microsoft employees, giving us insight into the "culture and practice at Microsoft".
As part of the information share, Microsoft says it will "invest in dedicated resources and expertise in regions around the world" to respond to questions and implement responsible AI use.
Having global "representatives" and councils would not just boost the spread of the technology to non-western regions, but would also remove the language and cultural barriers that come with having the technology heavily based and discussed in English. People will be able to raise their concerns in a familiar language, with people who really understand where those concerns are coming from.
“Creating an AI Assurance Program”
The AI Assurance Program is essentially there to help ensure that however you use AI on Microsoft's platforms, it meets the legal and regulatory requirements for responsible AI. This is a key factor in keeping use of the technology safe and secure, as most people wouldn't consider legality when using Bing Chat AI, and this kind of transparency helps users feel safe.
Microsoft says it will also bring customers together in "customer councils" to hear their views and receive feedback on its most recent tools and technology.
Finally, the company has committed to playing an active role in engaging with governments to promote AI regulation, presenting proposals to government bodies and its own stakeholders to support appropriate frameworks.
“Support you as you implement your own AI systems responsibly”
Microsoft also plans to put together a "dedicated team of AI legal and regulatory experts" around the world as a resource for you and your business when using artificial intelligence.
It's a pleasant addition to Microsoft's AI commitments that the company is considering users who rely on its artificial intelligence capabilities for their businesses, as many people have now slowly incorporated the tech into their ventures and had to figure out and balance their approach on their own.
Having resources from the company behind the tools will prove to be incredibly helpful for business owners and their employees in the long run, giving them steps and information they can rely on when using Microsoft’s AI responsibly.
Too little, too late
Microsoft publicizing its AI commitments not long after cutting its pioneering Ethics and Society team, which was involved in the early work of its software and AI development, is a bit strange, to say the least. It doesn't fill me with a lot of confidence that these commitments will be adhered to if the company is willing to get rid of its ethics team.
While I can acknowledge that artificial intelligence is an unpredictable technology at the best of times (we have seen Bing Chat do some very strange things, after all), the AI Customer Commitments Microsoft is now putting in place are something we should have seen a lot earlier. Putting the technology out into the world and then discussing how to care for the people using it after the fact is a failure on Microsoft's part.