Motorola Solutions providing clear, consolidated information about use of AI

The speed, scale and sophistication of today’s safety threats can outpace human capacity. Artificial intelligence (AI) is essential to keeping people safer, and it must be designed and deployed transparently.

The complexity of today’s AI applications is expanding in physical security, with more AI models leveraging data to tune performance and drawing on third-party application programming interfaces (APIs). In this context, it can be difficult to find clear, consolidated, transparent information about AI’s use in enterprise security technologies.

Motorola Solutions is taking a layered approach to help users easily understand how AI is used across the company’s technologies, starting at a high level with AI labels affixed to products, and getting more granular with structured information about AI testing, assessments and more.

“We are aiming to provide clarity to the user and those they help protect to increase trust and transparency in AI innovation,” says Hamish Dobson, Corporate Vice President, Avigilon and Pelco Products, Motorola Solutions.

The AI labels, which have been compared to nutrition labels used on consumer products, are designed to be clear and easy to read at a glance. Each label explains the type of AI used, who owns the data processed, the human controls available, and the purpose behind the product’s specific application of AI.
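The fields described above could be modelled as structured data. The following is a purely illustrative sketch; the field names and values are assumptions for the sake of example, not Motorola Solutions’ actual label schema:

```python
from dataclasses import dataclass

@dataclass
class AILabel:
    """Illustrative sketch of the AI label fields described in the article.

    Field names are hypothetical; the company's actual label format
    is not published here.
    """
    ai_type: str         # the type of AI used (e.g. computer vision)
    data_ownership: str  # who owns the data the AI processes
    human_controls: str  # the human oversight and override controls
    purpose: str         # the product's specific application of AI

# A hypothetical label for a licence plate recognition camera
label = AILabel(
    ai_type="Computer vision (character recognition)",
    data_ownership="Customer-owned and controlled",
    human_controls="Operator can review and override AI outputs",
    purpose="Recognise licence plate characters of vehicles in view",
)
print(label.purpose)
```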

“We regularly seek input from external stakeholders – including customers, partners, consultants, investors, policymakers, and community members – on many aspects of our business,” says Dobson. “We briefed select customers and industry groups on our AI labels initiative and received positive feedback.” He continues: “Like us, customers and industry groups understand the importance of AI transparency and are looking for ways to clarify AI usage in public safety and enterprise security.”

Across the industry, AI transparency efforts have taken a variety of forms, such as model cards, transparency notes and, in Motorola Solutions’ case, AI labels. The company pledges to continue engaging with industry participants on efforts to advance the overall pace, adoption and maturity of responsible innovation initiatives across the industry, says Dobson.

The AI labels help to increase dialogue and understanding of AI’s use in the security technologies that help to keep people safer. Motorola Solutions is looking to inform customers about how AI is being used to automate mundane tasks and prioritise information that may be critical to performing their roles. For example, a business using Motorola’s L6Q licence plate recognition camera in its parking lot could view the AI label and see that AI is used to help recognise licence plate characteristics of vehicles within its view.

The label would also help the customer understand that they maintain ownership and control of the data AI can process and can determine the data retention period. “By providing knowledge of where and what type of AI is being used, our customers can better understand what they are deploying, configure settings appropriately and inform their constituents,” says Dobson.

AI-assisted experiences should be designed to be accountable and transparent, according to Motorola Solutions. AI outputs should have human oversight, and a user should understand the sources of data from which suggestions were drawn. AI labels highlight that the data AI can access is customer-owned and controlled, helping to increase confidence that AI outputs are based on that customer’s specific data.

The AI label’s “first or third party model” section explains the source of the AI model. A first-party model is developed in-house by Motorola Solutions. A third-party model is developed outside of Motorola Solutions and made available by a third-party vendor; however, it may be customised by Motorola Solutions.
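The model-source distinction above can be sketched as a small data type. This is an assumed representation for illustration only, not a published Motorola Solutions format; note that, as the article states, a third-party model may still be customised in-house:

```python
from dataclasses import dataclass
from enum import Enum

class ModelSource(Enum):
    FIRST_PARTY = "first-party"   # developed in-house by Motorola Solutions
    THIRD_PARTY = "third-party"   # developed and supplied by an outside vendor

@dataclass
class ModelProvenance:
    """Hypothetical record for the label's "first or third party model" section."""
    source: ModelSource
    customised: bool  # a third-party model may be customised by Motorola Solutions

# Example: a third-party model that has been customised in-house
prov = ModelProvenance(source=ModelSource.THIRD_PARTY, customised=True)
print(prov.source.value)
```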

“This section aims to foster dialogue with our customers around Motorola Solutions’ role in testing, training, and refining the AI model or models used in our products,” says Dobson.

Helping to propel the AI labels initiative, the Motorola Solutions Technology Advisory Committee (MTAC) is a multidisciplinary group that advises the company on the responsible and ethical use of technologies, including data and AI.

MTAC continuously explores new ways to enhance trust with customers and the communities they serve, while helping to keep Motorola Solutions a step ahead of industry trends in technology’s responsible design, development, and use, according to the company.

The MTAC “Blueprint” sets out the core principles that drive the approach. “We’re excited by the opportunity to continue to lead in this area through additional innovation, thought leadership, and stakeholder engagement,” says Dobson.

Rather than replacing human decision-making, AI technologies will augment it. Human-centred design is a core principle of the responsible technology “Blueprint” Motorola Solutions is developing.

New capabilities specifically augment human skills and capacity with the goal of helping humans spend time on what matters most during a safety or security incident, applying their unique judgement, knowledge, and oversight in high-stakes environments.

“We purposefully deploy AI to augment human focus, effort and performance,” says Dobson.
“We design AI to maximise human strengths like judgement and reasoning and to adapt to changing roles, tasks, risk levels, and cognitive states while keeping AI outputs traceable and transparent, whereby the user can easily see, check and override AI’s recommendations.”