
A Pro-Innovation Approach to AI Regulation: Impacts and Opportunities


The UK government has officially adopted a pro-innovation, light-touch regulatory approach to artificial intelligence (AI). This strategy aims to foster innovation while addressing potential risks associated with AI technologies. This article will explore the possible impacts of this regulatory framework on model developers, application developers, companies using AI within their business flows, and considerations for businesses before initiating AI initiatives. Additionally, we will align these insights with our PersonPlus.AI strategy, which focuses on augmenting human creativity and productivity.


Possible Impact on Model Developers

The UK's regulatory approach targets the most powerful general-purpose AI systems, emphasizing transparency, safety, and accountability. Model developers may need to adhere to new dynamic thresholds based on computational usage and capability benchmarking to ensure compliance (A pro-innovation approach…). These measures aim to mitigate the risks associated with highly capable AI systems and ensure responsible development.


Implications:

  1. Increased Accountability: Developers must ensure their models are safe, transparent, and fair. This involves rigorous testing and benchmarking to meet regulatory standards (A pro-innovation approach…).

  2. Adaptation to Dynamic Thresholds: Developers must stay abreast of evolving benchmarks and thresholds set by regulatory bodies, which may change as AI capabilities advance (A pro-innovation approach…).

  3. Focus on Risk Mitigation: Emphasizing pre-deployment testing and risk assessment will require developers to integrate robust risk management practices throughout the AI model lifecycle (A pro-innovation approach…).


Implications for Application Developers

This regulatory framework will also affect how application developers integrate AI models into their products. They must ensure the AI models they use comply with the UK's safety, transparency, and accountability standards. We should soon see models carrying a "seal of compliance," giving application integrators confidence that the model builder has done this due diligence.


Implications:

  1. Compliance Assurance: Developers must verify that their AI models meet regulatory standards (A pro-innovation approach…). While model providers may offer a seal of assurance, the application integrator remains responsible for ensuring that their application and use of the AI comply.

  2. Transparency Requirements: Clear documentation and transparency about how AI models are used within applications will be crucial (A pro-innovation approach…). No standard for this transparency yet exists. I would strongly recommend developing such a standard "labeling" system to give end-consumers comfort and an easy way to confirm compliance, similar to other industries that have created a "Good Seal of XXX," with the details living in the T&Cs of the software documentation. Consumer advocacy organizations must support the general consumer here if full adoption is expected. There does not yet appear to be clear industry leadership on this component of the regulation.

  3. Risk Monitoring: Continuous monitoring of AI models post-deployment ensures they do not introduce unforeseen risks or biases into applications (A pro-innovation approach…). This is another industry opportunity: a neutral watchdog group where consumers can report and provide evidence of potentially harmful behavior. This organization would also become a one-stop shop for consumers to learn about AI applications, similar to the BBB (Better Business Bureau).
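To make the watchdog idea concrete, the intake side of such post-deployment monitoring could be as simple as a structured report record plus severity triage. This is a minimal sketch only; the `IssueReport` fields, severity tiers, and application names below are all hypothetical, not part of any regulatory standard.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical severity tiers for triaging consumer reports.
SEVERITIES = ("low", "medium", "high")

@dataclass
class IssueReport:
    application: str      # the AI application being reported
    description: str      # what the consumer observed
    severity: str         # one of SEVERITIES
    reported_at: datetime

def triage(reports):
    """Surface the highest-severity reports first for the watchdog's review queue."""
    order = {s: i for i, s in enumerate(SEVERITIES)}
    return sorted(reports, key=lambda r: order[r.severity], reverse=True)

reports = [
    IssueReport("ChatAssist", "gave confident but wrong medical advice", "high",
                datetime.now(timezone.utc)),
    IssueReport("PhotoTagger", "mislabeled a family photo", "low",
                datetime.now(timezone.utc)),
]
print(triage(reports)[0].severity)  # → high
```

Even a sketch like this shows why a neutral body helps: a shared record format and triage order only create consumer confidence if every application is reported and ranked the same way.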


Possible Impacts on Companies Using AI Models

Companies leveraging AI within their business processes must consider several factors to align with the new regulatory framework. This includes ensuring the AI technologies they deploy comply and do not introduce significant risks to their operations, customers, or employees.


Implications:

  1. Due Diligence: Companies must perform due diligence on the AI models and applications they use, ensuring they comply with the regulatory standards (A pro-innovation approach…). This should encourage model builders and application integrators to follow the recommendations outlined above, creating a practical on-ramp for business adoption and accelerating the integration of this technology into business processes. Such a standard would fit neatly into the existing SOC 1 and SOC 2 compliance models.

  2. Training and Awareness: Employees need to be trained on the implications of using AI technologies and the regulatory requirements associated with them (A pro-innovation approach…). For me, this is the most significant barrier to adoption. The best technology in the world is transparent: the average person has no idea HOW a phone works but does know HOW to use one. As I'm called on to help small business owners understand and adopt AI technologies, separating the hype from reality, and the practical applications of AI from the "pie-in-the-sky" ones, is a difficult hurdle to overcome. Unlike knowing how to use a phone, users of AI must stay vigilant with a system that is both compelling and compelled to answer: not everything it produces is correct, but it certainly looks that way. To be effective, early adopters will need to understand far more about this technology than previous innovations have demanded of their users.

  3. Accountability Structures: Establish clear accountability and governance structures within the organization to manage AI-related risks effectively (A pro-innovation approach…). Proper accountability measures can only be implemented once your organization understands how to use AI effectively. How can you hold someone accountable who doesn't truly understand the tool they're using?


Considerations for Businesses Before Starting an AI Initiative

Before embarking on an AI initiative, businesses should consider the following to ensure compliance with the UK's regulatory framework:


  1. Risk Assessment: Conduct thorough risk assessments to identify potential risks associated with AI deployment (A pro-innovation approach…). This includes fully understanding the nature of your model, how your software vendor has integrated that model, and what measures your vendors are taking to protect against risks. This becomes the basis of your risk assessment; you then document and plan for items specific to your business. A risk assessment plan should include model-level mitigations, integration-level mitigations, and company-level mitigations. Each level becomes more specific to your use cases and business culture.

  2. Regulatory Compliance: Ensure that any AI technology adopted complies with the UK's regulatory requirements on transparency, safety, and accountability (A pro-innovation approach…). Once you have your risk assessment, perform a functional alignment with the compliance framework and make any required risk-mitigation revisions to bring yourself into alignment. This is an iterative process.

  3. Continuous Monitoring: Implement systems for continuously monitoring and evaluating AI systems to promptly identify and mitigate emerging risks (A pro-innovation approach…). You should regularly revisit your risk assessment plan, re-align with the regulatory framework, and make adjustments as required. The results of this regular review should be a public report demonstrating your commitment to transparency. This requires a robust data collection system to perform the monitoring and an issue-management intake system for end-users to report issues as they occur. You should also have a first-responder plan to address high-risk events as they arise.

  4. Stakeholder Engagement: Engage with stakeholders, including employees, customers, and regulators, to foster trust and transparency in AI initiatives (A pro-innovation approach…). All the exercises listed here should be well documented and cleansed for public reporting, similar to a pentest report. Treat them as auditable items, similar to SOX, SOC 1, SOC 2, and security testing. We can ensure strong adoption of this technology by building high consumer confidence.
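The layered risk-assessment plan described above — model-level, integration-level, and company-level mitigations, iteratively aligned with the regulatory framework — can be sketched as a simple register. This is an illustrative structure only, not an official compliance tool; the level names, fields, and example risks are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical mitigation levels, from most generic to most business-specific.
LEVELS = ("model", "integration", "company")

@dataclass
class Mitigation:
    level: str               # one of LEVELS
    risk: str                # the risk being addressed
    control: str             # the mitigation in place
    compliant: bool = False  # set True once aligned with the framework

@dataclass
class RiskAssessmentPlan:
    mitigations: list = field(default_factory=list)

    def add(self, level, risk, control):
        assert level in LEVELS, f"unknown level: {level}"
        self.mitigations.append(Mitigation(level, risk, control))

    def gaps(self):
        """Mitigations not yet aligned with the regulatory framework."""
        return [m for m in self.mitigations if not m.compliant]

plan = RiskAssessmentPlan()
plan.add("model", "hallucinated output", "pre-deployment benchmark testing")
plan.add("integration", "unlogged AI decisions", "audit logging in the vendor layer")
plan.add("company", "untrained staff", "mandatory AI-literacy training")

# The iterative alignment step: review each gap, revise the control,
# and mark it compliant once it matches the framework.
for m in plan.gaps():
    m.compliant = True  # after a (hypothetical) compliance review

print(len(plan.gaps()))  # → 0
```

The point of the three levels is the same as in the list above: each pass through `gaps()` and re-alignment is one iteration of the review cycle, and the cleansed register is what you'd publish in the regular transparency report.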


Positive Look-Ahead and Person Plus AI Strategy

The UK's light-touch regulatory approach provides a balanced framework that supports innovation while ensuring safety and accountability. This approach aligns well with the Person Plus AI strategy, which aims to enhance human creativity and productivity through AI. By focusing on augmenting human capabilities rather than replacing them, businesses can leverage AI to drive innovation and improve outcomes.


Regulatory Structures Supporting Person Plus AI:

  1. Transparency and Explainability: Ensuring AI systems are transparent and explainable helps users understand how AI augments their work, fostering trust and adoption (A pro-innovation approach…). In scenarios where AI has evolved to eliminate redundancy in a job, a transition plan should be available to help those workers identify their place in the value chain. Thinking through these transitions in advance will foster trust, accelerate adoption, and position the business for the growth this technology will produce.

  2. Safety and Robustness: Emphasizing the safety and robustness of AI systems ensures that AI tools are reliable and enhance productivity without introducing significant risks (A pro-innovation approach…). Develop your framework for adoption. Demonstrating that you've adopted the Person Plus AI strategy will alleviate fear and allow focus on improvement and effectiveness. You can realize significant gains with this technology, but achieving them requires considerable thought, care, and compassion. Establish your focus on safety and robustness from the beginning, and be accountable and trustworthy in adopting this technology responsibly. You may be slower to market, but you're more likely to succeed and realize exponential growth while competitors who skip this governance rigor burn out.

  3. Fairness and Accountability: Fostering fairness and accountability in AI systems ensures that AI augments human efforts ethically and responsibly (A pro-innovation approach…). Fairness and accountability are fundamental to understand and get right. Leveraging this tool to enhance human productivity is an infinite game; using it to diminish or replace humans has a finite, maximum potential for gain that eventually becomes unsustainable.


Regulation: A Better Way Forward

The UK's pro-innovation regulatory approach to AI presents an opportunity for businesses to innovate while ensuring responsible AI development and deployment. By aligning with this framework, companies can leverage AI to augment human creativity and productivity, driving positive outcomes and fostering a culture of innovation and trust.


Time and again, we see disruptive technology go through this adolescent stage of development, where universal standards are created, rebelled against, and finally agreed upon: the charging port for EVs, VHS tapes, compact discs, USB ports for connecting devices to computers, ATM technology for accessing cash, and the list goes on. The only difference here is that, in most cases, the marketplace tends to drive the requirements, selection, and adoption of these standards. However, the accelerated rate of change and innovation surrounding the development of AI makes this revolution unusual. Most technology takes years to reach this adolescent stage once it has reached the GenPop (general population) in sufficient quantities to require it. I have a box of cables of various types and standards in my closet to prove it, and another box filled with old cell phones, each wrapped tightly around its charger.


Given the speed at which this technology is entering the market, and the profound effect that adopting it (or not) could have on virtually every facet of your business and personal life, we should welcome a regulatory framework to help guide its development and use. But the key to this adoption is the creation of new industries to monitor, grade, and provide assurance that the framework is being met with the highest standards of rigor, transparency, and trust.


So, who's stepping up to do this critical part?

