This is Icebreaker One’s response to Ofgem’s call for input on the use of AI in the energy sector. A Google Document version is available here. Icebreaker One is a neutral non-profit that works on data sharing and sustainability; our mission is to make data work harder to deliver net zero. Our consultation response reflects that mission and our goal to make it easy to find, access, and trust the data needed to reach net zero.

Please note that throughout this consultation, Icebreaker One uses the terms Open, Shared and Closed data as defined here.

If you have any questions about our submission or require clarification, please do not hesitate to contact us via openenergy@ib1.org. Thank you for considering our submission.

Call for input response:

1. Do you agree with the overall approach to identify how the five AI principles are captured by the current legislative and regulatory framework that applies to the energy sector?

We are particularly interested in your views around the extent current licence obligations capture either directly or indirectly the five AI principles.

Icebreaker One comments on the five AI principles laid out:

Accountability and governance principle

The responsibility element does not explicitly mention the data ecosystem around AI (data is mentioned separately under auditability and is tacitly inferred under liability). IB1 recommends that the data ecosystem, and its integration with the data governance landscape, be acknowledged explicitly rather than tacitly.

There is no mention of AI, energy and the connection with the UK’s net zero / climate targets. IB1 believes this is a key part of governance and we would encourage its codification. We recommend that the developing AI governance landscape codifies a requirement for AI use in the energy sector to demonstrably contribute to the UK’s net zero targets, and for this requirement to be open to monitoring and audit. Without codification of this principle, there is a risk that AI systems are established to optimise non-environmental goals while creating negative environmental impacts.

There is also a risk of AI systems generating large – possibly unnecessary – increases in energy demand. Both the impacts and the demand profile of AI use in the energy sector should be subject to scrutiny and appropriately governed to ensure they contribute meaningfully to, or at the very minimum do not actively harm, the UK’s net zero targets.

Contestability and redress principle

The role of data governance will also affect this principle. Data governance is complex, and is likely to become even more so with AI. It should be better integrated into this principle to ensure that parallel mechanisms are not created that assign different stakeholders different rights and processes which could overlap or contradict each other.

Fairness principle

Ofgem’s information session suggested that AI governance could adopt definitions of fairness that already exist in energy sector regulation. This is problematic for two main reasons:

  1. There are multiple, differing definitions of fairness in use across different parts of the sector and energy governance, which creates the potential for confusion or gaps.
  2. Existing regulatory definitions of fairness focus primarily on economic fairness, conceptualising people primarily as individual ‘consumers’. IB1 strongly advocates for the adoption of a broader concept of social sustainability in defining fairness. This must conceptualise people beyond their economic roles and should also be capable of viewing people in terms of groups. This approach is vital to assessing a more holistic range of AI impacts beyond the individualised economic sphere. It can also be actioned in AI governance to ensure that governance is co-designed through processes which are adequately representative of different stakeholders or stakeholder groups, making the approach collective rather than ‘done to’ from the top down.

There is a separate risk that fairness and bias become conflated, when in fact they are distinct concepts. For example, fairness is a contextual value judgement that can coexist with (or indeed encourage) some degree of bias (e.g. positive discrimination) depending on the envisaged end goal. Further work is required here to produce clear and separate definitions of the two concepts.

2. Do you agree with the initial findings around the potential issues or challenges of applying the AI principles in the energy sector?

We are particularly interested in your views around the novel issues we have identified, the multi-regulatory framework and monitoring and enforcement implications.

We recognise this is a standard approach to risk management. However, qualifiers such as ‘some’ or ‘most’ leave little scope for identifying risks that are particular to certain groups or parts of the sector (i.e. not affecting a majority, but still important). As a result, the approach may not account well for other single points of failure.

3. Do you have examples of AI use cases within the energy sector in Great Britain or elsewhere that we have not included?

N/A for IB1

4. Do you agree with the factors we have identified that could inhibit the adoption of AI in the energy sector?

IB1 would include the concept of trust as either an enabler or inhibitor of the adoption of AI in the energy sector. A lack of trusted data flows into AI systems could lead to poor, potentially unaccountable, decisions made or informed by machines and human-machine systems. This increases risk and makes it harder to quantify, and invest in, the transition to net zero. Better data infrastructure – including licensing, assurance, and security – will make it easier to make net-zero decisions at speed, with confidence and at a global scale. This includes decisions made by machines and human-machine systems.

5. Do you agree with our proposed approach to evaluating the risks associated with the use of AI in the energy sector?

Currently, the proposed approach pays insufficient attention to the data foundations that underpin AI, and to their transparency and accountability. These foundations are key to the use of AI within the energy sector.

6. Do you agree with how we have approached evaluating risks from a consumer perspective?

We would particularly be interested in your views about the issues of fairness, ethics, transparency and explainability.

No, we would support integration with data governance at both consumer and sector levels. IB1’s approach to data governance is designed to support the long-term, cohesive and sustainable development and delivery of data-sharing programmes. We believe it is important for data governance to establish principles, structures, roles and responsibilities, agreed upon by market participants, that enable accurate and timely data sharing at a market-wide scale.

As touched on in our response to question 1, IB1 proposes that the concept of social sustainability be included. IB1 supports co-designed data governance, with the voices of stakeholders represented to ensure the approach is collective rather than ‘done to’ from the top down. We advocate for this approach to also be taken in the development of AI regulation in the energy sector.

7. Do you agree with how we have approached evaluating risks from a market perspective?

We would particularly welcome your views about the issue of algorithms and collusion, and interoperability with international markets.

IB1 would encourage interoperability with international markets. 

8. Do you agree with how we have approached evaluating risks from a company perspective?

We would particularly welcome your views about the issues of governance, accountability and redress, safety, security and robustness, and cyber.

N/A for IB1

9. Do you agree with how we have outlined the risks from a sustainability perspective and the need for guidance for the energy sector on its sustainable use of AI?

As outlined in our response to question 1 under the accountability and governance principle, there is no mention of the connection between AI, energy and the UK’s net zero / climate targets. This is a key part of the emergent AI governance landscape and we would encourage its codification accordingly.

10. Do you agree with our proposed recommendations?

Icebreaker One supports collaboration being a core aspect of developing Ofgem’s AI Strategy, and echoes the importance of a cross-industry forum to co-design the strategy. 

11. Are there any issues that are not covered by our recommendations?

As outlined throughout this response, IB1 advocates strongly for two additional areas of inclusion. Firstly, we suggest that Ofgem’s work on AI governance integrate with developments in data governance, both within the energy sector and in the cross-economic space (e.g. the Smart Data Roadmap, approaches to consent or permission). Secondly, we suggest that Ofgem codifies the relationship and responsibilities of the AI governance landscape in support of the UK’s net zero and climate targets.

IB1 also offers one more general comment. On reviewing Ofgem’s Call for Input and AI Strategy, it is clear that the boundaries of this regulation are both highly political and yet to be determined. Both the AI landscape and the underlying data landscape are highly fluid and increasingly driven by cross-sectoral actors, data flows, use cases, value chains, and supply chains. IB1 suggests that the exact scope of AI regulation in the energy sector, and its integration with cross-economic developments in the data and digital governance spheres, requires further discussion involving an appropriately broad range of stakeholders. Without this process of boundary setting and cross-sector interoperability of rules, there is a risk that energy sector regulation could either lack sufficient ‘teeth’ to be effective, or leave gaps which enable ‘bad actors’ to operate in unregulated but impactful spaces. We note that similar conversations are taking place in other sectors – e.g. financial markets – where the use of AI is accelerating, and we believe there would be benefit in coordination in this space.

12. Should certain recommendations and issues be prioritised over others?

Icebreaker One supports collaboration and sector buy-in being a core aspect of developing Ofgem’s AI Strategy, and echoes the importance of a cross-industry forum to co-design the strategy.