This is Icebreaker One’s response to the Department for Energy Security and Net Zero’s Data for AI in the energy system: call for evidence.

Please note that throughout this response, Icebreaker One uses the terms Open, Shared and Closed data as defined here.

If you have any questions about our submission or require clarification, please do not hesitate to contact us via policy@ib1.org. We have omitted questions which we did not answer.

Call for evidence response:

1. What energy problem do you want to solve? 

A wide range of energy use cases is identified and highlighted in IB1’s response to DESNZ’s Developing an energy smart data scheme: call for evidence (question 14). Beyond specific use cases, there are core principles IB1 recommends embedding.

Smart Data becomes effective for AI and decision-making when it is connected

In terms of sector prioritisation, use cases requiring cross-sector interoperability and cohesion offer the greatest immediate ability to create impact, with a manageable degree of complexity in rollout.

Regardless of use case, codify a requirement to contribute to net zero

As mentioned in IB1’s response to Ofgem’s AI in the energy sector guidance consultation, we acknowledge and appreciate Ofgem’s commitment to encouraging innovation while helping the UK meet its net zero target and other associated targets. As mentioned in IB1’s May 2024 AI consultation response, IB1 recommends that the developing AI governance landscape codify a requirement for AI use in the energy sector to demonstrably contribute to the UK’s net zero targets, and for this requirement to be open to monitoring and audit. Without codification of this principle, there is a risk that AI systems are established to optimise non-environmental goals while creating negative environmental impacts.

IB1 acknowledges the risk of AI systems generating increases in energy and water demand. Both the impacts and the demand profile of AI use should be subject to scrutiny and appropriately governed to ensure they contribute meaningfully to the UK’s net zero targets.

2. What kind of data is needed? 

All identified IB1 use cases, detailing the value, kind, potential users, and scale of the required data, are outlined here: https://ib1.org/energy/reports/ 

3. What work is needed to create or enable a useable dataset, including making sure it can be easily combined with other datasets? 

IB1 observes that this list concentrates on technical barriers to data use but fails to highlight legal, licensing and commercial considerations. Cost, usage and IP conditions may well hamper otherwise technically possible uses of the data. We recommend surfacing this information early to mitigate five risks:

  1. Regulatory and compliance complexity: Data licensing must align with compliance rules around grid data, market data, and critical infrastructure. It is important to ensure data inputs to AI systems, and the outputs of the AI, remain compliant.
  2. Third-party data dependencies: AI models in energy often rely on weather feeds, satellite imagery, market pricing, and sensor data from multiple vendors. Each source carries its own licensing terms around permitted use, commercial exploitation, and AI training rights. Identifying these dependencies early prevents data supply chain disruptions during development, or worse, after deployment.
  3. Intellectual property and model ownership: Who owns the AI model trained on licensed data? Many data providers now include clauses that restrict or claim rights over derivative works, including trained models.
  4. Onward data publishing and monetisation: Energy sector companies typically want to share or sell AI-derived insights. Licensing terms set upstream can block valuable downstream opportunities. 
  5. Long-term data access and continuity risk: Many foreseeable AI systems in the energy sector (e.g. predictive maintenance, load forecasting) need consistent, long-term data access. Identifying long-term data rights is critical to operational resilience.

Governance to enable usable datasets which can be combined with other datasets 

As mentioned in IB1’s response to Ofgem’s AI in the energy sector guidance consultation, IB1 encourages cross-sector collaboration and learning wherever possible. We recommend engaging across sectors (e.g. water, transportation, local authorities) and working with citizen advocacy groups to learn from best practice, ensure guidance is consistent for cross-sector use cases (hydrogen, electric vehicles, electrifying public transport, etc.), and understand the impact of AI guidance on different socio-economic stakeholder groups.

As described in IB1’s AI positioning statement, IB1 supports a hybrid governance model, combining robust oversight with decentralised data sharing, including smart contracts and digital identity solutions.

AI must be designed to mitigate bias and discrimination, ensuring fair access to economic opportunities, financial services, and public resources. We support governance which is co-designed through processes which are adequately representative of different stakeholders or stakeholder groups, ensuring that the approach is collective rather than ‘done to’ from the top down.

To mitigate risks and enable data sharing at scale for AI use, the industry must consider more than just the dataset. For an identified use case, it needs to collectively determine:

  • User needs & impact: commercial priorities, business cases, and prospective new products and services to be unlocked.
  • Technical infrastructure: shared ontologies, APIs, schemas and standards to support data exchange. 
  • Licensing & legal: data sharing agreements, modes of redress and liability frameworks.
  • Engagement & communications: common language, stakeholder engagement and recruitment.
  • Policy: alignment with corporate policy and industry regulations.

At IB1 we do this through a robust governance process and Icebreaking, driving groups of organisations to make the critical decisions required to exchange data with one another.

4. Who would the users of the dataset be? 

All identified IB1 use cases, detailing the value, kind, potential users, and scale of the required data, are outlined here: https://ib1.org/energy/reports/ 

5. What scale does the dataset need to be? 

All identified IB1 use cases, detailing the value, kind, potential users, and scale of the required data, are outlined here: https://ib1.org/energy/reports/ 

6. What would enabling AI use of this dataset unlock? 

All identified IB1 use cases, detailing the value, kind, potential users, and scale of the required data, are outlined here: https://ib1.org/energy/reports/ 

7. What would be the arrangements for ongoing maintenance, governance and curation of the dataset? 

As noted in IB1’s response to Ofgem’s AI in the energy sector guidance consultation, IB1 encourages cross-sector collaboration and learning wherever possible. IB1 recommends engaging across sectors (e.g. water, transportation, local authorities) and working with citizen advocacy groups to learn from best practice, ensure guidance is consistent for cross-sector use cases (hydrogen, electric vehicles, electrifying public transport, etc.), and understand the impact of AI guidance on different socio-economic stakeholder groups.

AI must be designed to mitigate bias and discrimination, ensuring fair access to economic opportunities, financial services, and public resources. IB1 advocates strongly for AI governance to integrate with developments in data governance, both within the energy sector and in the cross-economic space (e.g. Smart Data Roadmap, approaches to consent or permission). 

IB1 believes it is important for data governance to establish principles, structures, roles and responsibilities, agreed upon by market participants, that enable auditable, accurate and timely data sharing at a market-wide scale. As mentioned in IB1’s May 2024 AI consultation response, IB1 recommends that the wider data ecosystem, and its integration with the data governance landscape, be acknowledged.

IB1 notes that training datasets are highly likely to contain sensitive data (though it is also possible to use only anonymised data within a training dataset, retaining privacy in the model itself). Techniques such as aggregation and pseudonymisation of personal data can significantly better protect sensitive data during training. A good example, accepted by Ofgem as appropriate for maintaining privacy, is the creation of energy datasets that aggregate data down to a few households based on which properties are connected to different Low Voltage Feeders. If there are clear controls on which datasets can and cannot be used to train AI models, then the resulting model can be expected to be privacy preserving. Implementing data protections after a model has been trained is much harder to control: a model trained on potentially identifiable data may surface that data in its outputs, and can end up linking datasets together in ways that make them personally identifiable.
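The aggregation approach described above can be sketched in code. This is a minimal, hypothetical illustration (the field names, function name, and the threshold value are assumptions, not a prescribed standard): household smart-meter readings are rolled up to Low Voltage Feeder level, and an aggregate is published only where enough households contribute to it.

```python
from collections import defaultdict

def aggregate_by_feeder(readings, k=5):
    """Aggregate household readings to Low Voltage Feeder level.

    A hypothetical sketch: `readings` is an iterable of
    (feeder_id, household_id, kwh) tuples. Feeders with fewer than
    `k` contributing households are suppressed entirely, so no
    published figure describes too small a group of properties.
    """
    totals = defaultdict(float)      # total kWh per feeder
    households = defaultdict(set)    # distinct households per feeder
    for feeder_id, household_id, kwh in readings:
        totals[feeder_id] += kwh
        households[feeder_id].add(household_id)
    # Publish the mean consumption only for feeders meeting the threshold.
    return {
        feeder: totals[feeder] / len(households[feeder])
        for feeder in totals
        if len(households[feeder]) >= k
    }
```

Applying this kind of threshold before any data leaves the data holder is what makes the downstream training set privacy preserving by construction, rather than relying on controls applied after the model has been trained.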