UK plans to bring rules to increase transparency of AI training data

The creative industry is expected to benefit the most



The UK government is working on plans to increase the transparency of AI training data. The news, first spotted by Neowin, comes via a Financial Times report published today, which says UK ministers are working toward rules on how tech giants train their AI models.

The decision comes after rights holders raised concerns about their work being copied and used without permission or payment. Culture Secretary Lucy Frazer says AI poses a huge problem not only for journalism but also for the creative industry in general.

She says the UK government would first introduce rules covering TV programs, books, and music used by AI companies.

New rules will force companies to be more transparent about AI training data

Once these rules are enforced, companies will need to be more transparent about what content they use for AI training. They will also have to offer remuneration for data used in AI training and give users the option to opt in or out of data collection.

While Frazer outlined the general plans, she didn’t mention how rights holders would be able to check whether companies used their content to train AI models.

According to the report, the government might issue a proposal for the new rules before the election, which could be held in autumn. When the FT asked about the timing, Frazer said she is working with the industry on all those things.

It’s worth noting that the EU could soon introduce similar rules under its AI Act. Those rules will require companies to provide a summary of the data and content used to train their AI models, and AI companies will also have to implement a policy to comply with EU copyright law.

Aware of several governments’ plans, companies are already securing deals with data providers. OpenAI recently closed deals with the Financial Times, Reddit, and Stack Overflow to use their data to train its AI models.

While the creative industry would welcome the UK government’s rules around AI training data, the situation could prove bittersweet for users going forward. A lack of training data could cause knowledge gaps, eventually degrading the quality of AI-generated results. For now, all we can do is wait and see whether the proposed rules become law.

What do you think about the UK government’s decision to bring rules around the transparency of AI training data? Feel free to share your thoughts with our readers in the comments below.
