Japanese AI experts raise concern over bots trained on copyrighted material


Japanese artificial intelligence experts and researchers are urging caution over the use of illegally obtained information to train AI, which they believe could lead to “a large number of copyright infringement cases,” job losses, false information and the leaking of confidential information.

On May 26, the government’s AI strategy council submitted a draft raising concerns about the lack of regulation around AI, including the risk of copyright infringement posed by the technology.

According to Japanese lawmaker Takashi Kii, speaking on April 24, there are currently no laws in Japan that prohibit artificial intelligence from being trained on copyrighted material or illegally acquired information.

“First of all, when I checked the legal system (copyright law) in Japan regarding information analysis by AI, I found that in Japan, whether it is for non-profit or for-profit purposes, whether it is an act other than duplication, or whether it is content obtained from illegal sites,” said Takashi.

“Minister Nagaoka clearly stated that it is possible to use the work for information analysis regardless of the method, regardless of the content,” added Takashi, referring to Keiko Nagaoka, the Minister of Education, Culture, Sports, Science and Technology.

Takashi went on to ask about guidelines for the use of AI chatbots such as ChatGPT in schools, which pose their own set of dilemmas, given that the technology is reportedly set to be adopted by the education system as soon as March 2024.

“Minister Nagaoka answered ‘as soon as possible,’ but there was no specific answer regarding the timing,” he said.

Speaking to Cointelegraph, Andrew Petale, a lawyer and trademarks attorney at Melbourne-based Y Intellectual Property, said the subject still falls into a “gray area.”

“A large part of what people don’t actually understand is that copyright protects the way ideas are expressed; it doesn’t actually protect the ideas themselves. So in the case of AI, you have a human being inputting information into a program,” he said, adding:

“So the inputs are coming from people, but the actual expression is coming from the AI itself. Once the information has been inputted, it’s essentially out of the hands of the person, as it’s being generated or pumped out by the AI.”

“I guess until the legislation recognizes machines or robots as being capable of authorship, it’s really sort of a gray area and sort of a bit in no man’s land.”


Petale added that the issue raises a host of hypothetical questions that will first need to be resolved through legal proceedings and regulation.

“I guess the question is: are the creators of the AI responsible for creating the tool that’s used to infringe copyright, or is it the people who are actually using that to infringe on copyright?” he said.

AI companies, for their part, generally argue that their models do not infringe copyright because their bots transform original works into something new, which they claim qualifies as fair use under U.S. law, where most of the legal action is taking place.

