Sergey Ponomarenko, Senior Manager of LLM Products at MTS AI, spoke about how to protect intellectual property. As soon as the company released its own large language models, Kodify and Cotype, its developers began thinking about how to protect their intellectual property and started studying the topic in depth. In this article, we look at software and hardware methods for protecting large language models and neural networks from theft.
Why It Is Difficult to Prove Rights to an LLM
There is demand for LLMs in every sector: industry, retail, telecom, and banking.
Since not every company has the budget, developer staff, and prompt engineers to build a large language model, many opt for ready-made solutions. For vendors, this is both profitable and risky.
One of the risks is theft of the model. There are cases where, after a pilot is completed, the customer does not delete the model from its servers and keeps using it, or a dishonest employee simply copies the LLM.
Proving your rights to an LLM is quite difficult for several reasons:
Models may generate similar responses because they were trained on the same data, so proving that a given text was produced by a specific LLM is very difficult.
The training process is typically a trade secret, so it is virtually impossible to establish whether borrowed data or algorithms were used to create a model.
Because training methods are similar, models may share characteristics, which makes it extremely hard to confirm that one model was copied from another.
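One known mitigation for the attribution problem described above is statistical watermarking: during generation, the model is nudged toward a pseudo-random "green" subset of the vocabulary, and authorship of a text can later be argued by counting how many of its tokens are green. The sketch below shows only the detection side, under hypothetical assumptions (the hash-based green-list scheme, the 50% green fraction, and the z-score threshold are illustrative choices, not the actual method used by MTS AI for Kodify or Cotype):

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # hypothetical fraction of the vocabulary marked "green" at each step


def is_green(prev_token: int, token: int) -> bool:
    """A token counts as 'green' if a hash of (prev_token, token) falls
    below the green-fraction threshold (illustrative scheme)."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GREEN_FRACTION


def detect(tokens: list[int], z_threshold: float = 4.0) -> tuple[float, bool]:
    """Count green tokens in a sequence and compute a z-score against the
    null hypothesis that the text was generated without the watermark."""
    n = len(tokens) - 1  # number of (prev, next) pairs
    hits = sum(is_green(tokens[i], tokens[i + 1]) for i in range(n))
    z = (hits - n * GREEN_FRACTION) / math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return z, z > z_threshold
```

A watermarking generator would bias sampling toward green tokens at each step; a sufficiently long sequence of mostly green tokens produces a z-score far above the threshold, while ordinary text hovers near zero, which is the statistical evidence a vendor could point to.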
How Developers Protect Their Language Models from Theft