Tigra

#llm #programming
TL;DR: LLMs code well because they were trained on lots of human-written code. Once a large volume of LLM-generated, poorly human-reviewed code shows up, it will dilute the training data, and the models may start to code worse.
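A toy sketch of that dilution argument. Every number here is my own assumption for illustration, not a measurement from the post: each new training corpus mixes fresh human code with synthetic code that is slightly worse than the model that produced it.

```python
# Toy model of training-data dilution; all constants are assumed.
HUMAN_QUALITY = 1.0     # quality of human-written code in the corpus
DEGRADATION = 0.9       # assumed: synthetic code is 10% worse than its source model
SYNTHETIC_SHARE = 0.6   # assumed: fraction of each new corpus that is LLM output

quality = HUMAN_QUALITY
for generation in range(1, 6):
    synthetic_quality = quality * DEGRADATION          # last model's output, degraded
    quality = ((1 - SYNTHETIC_SHARE) * HUMAN_QUALITY   # fresh human code
               + SYNTHETIC_SHARE * synthetic_quality)  # recycled LLM code
    print(f"generation {generation}: corpus quality ~ {quality:.3f}")
```

Under these toy numbers, quality drops each generation and settles near (1 − s)/(1 − s·d) ≈ 0.87; the point is the direction of the drift, not the exact figures.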

My addition: economically, this can be described as a potential bubble in a market of code / human-attention derivatives.

P.S.: This bubble problem concerns all LLM-generated content; code, at least, can be compiled and run to check how it works.

P.P.S.: Imagine a war between two package developers. One of them slightly modifies the other's library, then generates and publishes a lot of code that relies on the patched version and will not work with the original: it either fails to compile, needs small changes to get running, behaves unexpectedly, or has performance problems. LLMs get trained on that code and start producing broken code for anyone working with the original library, and the original falls out of favor.
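A minimal sketch of that failure mode. The function and its strict= keyword are invented for illustration and do not refer to any real package: code trained on the fork passes an argument the original API never accepted.

```python
# Hypothetical scenario: the original library exposes parse(text);
# a hostile fork adds a strict= keyword and floods the web with code using it.

def parse(text):
    """The original library's API: a single positional argument."""
    return text.strip()

# Code generated from examples targeting the fork:
try:
    parse(" {...} ", strict=True)   # runs only on the patched fork
except TypeError as err:
    print(err)  # parse() got an unexpected keyword argument 'strict'
```

This is the mildest case, where the mismatch fails loudly; the subtler variants from the scenario above (wrong behavior, performance problems) would pass this check and surface only at runtime.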

https://www.linkedin.com/posts/echoyin0451_ai-videcode-llm-activity-7347682119072043010-Wxsu

My unpopular opinion on LLMs and code: | Echo Yin
Comments (1)

Something else:

- it's not entirely clear how to deal with the innovation problem:

  - LLMs will keep reproducing established patterns

  - there will be real difficulties adopting new libraries
