I feel that what Generative AI and LLMs illustrate is how much of what we do revolves around the recognition and manipulation of patterns (visually, textually, etc.), and what happens when you have a dense set of pattern data to work with.
Sure, an LLM can spit out tropes, because the nature of tropes is that they are repeated, and so they will be encountered frequently in the dataset. What it can't do is intentionally subvert tropes, choose to avoid tropes, refer to uncommon tropes, or otherwise creatively manipulate tropes. If you start seeing a lot of LLM-generated gaming material, it will quickly become evident that it is all very similar. It is an amalgam of other people's work, and it will always be derivative, because being derivative is core to its design.
@DangerousPuhson an artificial general intelligence ("AGI") may eventually be developed, but it won't be an outgrowth of LLMs. LLMs are a dead end on the road to the evolution of an AGI. The only contribution that LLMs make to developing an AGI is to prove the Turing test to be insufficient - which is valuable information - and to demonstrate how certain areas of research might be fruitless.
I'm going on about this a bit because I do a type of work that techbros are constantly trying to suggest could be replaced by LLMs, and in discussions with these guys it is always clear to me that (a) they don't understand my industry, and (b) they don't really understand how LLMs work. And it is really annoying. "But you could do X!" Dude, I don't need to do X, nobody needs to do X, X has nothing to do with what I do. "But it could do Y!" No, it really can't do Y; incompetent/lazy people have tried to use it for Y, and it has inevitably ended badly, because doing Y expressly requires you to do something that hasn't been done before, and pattern recognition is of no assistance there. It can do Z, which you haven't mentioned, and I know this because we have been using generative AI to do Z for at least 15 years, which you might know if you had a clue about what my work actually entails.
The logic seems to be, "You use words, LLMs use words, therefore LLMs can do what you do." And it is no more true of LLMs than it is of parrots.