There's an interesting article here:
The key point of the article is:
"discovered that 440,445 out of 2.23 million code samples generated in Python and JavaScript using LLM models referenced hallucinated packages"
Bad actors can spot which package names are commonly hallucinated, then publish real malicious packages under those names; when you install the dependencies the AI-generated code references, the malware gets pulled straight into your project.
So unless you have a bomb-proof way of checking that every package referenced in AI-generated code actually exists and is legitimate, you may have serious problems.
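The article doesn't prescribe a fix, but as a rough illustration of what a first-pass check might look like, here's a minimal Python sketch that asks the real pypi.org JSON endpoint whether a suggested package name even exists before you install it. The package names in the example loop are made up for demonstration.

# Minimal sketch: check whether an LLM-suggested package name exists on PyPI
# before blindly running "pip install". Names below are hypothetical examples.
import urllib.request
import urllib.error

def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI has a project registered under this name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # A 404 here usually means the name was hallucinated (or never published)
        return False

# Example: names pulled out of AI-generated code
for pkg in ["requests", "totally-real-utils"]:
    if package_exists_on_pypi(pkg):
        print(f"{pkg}: exists on PyPI")
    else:
        print(f"{pkg}: NOT FOUND - do not install blindly")

Note the obvious limitation: existence alone proves nothing, because a squatter may already have registered the hallucinated name with malicious code behind it. You'd still want to look at the package's age, maintainer, and download history before trusting it.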
Worth thinking about!
MK