AGPLv3 protects against such practice: the clean-room first team acts as a computer network, and the re-implementation is an act of copying the original source code. This view is supported by AGPLv3 Clause 13: “..through some standard or customary means of facilitating copying of software..”.
The full passage from Clause 13 is: "...if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software".
The output of cleanroom engineering is not considered a modified version of the Program. Therefore, users do not need an opportunity to receive the Corresponding Source. The wording about a "standard or customary means of facilitating copying of software" is designed to be inclusive of various delivery mechanisms: mailing a self-addressed stamped envelope to the developer to receive a flash drive or CD, hosting the source code on GitHub, or running a website that lets users download compressed archives.
All that said, there are interesting questions about whether using AI tools to create specifications and then build from those specifications is a form of cleanroom engineering. That's probably a worthwhile avenue to pursue, especially demonstrating whether AI tools can provably isolate the development of the specification from the development of the implementation.
I also checked out the Malus website, and at least one of its claims is questionable. A service like Malus doesn't really eliminate the overhead of license compliance. Organizations that require extensive legal reviews and audits before using components will likely also have robust vendor management processes in place, so the burden would simply shift from reviewing and auditing the component to reviewing and auditing Malus. These organizations also tend to be risk-averse, and claims about "full legal indemnification" through an "offshore subsidiary in a jurisdiction that doesn't recognize software copyright" would raise legal flags.
I'm also interested in what models tools like Malus use, especially since many are trained on open-source projects to begin with. So far, the training piece has been considered fair use in a handful of US cases; the legal status of the output, though, is still to be determined. This is why GitHub has integrated a public code search for the output of its AI agents, and why organizations are investing in static code analysis that searches public code repositories for similar code. AI tools that reproduce open-source code may trigger those licenses, and I don't see any mention of Malus assessing its output against open-source projects.
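To illustrate the kind of check such analysis performs, here is a minimal sketch using Python's standard-library difflib to score how closely a generated snippet resembles a known open-source one. The snippets, the `similarity` helper, and any flagging threshold are all hypothetical; real tools compare against millions of indexed snippets with far more sophisticated matching.

```python
import difflib

# Hypothetical known open-source snippet (for illustration only).
known_oss_snippet = """
def clamp(value, low, high):
    if value < low:
        return low
    if value > high:
        return high
    return value
""".strip()

# Hypothetical AI-generated snippet: same logic, one identifier renamed.
generated_snippet = """
def clamp(val, low, high):
    if val < low:
        return low
    if val > high:
        return high
    return val
""".strip()

def similarity(a: str, b: str) -> float:
    """Return a 0..1 ratio of how similar two code strings are."""
    return difflib.SequenceMatcher(None, a, b).ratio()

ratio = similarity(known_oss_snippet, generated_snippet)
# A real tool would flag matches above some tuned threshold
# and surface the matched project's license for review.
print(f"similarity: {ratio:.2f}")
```

This character-level comparison is deliberately naive; production tools normalize identifiers and formatting, and match at the token or AST level, precisely so that superficial renames like the one above don't hide a copied implementation.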
There are still plenty of open questions, but we're far from the death of open-source software.