Today the European Parliament’s responsible committees voted by a large majority to protect Free Software in the AI regulation. The plenary is called upon to uphold the idea. Likewise, this principle must be anchored in the ongoing Cyber Resilience Act and Product Liability Directive and their upcoming votes.
Could this be clarified/expanded a bit more, with references to official EU votes/documents? How do they want to protect Free Software? I am asking because I just saw this post: EU AI Act To Target US Open Source Software, and it tells a different story.
@JLP analysing the technomancer’s post is probably a good idea. They seem to dislike the risk-checking requirements primarily. Overall, it will be a challenge to understand how commercial, professional Free Software use will fare under the new regulation.
For more details, you can check the committee’s work, e.g.
Artificial Intelligence Act - Vote on 11 May 2023 | Of particular interest | IMCO | Committees | European Parliament
AI Act: a step closer to the first rules on Artificial Intelligence | News | European Parliament
A quote from the latter:
To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses.
There are also links to all documents.
In fact, the question shouldn’t even need to be asked. The major APIs are open source (TensorFlow, for instance, is under the Apache 2.0 license). If programs use proprietary APIs, then yes, the companies that use them should be sanctioned, but those who make open source software (private or public) must be excluded and protected, for good measure.