They're telling us open-sourcing LLMs will lead to wars
In recent months, we've witnessed an interesting narrative emerge around Generative AI: Open-source will lead to mass militarization of LLMs.
The leap from language prediction to alleged warfare capabilities seems remarkably sudden - and suspiciously convenient. Yet this is precisely the narrative being pushed by VCs, lobbies, and companies with "open" in their names. One might question whether these concerns about weaponization are genuine, or simply a calculated move to justify restricting and controlling this emerging technology.
Some recent news articles even state that China is tuning Llama 2 models for warfare...
Llama 2 models, folks... models scoring 18.5 on the Open LLM Leaderboard. Models that can't count the Rs in "strawberry".
Every time open source is attacked with claims that a foreign power is using it for evil, you can bet that people with an interest in keeping the technology to themselves are deeply involved in shaping these stories. Playing on fears, painting China as some kind of overlord using the latest generative AI technology to fight against us...
Then you look at LLMs coming from China, like Qwen, ranking twice as high as Llama 2, and you start to wonder why they would want you to believe they are using Meta models to fight us. Why would a nation with advanced AI capabilities choose to modify less capable models when it has access to superior technology?
When we hear claims about foreign powers "weaponizing" open source AI, it's worth considering who benefits from this narrative and has vested interests in maintaining control over emerging technologies.
As Edward Snowden points out, while genuine weapons - drone swarms and military robots - are already claiming lives in active conflicts, disproportionate attention and regulatory fervor are being directed toward generative AI models. These models, at their core, are tools of expression and creativity, precisely the kind of technological advancement that First Amendment principles were designed to protect.
The 1st Amendment exists and persists as a powerful notion deeply embedded in the American consciousness. Yet even this fundamental principle finds itself increasingly under pressure in subtle but persistent ways. Not through direct assault, but through legions of small compromises, each justified by some new "unprecedented" threat or emergency pushed by governments and people who claim to have our best interests at heart.
When it comes to AI, we're watching this same pattern unfold. Tools of expression and creativity are being reframed as potential weapons, their open development painted as a security risk. It's a familiar playbook: first create the fear, then offer control as the solution. All while actual AI-powered weapons systems advance with far less scrutiny or public debate.
To understand what is coming for all of us, we can look at the Internet: a medium that simultaneously liberated and constrained us. While it shattered geographical barriers and created unprecedented channels for global communication and knowledge sharing, it also evolved into the most sophisticated surveillance and control apparatus humanity has ever known.
Now we stand at a similar crossroads with generative AI. Like the internet before it, this technology represents another quantum leap in human expression and creativity. But we're already seeing the same pattern emerge - the very tools that could democratize creation and knowledge are being positioned as threats that need to be "responsibly" controlled and monitored. Under the banner of safety and security, we risk turning another instrument of potential freedom into yet another layer of digital control.
The pioneers of the internet dreamed of an open, borderless world of free information flow. That dream is now gone. Today's reality of data harvesting, surveillance, and digital restrictions serves as a cautionary tale. As we witness similar rhetoric being deployed around LLMs - particularly open-source ones - there is little we can do to fight a world already in motion. But open-source models offer a glimmer of hope. They put real power into the hands of people: the ability to access, understand, modify, and shape these models and tools without corporate oversight or centralized control.
When people can freely access model weights, fine-tune their own models, and build tools that serve their needs rather than someone else's agenda, we create a parallel path. One that doesn't lead to surveillance and control. One where the open source community is quietly building tools that answer to users, not shareholders. Maintaining transparency is essential to fight against a future where humans won't matter as much as they do today, and for that we need to count on everyone willing to contribute to open source.
Meta's transformation from the company that gave us the Cambridge Analytica mess to actually becoming one of the good guys has been the best redemption arc of recent years, and its steady stream of great open-source model and research releases has definitely helped smaller actors matter. But open source is a fragile concept that's increasingly being co-opted and manipulated. When companies slap an "open" label on their products while keeping the real innovations locked away, they're hollowing out what open source truly means. Yes, Meta and other tech giants are sharing models - and that's valuable - but open weights are what really matter.
In the end, we're undoubtedly heading toward a future where large language models will be embedded in military hardware. LLMs will be embedded within robots and drones, and in the shorter term used for deepfakes and mass multi-medium attacks (TV, web, phone, ...) - but open source is not the gateway to that. The military-industrial complex is, and there are worse actors at play, under direct contract with governments and militaries, helping to build that future. This fight isn't just about technology, or free open models - it's about preserving a space for human autonomy in an increasingly automated and policed world.