“Security often benefits from simplicity. Complex systems are usually very fragile, and fragile systems are vulnerable to exploitation,” said American cryptographer and Signal founder Moxie Marlinspike during the Kyiv International Cyber Resilience Forum 2026.
This is the legendary developer’s first visit to Ukraine since the start of the full-scale Russian invasion. During the discussion, particular attention was given to the Signal messenger app. The app has become so widely used by the Ukrainian Armed Forces that it has earned the nickname Blue. On the frontline, phrases like “send it on Blue” or “I’m on Blue” are common. Soldiers prefer the app because it does not feel like a cumbersome security tool: it is simple, fast, and confidential. The key challenge is balancing security, privacy, and usability for millions of soldiers who must communicate safely.
What follows are Moxie Marlinspike’s own words.
On the balance between convenience and security
I believe the two can be reconciled. It is possible to create technology that protects privacy while remaining user-friendly; these goals are not contradictory.
In fact, security often benefits from simplicity. Complex systems tend to break, and everything that breaks is vulnerable. That’s why products should be simple to use and simultaneously secure.
For millions of soldiers, it’s essential to have a tool that doesn’t require constant attention to settings. It should just work—and be reliable.
Privacy through policy no longer works
We live in a world where most digital services rely on ‘privacy through policy.’ Companies simply promise in their terms of service not to misuse your data. They make a legal commitment to users: everything will be safe.
But there is a huge gap between a promise and real protection. In everyday life, a data leak can cost money or reputation. In defence, the cost can be human lives.
There are fundamentally two approaches to security. The first relies on trusting the ‘perfectly secured computer’: set it up correctly, guard it well, trust the company, and it will be secure. But 30 years of practice show this is an illusion—any system can be hacked.
The second approach protects the information itself. Data is encrypted in such a way that even the service provider cannot access it. Even if servers are hacked or a court demands access, there’s nothing to hand over—no plaintext data exists.
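As an illustration of this second approach, here is a deliberately toy sketch in Python: the two endpoints share a secret key, the server only ever handles ciphertext, and nothing it stores can be decrypted without that key. For brevity this uses a one-time pad; it is not Signal’s actual protocol, which uses X3DH key agreement and the Double Ratchet.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy one-time pad: XOR each plaintext byte with a key byte."""
    assert len(key) >= len(plaintext), "key must be at least as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

# Sender and recipient share the key; the server never sees it.
key = secrets.token_bytes(64)
msg = b"meet at 06:00"
ciphertext = encrypt(key, msg)  # this ciphertext is all the server stores

assert ciphertext != msg                 # server holds no plaintext
assert decrypt(key, ciphertext) == msg   # only key holders recover the message
```

Even if the server in this sketch is seized or subpoenaed, the only thing it can hand over is the ciphertext, which is useless without the key held on the endpoints.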
The problem with ‘privacy through policy’ is that it depends on many factors: the company can be hacked, coerced legally, change its rules, or accidentally leak data. Promises alone are insufficient. Real security comes from privacy by design, where privacy is built into the technology itself.
There are few consumer services with this approach. Examples include Signal and WhatsApp, where encryption is fundamental to the service’s operation.
Artificial intelligence: a confessional of ideas
I am not a politician or military analyst—I am an engineer. I look at AI not through the lens of war but technically, and what worries me most is the centralisation of data.
Today there are several so-called ‘advanced’ AI models—powerful systems only accessible via API. That means when you ask a question, you are sending your data to the company’s servers, where it is processed.
For example, OpenAI’s GPT has hundreds of millions of users worldwide. It is likely that the company controls one of the world’s most sensitive datasets, as people use AI to express personal, professional, and incomplete thoughts.
AI encourages openness—you don’t just enter a dry query; you refine, doubt, brainstorm, and share ideas not ready for public release. You essentially confess your thoughts. As a result, unprecedented amounts of sensitive information are concentrated in a few large players.
Running such models locally is extremely expensive. Most people and companies cannot afford it, so everyone uses cloud services, giving away their data. This creates a dilemma: either you access powerful AI but lose control over your data, or retain full control but lack resources.
Projects like Confer aim to combine the two: a ‘Signal for AI.’ You can use cloud AI while keeping chats technically private—so that not even the provider can read your conversation history. The future of AI isn’t just who builds the smartest model; it’s who builds one with privacy by design, not just policy.
Telegram: why it’s not truly encrypted
Telegram is not fully encrypted, despite years of marketing claims. Messages are stored on cloud servers, not just on your device: every message you have ever sent or received sits there in a form Telegram can read. If you log into a new device, your entire history appears. That would not happen in a properly end-to-end encrypted system, where only you hold the keys.
Since Telegram technically can read your messages, governments can demand access through legal requests. Telegram has Russian origins, and parts of the team have connections in Russia. So it’s unrealistic to assume that Russia—or any state that prioritises digital control—cannot access these messages.
Millions continue to use Telegram, often out of habit or convenience, despite the security risks. The key point: if a service can technically read messages, it is not end-to-end encrypted.
Less is safer
Security does not start with expensive tools—it starts with simplification. The bigger and more complex a system, the more potential vulnerabilities it has. Keep systems small and clean, with fewer components and less logic. Reducing size and complexity reduces the attack surface and the cost of protection. Security must be embedded in the architecture, in every line of code.
With cryptography and data minimisation, even if a system is compromised, the attacker gains nothing valuable. Strong walls are not enough—you must ensure there’s nothing behind them worth stealing.
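Data minimisation can be sketched in a few lines. In this toy example (not any specific product’s scheme), a service stores a keyed digest of a sensitive identifier instead of the identifier itself, so a dump of the stored table exposes no raw data; the function and parameter names here are illustrative assumptions.

```python
import hashlib
import hmac
import secrets

# Secret kept outside the database (e.g. in an HSM); without it,
# a leaked table of digests is far less useful to an attacker.
pepper = secrets.token_bytes(32)

def minimised(identifier: str) -> str:
    """Store a keyed SHA-256 digest instead of the raw identifier.

    Real-world contact discovery needs stronger tools (e.g. private
    set intersection); this only illustrates the minimisation idea.
    """
    return hmac.new(pepper, identifier.encode(), hashlib.sha256).hexdigest()

record = minimised("+380 50 123 4567")
assert "+380" not in record   # no raw identifier ends up in storage
assert len(record) == 64      # fixed-size hex digest, nothing more
```

The point of the sketch is the last two assertions: what the attacker could steal from the table carries no plaintext identifier at all.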
Advice for Ukrainian developers: “Attack yourself first”
The approach to security is universal, in peacetime and wartime. First, clearly identify potential attackers—not abstract ‘hackers,’ but the type of adversary, their resources, skills, and tools.
Next, think like an attacker. Apply those methods to your own product. Test its resilience. In the Ukrainian context, use your offensive cyber capabilities to test your own defence. It’s better to find weaknesses yourself than let the enemy do it.