On its 10th anniversary, Signal’s president wants to remind you that the world’s most secure communications platform is a nonprofit. It’s free. It doesn’t track you or serve you ads. It pays its engineers very well. And it’s a go-to app for hundreds of millions of people.
Signal does not collect metadata.
https://signal.org/blog/sealed-sender/
That amounts to "trust me bro", since nobody actually knows what the server does with the data.
You don't have to trust the server, and shouldn't have to, if the client is doing proper E2E, because then you know the maximum amount of metadata the server can have.
Your phone number is the metadata that's not encrypted; that's literally the whole problem here. The Signal server is able to harvest graphs of which phone numbers interact with one another.
Some identifier is unavoidable for push notifications to work. The server needs to know which phone to send a message to, after all. Even if it doesn't use Google's services, it would still need a way to know which device has new messages when that device checks in. If it's not a phone number, it's going to be some other kind of ID. Messages need a recipient.
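To make the routing point concrete, here's a minimal sketch (purely hypothetical, not Signal's actual code) of why a relay server needs some opaque recipient identifier to queue and deliver messages, and why nothing forces that identifier to be a phone number:

```python
# Hypothetical sketch: a relay server only needs *some* opaque recipient
# identifier to queue and deliver messages. Nothing here requires that
# identifier to be a phone number.
from collections import defaultdict

class RelayServer:
    def __init__(self):
        # recipient_id -> list of pending (encrypted) messages
        self.mailboxes = defaultdict(list)

    def accept(self, recipient_id: str, ciphertext: bytes) -> None:
        """Queue an encrypted message for whoever owns recipient_id."""
        self.mailboxes[recipient_id].append(ciphertext)

    def fetch(self, recipient_id: str) -> list[bytes]:
        """Called when a device checks in; returns and clears its queue."""
        return self.mailboxes.pop(recipient_id, [])

server = RelayServer()
server.accept("a3f9c2", b"\x8d\x01...")  # opaque random ID, not a phone number
pending = server.fetch("a3f9c2")
```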
Also, Signal's goal is protecting conversations for normies, not being bulletproof enough to run the next Silk Road at the cost of usability. Signal wants to upgrade people's SMS messaging and make encryption the norm, and you have to make some sacrifices for that. Phone numbers were a deliberate decision so that people can just install Signal and start using E2E texting immediately.
If you want something really private you should be using Tor- or I2P-based solutions, because those are the only systems that can reasonably hide both source and destination. Signal has your phone number and IP address, after all. They could track your every movement.
Most people don't need to hide who they talk to; they want privacy for their conversations and their content. Solutions with perfect anonymity between users are hard to understand and use for the average person, who is Signal's target audience.
The identifier absolutely does not need to be your phone number, and plenty of other apps are able to do push notifications without harvesting personal information from the users.
Meanwhile, normies don’t need Signal in the first place since e2ee primarily protects you from things like government agencies snooping on your data.
Just a side note, but both SimpleX Chat and Briar are free of unique identifying IDs.
SimpleX Chat uses hash tables. It still relies on centralized servers (which you can self-host), but you can use the built-in Tor functionality to hide your IP.
Briar is totally decentralized. All messages go directly over Tor, but it can also use WiFi and Bluetooth. It supports group content types such as Forums and Blogs. The downside is that your device needs to be online to receive messages. You can also run a Briar Mailbox on an old phone to receive messages more reliably.
With 'sealed sender', your phone number, or any other identifying information, is not included in the metadata on the envelope; only the recipient's ID is visible, and it's up to the recipient's client to validate the sender information that is inside the encrypted envelope. It looks like a step in the right direction, though I don't use Signal enough to have looked into auditing it myself.
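For anyone curious what that envelope structure means in practice, here's a rough illustrative sketch of the sealed-sender idea (not Signal's actual wire format; the encrypt, decrypt, and is_known_sender callables are placeholders): the server only ever sees the recipient's ID, and the sender's identity is validated by the recipient after decryption.

```python
# Illustrative sketch of the sealed-sender idea, not Signal's real wire format:
# the outer envelope carries only the recipient's ID; the sender's identity
# travels *inside* the encrypted payload and is validated by the recipient.
import json
from dataclasses import dataclass

@dataclass
class SealedEnvelope:
    recipient_id: str   # the only identifier the server can see
    ciphertext: bytes   # sender identity + message, opaque to the server

def seal(sender_id: str, recipient_id: str, message: str, encrypt) -> SealedEnvelope:
    # encrypt() is a placeholder for the recipient-keyed encryption step
    inner = json.dumps({"sender": sender_id, "body": message}).encode()
    return SealedEnvelope(recipient_id, encrypt(inner))

def open_sealed(env: SealedEnvelope, decrypt, is_known_sender) -> str:
    # decrypt() and is_known_sender() are placeholders for the recipient's
    # decryption and sender-certificate validation logic
    inner = json.loads(decrypt(env.ciphertext))
    if not is_known_sender(inner["sender"]):
        raise ValueError("unknown or invalid sender")
    return inner["body"]
```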
Again, this is a trust-based system because you don't know what the server is actually doing. The fact is that the server collects enough information to trivially make the connection between phone numbers and the connections on the network. If a "trust me bro" from Moxie is good enough for you, that's of course your prerogative.
You're correct that if you use the system the way it used to work they can trivially build that connection, but (and I know this is a big assumption) if it now works the way they say it does, they no longer have the information to do that, because the client doesn't actually authenticate to the server to send a message. Yes, with some network tracing they could probably still work out that you're the same client that logged in to read messages, and that's certainly a concern. I would prefer to see a messaging app that uses cryptographic keys as the only identifiers, and uses different keys for different contact pairs, but given their general architecture it seems they've tried to deal with the issue.
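As a rough illustration of the "different keys for different contact pairs" idea (purely a sketch, not any existing app's scheme), each pair of users could derive a distinct conversation identifier from their public keys, so the server never sees one stable identity reused across all of someone's contacts:

```python
# Speculative sketch only: derive a distinct ID per contact pair so no single
# stable identity is reused across all conversations. Names and the salt are
# made up for illustration.
import hashlib, os

def pairwise_id(my_pubkey: bytes, their_pubkey: bytes, salt: bytes) -> str:
    # Sort so both sides compute the same value for the same pair.
    a, b = sorted([my_pubkey, their_pubkey])
    return hashlib.sha256(salt + a + b).hexdigest()

alice_pub, bob_pub, carol_pub = os.urandom(32), os.urandom(32), os.urandom(32)
salt = b"example-shared-salt"
print(pairwise_id(alice_pub, bob_pub, salt))    # one ID for Alice<->Bob
print(pairwise_id(alice_pub, carol_pub, salt))  # a different ID for Alice<->Carol
```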
Assuming that you want to use a publicly accessible messaging app, do you have any ideas about how it should be architected? The biggest issue I see is that the client runs on your phone, and unless you’ve compiled it yourself, you can’t know what it’s actually doing.
Again, everything you say is based purely on faith. As you acknowledge, the design of the system is such that the people operating the server can trivially build out graphs of user connections. All the same arguments people apply to not trusting server-side encryption apply equally to metadata.
Meanwhile, there are plenty of examples of messaging apps that don't require phone numbers: Matrix, Wire, and SimpleX Chat are just a few. Being able to build your own client is also important, and there is a concept of reproducible builds which allows people to be reasonably sure that the binary being shipped is compiled from the source that's published. These are solved problems, and there is no technical reason for Signal to do what it's doing.
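The reproducible-build check boils down to building the client from the published source and comparing your artifact against the shipped binary. A toy sketch of the comparison step (file names are placeholders, and real-world checks also have to account for signing data, so the actual process is more involved):

```python
# Toy sketch of verifying a reproducible build: hash your own build and the
# shipped binary, then compare. File names below are placeholders.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

local = sha256_of("my-local-build.apk")        # built from the published source
published = sha256_of("store-downloaded.apk")  # the binary actually shipped
print("reproducible" if local == published else "mismatch, investigate")
```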
I agree that them having users' phone numbers isn't ideal. There are other identifiers they could use that would work just as well. However, both the client and server are open source, so you can build at least the client yourself. If you can satisfy yourself that it does not leak your ID when sending messages, then you don't need to trust the server, as it does not have the information to build a graph of your contacts. Sealed sender was announced in 2018, so it's had time to be tested.
Don’t get me wrong, the fact they require a phone number at all is a huge concern, and the reason I don’t really use it much, but the concern you initially stated was addressed years ago and you can build the client yourself to validate that.
I'm talking about the information the server has. The encrypted envelope has nothing to do with that. You register with the server using your phone number; that's a unique identifier for your account. When you send messages to other people via the server, it knows which accounts you're talking to and what their phone numbers are.
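To spell out the concern: if the server logged sender/recipient pairs per message, reconstructing a social graph from that metadata alone would be trivial. A made-up sketch with fake numbers:

```python
# Hypothetical sketch of the concern above: a server that logs (sender,
# recipient) pairs can trivially reconstruct who talks to whom.
from collections import defaultdict

delivery_log = [                 # made-up example data
    ("+15550001", "+15550002"),
    ("+15550001", "+15550003"),
    ("+15550002", "+15550003"),
]

graph = defaultdict(set)
for sender, recipient in delivery_log:
    graph[sender].add(recipient)
    graph[recipient].add(sender)

print(dict(graph))  # the contact graph, recovered purely from metadata
```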
Whilst I absolutely agree it’s correct to be skeptical about it, the ‘sealed sender’ process means they don’t actually know which account sent the message, just which account it should be delivered to. Your client doesn’t even authenticate to send the message.
Now, I'm just going on what they've published about the system, so either I could be completely wrong, or they could be misleading people, but it does look like they've tried to address the very issue you've been pointing out. Obviously it'd be better if they didn't have your phone number at all, but this does seem to decouple it in a way that means they can't build a connection graph.
The problem is that there is no way to verify any of this. You’re just putting trust into people operating this service. That’s not how security is supposed to work.
Strictly you’re having to trust the build of the client rather than the people running the server. If the client doesn’t send/leak the information to the server, the people running the server can’t do anything with it. It’s definitely still a concern, and, if I’m going to use a hosted messaging app, I’d much rather see the client built and published by a different group, and ideally compile it myself. Apart from that I’m not sure there’s any way to satisfy your concerns without building and running the server and client yourself.
True, however your claim lacks evidence. They have your phone number and a few timestamps. That isn't going to help much.
My claim is that privacy should not be based on trust. This appears to be a very difficult concept for people in this thread to understand.
You always will have to trust something at some level.
Yeah, you trust that the encryption algorithm is designed correctly and that it doesn't leak data because many people have audited it and nobody has found a flaw in it. You absolutely should not have to trust the people operating the servers, however. If you can figure out why E2EE is important, then I'm sure you'll be able to extrapolate from that why metadata shouldn't be seen by the server either.
Signal has been compelled by courts to provide all the information they have for specific phone numbers [0][1]. The only data they can provide is the date/time a profile was created and the last date (not time) a client pinged their server. That's it, because that's all the data they collect.
Feel free to browse the evidence below; they worked with the ACLU to be able to publish the documents, since they were served a gag order barring them from talking about the request publicly [2].
[0] https://signal.org/bigbrother/
[1] https://www.aclu.org/news/national-security/new-documents-reveal-government-effort-impose-secrecy-encryption
[2] https://www.aclu.org/sites/default/files/field_document/open_whisper_documents_0.pdf#page=8
Once again, even if this is the way things worked back in 2016, there is no guarantee they still work like that today. This is the whole problem with a trust-based system: you are trusting the people operating the server. It's absolutely shocking to me that people have such a hard time accepting this basic fact.
True but I find the opposite end of the spectrum hard to believe. Extraordinary claims require extraordinary proof.
What is known is that government agents from countries like Iran, China, and Russia are actively spreading misinformation. That's not to say that you are a government agent, but you should doubt the arguments on both sides. For instance, using Signal is way better than not using an audited encrypted messenger; often I see people jump to worse platforms. I think it is important to understand the problems with Signal.
It’s well known that the US and other western countries actively spread misinformation. It’s also known thanks to Snowden that the US regime harvests personal data aggressively. Anybody who puts blind faith into a US based security company is frankly an imbecile.
@yogthos @possiblylinux127
Sad but true. It’s definitely concerning.
I'm not very tech-savvy, and that article looks very nice, but it's kind of old, and it's true that they haven't been as transparent (or as frequently audited) as other services, and they still require a phone number to set up an account, even if you can switch to only using a username later. Also, they removed the encrypted database, and Molly brings that back, which is the main reason I use it.
Another thing I don't like about Signal is how ferociously they've tried to shut down forks in the past, and how they don't say that you need Google Play Services for it to work properly.
Sadly it's the only "privacy-conscious" service I've managed to get most of my family and friends to use, after trying for years.
They only shut down forks that violate Signal branding. Mozilla does the same thing with Firefox.
It is libre, so if you fork it there is nothing they can do. Also, if they were really hostile they would have used a non-libre license or made it entirely proprietary.