https://github.com/positive-intentions/chat

Is this a secure messaging app? Probably not… but I'd like to share some details about how my app works so you can tell me what I'm missing. I'd like to have wording in my app that says something like "most secure chat app in the world"… I probably can't do that because it doesn't qualify… but I want to understand why.

I'm not an expert in cyber security or cryptography; I'm sure there are many gaps in my knowledge in this domain.

Using JavaScript, I created a chat app. It uses peerjs-server to create an encrypted WebRTC connection, which is then used to exchange additional encryption keys from the cryptography functions built into browsers, adding a redundant layer of encryption. The key exchange is done Diffie-Hellman style over WebRTC (which can be considered secure even when performed over public channels). The algorithms are fairly easy to use and interchangeable as described here.
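
Roughly, the key agreement looks something like the sketch below (a simplified illustration using the browser's SubtleCrypto API; the exact curve, parameters and function names are assumptions and may differ from what the app actually does):

```javascript
// Sketch of the browser-native key agreement described above.
// The curve (P-256) and AES-GCM parameters are illustrative assumptions.

// Each peer generates an ECDH key pair.
async function generateKeyPair() {
  return crypto.subtle.generateKey(
    { name: "ECDH", namedCurve: "P-256" },
    false,                 // private key stays non-extractable
    ["deriveKey"]
  );
}

// The raw public key is what gets sent over the (already encrypted) WebRTC channel.
async function exportPublicKey(keyPair) {
  return crypto.subtle.exportKey("raw", keyPair.publicKey);
}

// Combine our private key with the peer's public key to derive a shared AES-GCM key.
async function deriveSharedKey(myKeyPair, theirRawPublicKey) {
  const theirPublicKey = await crypto.subtle.importKey(
    "raw",
    theirRawPublicKey,
    { name: "ECDH", namedCurve: "P-256" },
    false,
    []
  );
  return crypto.subtle.deriveKey(
    { name: "ECDH", public: theirPublicKey },
    myKeyPair.privateKey,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"]
  );
}
```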

  • I sometimes receive feedback like "JavaScript is inherently insecure". I disagree with this and have open-sourced my cryptography module; it's basically a thin wrapper around the vanilla crypto functions of a browser. A previous post on the matter.
  • Another concern for my kind of app (a PWA) is that the developer may introduce malicious code. This is an important point, which is why I open-sourced the project and give instructions for selfhosting. Selfhosting this app has some unique features: unlike many other selfhosted projects, it can be hosted on github-pages for free (instructions are provided in the readme). I'm also working on introducing a way for users to selfhost federated modules. A previous post on the matter.
  • To prevent things like browser extensions from running unauthorised code, the app uses strict CSP headers. Selfhosting users should take note of this when setting up their own instance (see the example policy after this list).
  • I've received feedback that the Signal/SimpleX protocols are great, etc. I'd like to compare that opinion with how my todo app demo works (all of this is experimental work-in-progress and far from finished). The demo shows the basic functionality of a simple decentralized todo list. This should already be reasonably secure, and I could add handlers for exchanging keys Diffie-Hellman style, which at this point is relatively trivial to implement. I think its simplicity could be a security feature.
  • The key detail that makes this approach unique is that, as a webapp, unlike other solutions, users have a choice of using any device/OS/browser.
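
For reference, here is a rough illustration of what such a strict policy can look like. This is not the app's actual policy, and the signalling server host is a placeholder; selfhosters would adapt the connect-src to point at their own peerjs-server:

```html
<!-- Illustrative example only, not the app's actual policy.
     "your-peerjs-server.example" is a placeholder for your own signalling server.
     On a static host the policy can be delivered via a meta tag; selfhosters with a
     web server in front can send it as an HTTP response header instead. -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self';
               script-src 'self';
               style-src 'self';
               connect-src 'self' wss://your-peerjs-server.example;
               img-src 'self' data:;
               object-src 'none';
               base-uri 'none'">
```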

I think if I stick to the principle of avoiding any kind of "required" service provider (myself included) and allowing the frontend and the backend to be hosted independently, I'm on track for creating a chat system with the "fewest moving parts". I hope you will agree this is true P2P, and I hope I can use this as a step towards true privacy and security. Security might be further improved by using a trusted VPN.

I created a threat model for the app in the hope that I could get a pro-bono security assessment, but understandably the project is too complicated for pro-bono work. I contacted Trail of Bits because of their work on SimpleX and they quoted me $50,000. The best I can offer is "open-source and communicating on reddit". (Note: I asked them if I can share those details… summarized response: the SOW is confidential, but I can share the quote.)

While there are several similar apps out there, I think mine takes a distinctly different approach, so it's hard to find best practices for the functionality I want to achieve, in particular security practices to use with P2P technology.

(Note: this app is an unstable, experimental proof of concept and not ready to replace any other app or service. It's far from finished and provided for testing and demo purposes only.)

  • positive_intentions@lemmy.ml (OP) · 3 months ago

    Thanks!

    I understand. Can you help me understand what I can do about this? I'd appreciate it if you could critique my approach:

    I'm putting all the weight of the initial exchange being secure on the cryptographically random ID. If you can exchange that on a channel that is secure (WhatsApp? QR code? SMS?), then the initial connection will establish the keys. The MITM there could be the peerjs-server (or even your ISP), but because the ID is cryptographically random, it would not be possible to predict who is who. (Of course it could be logging connections and IPs and figuring things out from other metadata, but if that's a concern, then you should selfhost a peerjs-server.)
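
    For context, generating that kind of ID in the browser is straightforward; a simplified sketch (the app's actual ID length and encoding may differ):

    ```javascript
    // Sketch: a cryptographically random peer ID generated in the browser.
    // The length and encoding are illustrative; the app's actual format may differ.
    function generateRandomId(byteLength = 32) {
      const bytes = new Uint8Array(byteLength);
      crypto.getRandomValues(bytes); // browser CSPRNG
      return Array.from(bytes, b => b.toString(16).padStart(2, "0")).join("");
    }

    // Shared out of band (QR code, SMS, etc.) so only the intended peer knows which ID to dial.
    const myId = generateRandomId(); // 64 hex characters of unguessable ID
    ```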

    I previously created something for sharing files by QR codes as described here. To enhance security further for when peers are together to exchange keys, I've taken that QR-code investigation further to create something that is able to transfer encryption keys fully offline.
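
    As a rough illustration of the offline flow: serialise the public key into a string, render that string as a QR code with any QR library, and import it on the other device after scanning. The JWK export and wrapper object below are assumptions, not the app's actual payload format:

    ```javascript
    // Sketch of offline key transfer via QR code.
    // The JWK export and the wrapper object are illustrative assumptions.
    async function publicKeyToQrPayload(publicKey) {
      const jwk = await crypto.subtle.exportKey("jwk", publicKey);
      // Render this string as a QR code; scanning it transfers the key with no network involved.
      return JSON.stringify({ type: "public-key", jwk });
    }

    async function qrPayloadToPublicKey(payload) {
      const { jwk } = JSON.parse(payload);
      return crypto.subtle.importKey(
        "jwk",
        jwk,
        { name: "ECDH", namedCurve: "P-256" }, // assumed curve, matching the earlier sketch
        false,
        []
      );
    }
    ```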

    • CameronDev@programming.dev · 3 months ago (edited)

      I am not really sure what the real solution is beyond creating an out-of-band method of validating the public key. Historically, this would be done by publishing your public GPG key to a 3rd-party key server. Most modern apps use a QR code (I don't know exactly how this works, it may require research) that you can scan when you physically meet, or share over a different medium (email, SMS, etc.).

      The problem with relying on the random number is that E can decrypt the message from A, and then re-encrypt it and send it to B. B won't know it has been inspected en route. So B could call A and tell them the random number, but it wouldn't actually be secure. Also, if later in the chat A were to tell B "My public key is XYZ", E could detect that and alter it to "My public key is ABC" before sending it on to B.

      If A can generate a hash of B's public key, and B also makes a hash, they can call each other and compare, and if the hashes don't match, E is listening. I think that is all you need: a way to present the public key to the users so they can validate it manually.
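
      Something like the sketch below would be enough in the browser (the formatting and grouping are just illustrative, to make the fingerprint easy to read out over a call):

      ```javascript
      // Sketch: SHA-256 fingerprint of a public key, grouped for reading out loud.
      // Details (raw export, hex encoding, grouping) are illustrative.
      async function publicKeyFingerprint(publicKey) {
        const raw = await crypto.subtle.exportKey("raw", publicKey);
        const digest = await crypto.subtle.digest("SHA-256", raw);
        const hex = Array.from(new Uint8Array(digest), b => b.toString(16).padStart(2, "0")).join("");
        return hex.match(/.{4}/g).join(" "); // e.g. "3fa9 01bc 77e2 ..."
      }
      ```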

      Aside, but I don't think it is a good idea for you to spend money on an audit yet. Spend some time trying to break your own system by creating the malicious E server. You can then tweak and adjust your scheme until E is either impossible or trivially detectable. Unless this becomes a large-scale venture, an audit isn't worth it, and I get the impression this is more of a learning exercise for yourself? Also, once you have finalised it, write up a paper on your scheme, something like: https://signal.org/docs/specifications/x3dh/. Crypto experts will be able to easily validate your scheme based on the paper.

      • positive_intentions@lemmy.ml (OP) · 2 months ago (edited)

        Here are my thoughts on a possible approach.

        It seems the concerns center around validating the keys.

        Solution 1 (generate new keys):

        • I could add functionality for regenerating keys.
        • Add functionality for exchanging the keys in various ways (QR, email, text, NFC).
        • The remote peer's app can import the exported key file and update the contact's keys.

        Solution 2 (validate keys), with a rough code sketch after this list:

        • A generates a hash of B's public key.
        • A sends a link to B (through some trusted medium).
        • The link opens the app's validation page with the public key hash encoded in the URL.
        • B generates a hash of their own public key (the one related to A).
        • B is shown a "success" response if the hash matches.
        • (And vice versa if wanted.)
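
        Here is how solution 2 could look in code (the route, query parameter name and hashing details are assumptions for illustration only):

        ```javascript
        // Sketch of solution 2. The route, parameter name and hashing details
        // are illustrative assumptions, not the app's actual implementation.
        async function hashPublicKey(publicKey) {
          const raw = await crypto.subtle.exportKey("raw", publicKey);
          const digest = await crypto.subtle.digest("SHA-256", raw);
          return Array.from(new Uint8Array(digest), b => b.toString(16).padStart(2, "0")).join("");
        }

        // A's side: build a link to share over a trusted medium.
        async function buildValidationLink(contactPublicKey) {
          const keyHash = await hashPublicKey(contactPublicKey);
          return `https://example.github.io/chat/#/validate?keyHash=${keyHash}`; // hypothetical route
        }

        // B's side: the validation page compares the hash in the URL with B's own key.
        async function validateAgainstLink(myPublicKey, link) {
          const expected = new URL(link).hash.split("keyHash=")[1];
          const actual = await hashPublicKey(myPublicKey);
          return expected === actual; // true => show "success", false => possible MITM
        }
        ```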

        Spend some time trying to break your own system

        I do try, but I'm sure I've developed a bias about it being secure, so I might not be seeing all the possible scenarios. This is why feedback is important for me at this stage of development.

        Thanks for the link to that spec. Having that kind of document seems to be pretty unique among applications. Can you tell me what that kind of document is called?

        I was recently pointed to something called ProVerif. It seems to have a way of describing an implementation, and it has some functionality to validate/detect security risks. I've only just come across it, and while it sounds too good to be true, it looks appropriately complicated. Do you have any thoughts on it (or other tools like that)?

          • CameronDev@programming.dev · 3 months ago

          I think both solutions are fine, but 2 might offer the best usability?

          I think those docs are typically called white papers.

          It's hard to get past those biases, but it's a valuable skill to critically review your own work. And it feels better on your wallet to find bugs before paying for a third-party review :)

          I'm not an expert in this field, so I have never heard of ProVerif; it definitely looks interesting though, and it wouldn't hurt to try?

          • positive_intentions@lemmy.ml (OP) · 3 months ago

            I added a section in the contact details page for validating the public key hash.

            The experience could be improved in several ways with things like QR codes, but for now I think it's a nice addition.

            I will try to set aside some time for ProVerif. I'm also investigating what is needed for CISA certification. Even without the certification, it'll be interesting to see what I can do to security-audit my own code (even though it looks like the assessment isn't valid without an objective observer… but I could share it, and someone else could say it looks good, and the overhead for them to assess my app could be less).