I’m currently using Cloudflare to provide SSL for my self-hosted site. The site sits behind a residential connection that blocks incoming traffic on commonly used ports, including 80 and 443. It’s a perfectly fine and reasonable solution that does what I want, but I’m looking to try something different.

What I would like to try is using Let’s Encrypt on a non-standard port. I understand there are plenty of good reasons not to do this, mainly that some places, such as workplaces, may block higher-numbered ports for security reasons. That’s fair, but I’m still interested in learning how to encrypt uncommon ports with Let’s Encrypt.

Currently I am using Nginx Proxy Manager (NPM) to handle Let’s Encrypt certificates. It’s able to complete the DNS challenge required to prove I own the domain name, and it handles automated certificate renewals as well. NPM acts as a reverse proxy, guiding outside connections from Cloudflare on port 5050 to port 80 on NPM. From there the connection gets sent locally to port 81, which is the admin web page for NPM (I’m just using it as a page to test whether the connection is secured).
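To make that chain concrete, here’s roughly how I test each hop with curl (the LAN IP below is a placeholder for my proxy host):

# hop 1: the public side, through Cloudflare on the non-standard port
curl -v https://DOMAINNAME.COM:5050/
# hop 2: NPM directly on the LAN, on the port it listens on
curl -v http://192.168.1.10:80/
# hop 3: the NPM admin page I'm using as a test target
curl -v http://192.168.1.10:81/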

Whenever I enable Let’s Encrypt SSL and try to connect to my site, the connection times out and nothing happens. I’m not sure if Let’s Encrypt expects to reach ports 80/443 or if there is something wrong with my reverse proxy settings that breaks the encryption along the way. Most discussions just assume ports 80/443 are open, which is fair since that’s the most common situation. The few sites that do discuss uncommon ports are either several years old or people reporting success without sharing any details. I’m more or less at the end of what I can find by searching.

What I’m hoping to learn from all this is how encryption and reverse proxies work together, because those two things have been a struggle for me to understand throughout this learning process. I would appreciate it a lot if anyone had any resources or experiences to share.

  • CosmicTurtle0@lemmy.dbzer0.com · 1 day ago

    I am doing exactly what you’re describing, with the exception of Nginx Proxy Manager. I personally could not get it working and prefer the control of manually configuring the proxy. Sometimes to my own stress.

    I run Home Assistant and expose it on a non-standard port. I use certbot to renew my SSL cert and have never had issues.

    How are you doing your verification? If you are using the HTTP ACME challenge, then yes, you must have the standard port 80 open for the verification to occur.

    If you are using DNS verification, it does not require any port to be open, as the verification is done at the DNS provider.

    I have Cloudflare DNS and do DNS verification.
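    In case it helps, here’s a rough sketch of what a certbot DNS-01 run with the Cloudflare plugin looks like (not necessarily my exact command; the domain and credentials path are placeholders):

    # cloudflare.ini holds a scoped API token, e.g. dns_cloudflare_api_token = <token>
    sudo certbot certonly \
      --dns-cloudflare \
      --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
      -d example.com
    # renewals then happen with the usual certbot renew, still without opening any ports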

    • confusedpuppy@lemmy.dbzer0.com (OP) · 1 day ago

      For verification I used the built-in certificate manager in Nginx Proxy Manager. I generate an API key from Cloudflare (a DNS Zone:Zone:Edit key) scoped to the domain I am using, then choose DNS verification in Proxy Manager and paste the API key into the edit box. This has been successful every time.

      Do you use Cloudflare Tunnel, or are you using Cloudflare as a dynamic DNS? I’ve had issues with certbot, but I think I just wasn’t using it properly. What process did you use for DNS verification?

      • CosmicTurtle0@lemmy.dbzer0.com · 1 day ago

        I’m using Cloudflare DNS. What do you mean by dynamic DNS?

        I don’t use Tunnel or anything other than just Cloudflare DNS.

        I don’t know how Proxy Manager works, but if it’s just an abstraction of certbot, then all you need to give it is your domain name and your Cloudflare API key.

        You should check to see if it requires a global API key. Proxy Manager may not have been updated to use the more secure zone API key.
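        One quick way to check which kind of key you’re holding: a scoped API token can be verified against Cloudflare’s token-verify endpoint (the token below is a placeholder), while the legacy global API key uses the older X-Auth-Email/X-Auth-Key headers instead.

        # a scoped token should come back with "status": "active"; a global API key will not work with Bearer auth
        curl -s https://api.cloudflare.com/client/v4/user/tokens/verify \
          -H "Authorization: Bearer YOUR_API_TOKEN"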

  • khalil@beehaw.org · 2 days ago

    Just did a quick test: the certs do not bind to the port, only to the domain/FQDN. So in short, your reverse proxy/application is doing something wrong. Do you have the cert files? Can you test them inside an ubuntu:24.04 Docker container with the script below? (You’ll need to copy the two cert files in.) The script does TLS and acts as the application all in one, but it could just as well be two pieces, one acting as the reverse proxy; it doesn’t make a difference from the point of view of the client.

    Let’s Encrypt doesn’t do anything on ports 80/443 unless you’re using the HTTP challenge, AFAIK. And once you have the certs, Let’s Encrypt isn’t really involved in the connection anymore, so that can’t be the issue. Test by using curl against the script below, or against your own infrastructure (each step of the chain: the reverse proxy, the application IP, etc.).

    But in short I think your reverse proxy configuration is just wrong, or you’re accessing it the wrong way on the client side. For example, using https://example.com/ instead of https://example.com:5050/.

    # docker run --rm --net host -it ubuntu:24.04
    # then install python3 and run this
    import http.server
    import ssl
    
    PORT = 5201  # Change to your desired port
    CERT_FILE = "/fullchain.pem"  # Path to your certificate file
    KEY_FILE = "/key.pem"    # Path to your private key file
    
    # Create a basic HTTP request handler
    class SimpleHTTPRequestHandler(http.server.SimpleHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>Welcome to the secure static server!</h1>")
    
    # Set up the HTTP server
    httpd = http.server.HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler)
    
    # Set up SSL context
    ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ssl_context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)
    
    # Wrap the server socket with SSL
    httpd.socket = ssl_context.wrap_socket(httpd.socket, server_side=True)
    
    print(f"Serving HTTPS on port {PORT}")
    httpd.serve_forever()
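    If you want to poke at it from the client side, something like this works (the hostname and IP are placeholders); --resolve presents the real certificate name to TLS without touching DNS:

    # map the hostname on the cert to the test server's IP just for this request
    curl -v --resolve yourdomain.example:5201:YOUR_SERVER_IP https://yourdomain.example:5201/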
    
    • confusedpuppy@lemmy.dbzer0.com (OP) · 1 day ago

      When I get the motivation again I will give this a try. A while ago I was wondering if a tool like this existed so it’s nice to see it pop up now. Thank you for this.

  • Successful_Try543@feddit.org · 4 days ago (edited)

    I assume you use certbot for certificate management. Its documentation lists the option --http-01-port, which defaults to 80, the HTTP port that has to be reachable during the certificate generation procedure. Hence, I assume this should be specified according to your needs.
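    For reference, a rough sketch of what that looks like with the standalone plugin (domain and port are placeholders). Note that, per the certbot documentation, this flag only changes the port certbot itself listens on; the Let’s Encrypt validation server still connects to port 80 from the outside, so it is mainly useful in combination with a port forward:

    # listen locally on 8080 while something else forwards external port 80 to it
    sudo certbot certonly --standalone --http-01-port 8080 -d example.com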

    • confusedpuppy@lemmy.dbzer0.com (OP) · 4 days ago

      Nginx Proxy Manager has been handling certs for me. I’m not sure exactly how it does that, since it’s packaged in a Docker container. I can only assume it does something similar to Caddy, which also automatically handles certificate registration and renewals. So probably certbot.

      All I know is that NPM has an option for DNS challenges which is how I got my certs in the first place.

  • LedgeDrop@lemm.ee · 4 days ago

    I’ve got a similar setup and everything works, so I can confirm that your assumptions are sound.

    My solution is Kubernetes-based, so I use cert-manager to issue/create the Let’s Encrypt certificate (using DNS as the verification mechanism), which then gets fed into a Traefik reverse proxy. Traefik is running on a non-standard port, which I can access from the outside world.

    I’d suggest tearing your current system down and verifying that everything is configured correctly.

    For example:

    • Take a look at the SSL cert. Is it generated properly?
    • Look at the reverse proxy. Is it using the proper SSL cert, and is it properly formatted? (I’ve found curl --verbose --insecure https://... to be helpful; see the sketch after this list.)
    • Maybe add a static file (e.g. robots.txt) to nginx. This would let you see whether the problem is between the outside world and nginx, or between nginx and your service.
    • You can also use the “snake oil” cert in a pinch. It’s an insecure SSL cert, but it would let you confirm that your nginx is configured properly, which would point to the Let’s Encrypt cert (or that process/payload) as the issue.
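    A rough sketch of those checks from the command line (hostname and port are placeholders for your own setup):

    # show the certificate the proxy actually presents on the non-standard port
    openssl s_client -connect example.com:5050 -servername example.com </dev/null \
      | openssl x509 -noout -subject -issuer -dates
    # talk to the proxy while ignoring certificate errors, to separate TLS problems from proxy problems
    curl --verbose --insecure https://example.com:5050/robots.txt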

    … and not to rob you of this experience, but you might want to look into Cloudflare Tunnels. They allow you to run services within your network but have them exposed/accessible directly through Cloudflare. It’s entirely secure (actually more so than your proposed system) and you don’t need to mess around with SSL.

    • confusedpuppy@lemmy.dbzer0.com (OP) · 1 day ago (edited)

      I’ll give your suggestions a try when I get the motivation to try again. I’ve sort of burnt myself out at the moment and would like to continue with other stuff.

      I am actually using a Cloudflare Tunnel with SSL enabled, which is how I was able to achieve that in the first place.

      For the curious here are the steps I took to get that to work:

      This is on a Raspberry Pi 5 (arm64, Raspberry Pi OS/Debian 12)

      # Cloudflared -> Install & Create Tunnel & Run Tunnel
                       -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-local-tunnel/
                          -> Select option -> Linux
                          -> Step 4: Change -> credentials-file: /root/.cloudflared/<Tunnel-UUID>.json -> credentials-file: /home/USERNAME/.cloudflared/<Tunnel-UUID>.json
                    -> Run as a service
                       -> Open new terminal
                       -> sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
                       -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/as-a-service/
                    -> Configuration (Optional) -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/configuration-file/
                       -> sudo systemctl restart cloudflared
                    -> Enable SSL connections on Cloudflare site
                       -> Main Page -> Websites -> DOMAINNAME.COM -> SSL/TLS -> Configure -> Full -> Save
                          -> SSL/TLS -> Edge Certificates -> Always Use HTTPS: On -> Opportunistic Encryption: On -> Automatic HTTPS Rewrites: On -> Universal SSL: Enabled
      

      Cloudflared complains about ~/.cloudflared/config.yml and /etc/cloudflared/config.yml not matching. Whenever I make any changes, I just edit ~/.cloudflared/config.yml, run sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml again, and then sudo systemctl restart cloudflared.

      The configuration step is just there as reference for myself, it’s not necessary for a simple setup.
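      For anyone curious what that configuration file ends up looking like, here is a rough sketch of a minimal config.yml (placeholders throughout; the service line should point at whatever port NPM is actually listening on):

      # write the config, then copy it and restart the service as described above
      cat > ~/.cloudflared/config.yml <<'EOF'
      tunnel: <Tunnel-UUID>
      credentials-file: /home/USERNAME/.cloudflared/<Tunnel-UUID>.json
      ingress:
        - hostname: DOMAINNAME.COM
          service: http://localhost:80
        - service: http_status:404
      EOF
      sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
      sudo systemctl restart cloudflared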

      The tunnel is nice and convenient. It does the job well. I just have a strong personal preference not to depend on large organizations. I’ve installed Timeshift for backup management so I can easily revisit this topic later when my brain is ready.

  • Boomkop3@reddthat.com · 4 days ago

    TLS is not picky. As long as you’ve got your certificate, you can use it on pretty much any port with whatever content. It doesn’t even have to be HTTP.

    • confusedpuppy@lemmy.dbzer0.com (OP) · 4 days ago

      That’s what I thought. NPM is handling the certs just fine.

      Could it be that I’m setting up the reverse proxy wrong? Whenever I enable SSL on that reverse proxy, the connection just hangs and drops after a minute. I don’t understand why it does that.

  • originalucifer@moist.catsweat.com · 4 days ago

    I was under the impression SSL is port-agnostic, except in configuration where you apply it to a service on a port.

    80/443 are just defaults… configure it right, and it shouldn’t matter what port you need it on.