
Deploy from local to production (self-hosted)

109 points by bypirob 20 hours ago | 59 comments

Nelkins 17 hours ago

All of these projects lack server hardening. I think for most devs it would not be a great idea to just point at a server and let 'er rip. I have a pretty extensive cloud-init script I use for this when deploying to a VPS. I workshopped it by finding existing scripts and having a back and forth with Claude. Feel free to roast it :)

https://gist.github.com/NatElkins/20880368b797470f3bc6926e35...

wongarsu 16 hours ago

There is a weird dynamic going on where defaults have become "good enough": ssh with public key configured and password auth disabled, services defaulting to only listening on localhost, etc., but those improvements have also caused people to pay much less attention to server hardening, or to checking whether any of their services might have unsafe defaults.

The world is made much better by safer defaults, but they also lead to a degree of complacency.
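
To make those defaults explicit rather than assumed, a minimal sshd hardening drop-in might look like the following; the path and the particular settings are illustrative, not taken from any project discussed here:

    # /etc/ssh/sshd_config.d/10-hardening.conf (illustrative)
    PasswordAuthentication no
    PermitRootLogin no
    PubkeyAuthentication yes
    # apply with: systemctl reload ssh   (or sshd, depending on distro)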

Wilder7977 16 hours ago

I had a quick look; that "deploy" user can run any sudo command without a password? It's basically root at that point. I think forcing a password (maybe with a lax timeout if you don't want to enter it so often) is a much better option. Correct me if I'm wrong, but I also see that there are secrets in the file (e.g., Gmail SMTP creds). Make sure the file is read-protected at a minimum. If those are your Gmail app credentials, they are pretty serious and obtainable just by reading the file (same goes for the postfix config).
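
For illustration, a sudoers drop-in along the lines suggested here; the "deploy" username and the 60-minute timeout are assumptions, and any such file should be checked with visudo before relying on it:

    # /etc/sudoers.d/deploy (hypothetical; validate with: visudo -cf /etc/sudoers.d/deploy)
    Defaults:deploy timestamp_timeout=60
    deploy ALL=(ALL) ALL
    # instead of the passwordless variant being criticised:
    # deploy ALL=(ALL) NOPASSWD:ALL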

klysm 2 hours ago

I’ve had this argument so many times over the years, and usually the response comes down to security by obscurity because people won’t know the non-root username

Wilder7977 2 hours ago

That, I guess, is relevant in the context of brute-force logins, which, given you only use key auth, is not really something I would stress over. However, depending on what that user does, there might be vulnerable services running with its privileges, or there might be supply-chain vectors for tools that user runs.

simpaticoder 16 hours ago

Thank you for sharing, because I didn't know what cloud-init was until your post. I've done something similar, but packaged as a library of bash functions, designed to be called in arbitrary order. I cannot comment on the specific decisions you made in your file, but the fact that a declarative, compact, and standard solution exists for this problem is music to my ears. Out of curiosity, where did YOU learn of the existence of this feature?
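
For anyone else meeting cloud-init for the first time: the declarative part is a YAML "user-data" file handed to the machine at first boot. A minimal sketch (the user, key, and packages are placeholders, not taken from the gist above):

    #cloud-config
    users:
      - name: deploy
        groups: [sudo]
        shell: /bin/bash
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... you@example.com
    package_update: true
    packages:
      - fail2ban
    runcmd:
      - systemctl enable --now fail2ban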

shortsunblack 16 hours ago

Cloud-init/cloud-config is a standard way to provision Linux hosts. It is slowly being outcompeted by Ignition and friends, though.

mdaniel 15 hours ago

> It is slowly being outcompeted by Ignition and friends, though.

I hope not, because I lack enough foul language to describe my burning hatred for Ignition and all its cutesy campfire-related project codenames. Hate-red.

simpaticoder 10 hours ago

Looks like it was invented by Canonical for AWS/EC2 in 2006 (!). It was then gradually adopted by other clouds over the next 10 years or so (GCP adopted in 2013, Azure a couple years later). Linode (Akamai Cloud now, I guess) adopted in 2023. Obligatory xkcd: https://xkcd.com/1053/

This got me wondering when I first heard about HTML, HTTP, Linux, UTF-8, or any number of things, and from where; how many things I've heard of once and never again; and how many important "standard" things I've never heard of at all.

codelion 16 hours ago

server hardening is definitely an often overlooked aspect... that gist looks comprehensive. i'm curious, have you benchmarked the performance impact of all those security measures? it's a trade-off, right? some community members mentioned using CIS benchmarks as a starting point, then tailoring from there.
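
As a sketch of the "start from a benchmark, then tailor" approach, one common option is to run an audit tool such as Lynis against the box and work through its warnings (assuming Lynis is available in your distro's repos):

    sudo apt-get install -y lynis
    sudo lynis audit system   # prints warnings and hardening suggestions to review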

klysm 2 hours ago

The security/performance tradeoff is hard, but I always try to keep in mind what the downside of either is. A small performance hit can definitely matter, but for most use cases a security hit will matter a lot more.

mediumsmart 18 hours ago

I use rsync in a script. One line builds and the second line deploys - but that pushes to a server someone else has standing on the ground with a disk in it that hosts the site I myself made on a local machine. If I self-hoist the production server into my flat, couldn't I just copy the folder without the internet like from local to local?
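
A sketch of the kind of two-line build-and-deploy script described above; the build command, paths, and host are assumptions:

    npm run build                                      # or whatever produces ./dist locally
    rsync -az --delete ./dist/ user@server:/var/www/site/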

rad_gruchalski 17 hours ago

No need to even copy a folder, simply link it.

indigodaddy 19 hours ago

This seems cool and all, but it's fairly trivial to docker compose whatever stuff/apps you want and install Caddy as the reverse proxy on the host (I normally don't run Caddy in a container, but it might be better to).

You have to set up docker compose files with airo anyway, it looks like, so this just simplifies the Caddy part? But Caddy is so simple to begin with that I'm not sure I see the point.
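
For context, the host-level Caddy setup described here is roughly this much configuration; the domain and upstream port are placeholders:

    # /etc/caddy/Caddyfile
    example.com {
        reverse_proxy localhost:3000
    }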

delduca 18 hours ago

Docker compose + Cloudflare Tunnels is my current setup: no need to deal with SSL or have a public IP address, and if you make use of Tailscale, you don't need any open ports, which is extremely secure and robust.
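
A rough sketch of the tunnel side of that setup, assuming cloudflared is installed and a tunnel has already been created; the tunnel ID, hostname, and port are placeholders:

    # /etc/cloudflared/config.yml
    tunnel: <tunnel-id>
    credentials-file: /etc/cloudflared/<tunnel-id>.json
    ingress:
      - hostname: app.example.com
        service: http://localhost:3000
      - service: http_status:404   # catch-all for unmatched hostnames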

ratorx 18 hours ago

Does it even configure caddy? I can’t see how the caddy config could be generated from the env.yaml (unless it relies on the directory name etc for the path).

Seems like something that could have been solved with just docker compose (by setting a remote DOCKER_HOST). If you need “automatic” proxying, then traefik can do it off container labels.
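
For example, the label-based routing mentioned here looks roughly like this in a compose file (service name, domain, and port are illustrative):

    services:
      app:
        image: myapp:latest
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.app.rule=Host(`app.example.com`)"
          - "traefik.http.services.app.loadbalancer.server.port=3000"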

timdorr 17 hours ago

Nope, it just copies over a hand-made Caddyfile and restarts a docker container that you need to already be running: https://github.com/bypirob/airo/blob/1a827a76f2254e5ca4f4ba4...

This looks extremely barebones and makes a large number of assumptions in its current state. This is better as a Show HN after some more dev work has been completed.

notpushkin 10 hours ago

Agreed – it’s a bit too early to publicize this. Lots of great alternatives discussed here though!

Maybe I should try to Show HN my Docker dashboard, https://lunni.dev/ :thinking:

globular-toast 6 hours ago

Surprised it doesn't use caddy-docker-proxy to automatically route traffic to your compose setup. You could just do that in dev and have a very simple compose override for prod that changes the domain names etc.
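
A sketch of what that could look like with caddy-docker-proxy labels in a prod-only compose override; the service name, domain, and port are assumptions:

    # compose.prod.yml (hypothetical override layered on top of the dev compose file)
    services:
      web:
        labels:
          caddy: app.example.com
          caddy.reverse_proxy: "{{upstreams 3000}}"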

hn_rob 6 hours ago

I have been using a makefile where each target executes a shell script snippet to build, push, or deploy containers. The problem is that with a simple docker build it doesn't recognize modified code files and uses cached layers. To pick up changes in the code I always have to build with --no-cache, which is inefficient. I wonder if Airo can detect changes in the code and rebuild only the image layers that need to be rebuilt.
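
For what it's worth, plain docker build does invalidate cached layers when files referenced by COPY change; ordering the Dockerfile so dependencies are installed before the source is copied keeps rebuilds cheap without --no-cache. A generic sketch (Node is just an example stack):

    FROM node:20-slim
    WORKDIR /app
    # dependency layers: only rebuilt when the manifests change
    COPY package.json package-lock.json ./
    RUN npm ci
    # source layer: rebuilt whenever the code changes
    COPY . .
    RUN npm run build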

globular-toast 6 hours ago

This sounds like a misconfiguration on your part; I've never had this problem with docker before. Are you sure it's not your makefile skipping something because you haven't made the docker bits phony targets? (If you're using make as a "script runner", everything should be a phony target; a sign that you're using the wrong tool, but I digress.)
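
A sketch of the phony-target pattern being described, assuming make is used purely as a task runner; the image name and host are placeholders (recipe lines must be indented with tabs):

    .PHONY: build push deploy
    build:
    	docker build -t registry.example.com/myapp:latest .
    push: build
    	docker push registry.example.com/myapp:latest
    deploy: push
    	ssh deploy@server 'docker compose pull && docker compose up -d'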