Jeff Triplett ✨

Thanks for the update. I haven't looked at Kamal in much detail, but it looks like a wrapper around Docker.

I use Docker and Compose a lot, and assuming Kamal lets you pass through extra options, you should be able to throttle the CPU/memory to keep a process from running away with your droplet. That might be worth doing if you keep seeing spikes. It won't prevent that service from appearing down, but it'll keep the box and the other containers running.

Also, if you aren't running DO's managed databases, please save yourself some grief and do that instead of trying to run them in Docker.

100% using managed databases. I don't trust myself to run those myself haha. At least with the web server, I know I can just nuke it and reset it with a few commands without any data loss.

And yes, Kamal lets you configure resource limits using Docker's config. I think it's called cap? I haven't looked into it yet, but it seems like a good idea. So maybe set each container to use a maximum of 80% CPU, just so it never takes down the full machine? (assuming the other containers don't use up the remaining 20% at the same time)

I have never used cap, but I have used resources with limits + reservations via Compose, which should just be passing those through as CLI options: docs.docker.com/compose/compo… - I would ChatGPT this to see what the args are 🤣

80% of 90% should be good enough to keep control of the box. I'm lazy, and I have two or three dozen containers running without issues on a 4 GB droplet. Worst case, you can narrow it down to the service.
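For reference, Compose's limits + reservations map to plain `docker run` flags, so a Kamal pass-through would look roughly like this. A hedged sketch, assuming the `myapp` service/image names (made up here) and current `docker run` flag names:

```shell
# Cap a container so one runaway process can't take the whole box:
#   --cpus                 hard CPU cap (0.8 = ~80% of one core)
#   --memory               hard memory limit (container is OOM-killed past this)
#   --memory-reservation   soft limit, reclaimed when the host is under pressure
docker run -d --name myapp \
  --cpus="0.8" \
  --memory="512m" \
  --memory-reservation="256m" \
  myapp:latest
```

These are the CLI equivalents of `deploy.resources.limits` / `deploy.resources.reservations` in a Compose file; the exact values (0.8 CPU, 512m) are just placeholders to tune per service.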

Digital Ocean VPS or Hetzner + Cloudflare is hard to beat.

I have run dozens of containers on DO droplets for years, and they are hard to beat. I heard equally good things about Hetzner (I'm also told they are cheaper).

Use the query-string trick of adding ?asdf or something to the URL of your image, and it should re-fetch the newer image.
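The trick works because caches key on the full URL, so any throwaway query string looks like a new resource. A minimal sketch (`example.com` and the filename are hypothetical):

```shell
# Appending a unique query string makes browsers/CDNs treat this as a
# brand-new URL and re-fetch the image instead of serving the stale copy.
base="https://example.com/profile.png"
busted="${base}?v=$(date +%s)"   # timestamp keeps each bust unique
echo "$busted"
```

Any value works (`?asdf`, a version number, a timestamp); the timestamp just saves you from inventing a new string each time.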

You don't need one, but it's a non-zero trade-off if you get big enough to sell. So it's not a blocker, and we can't predict success. I'd focus on the idea and build it out, and if you end up with more money than you know what to do with, then pursuing a .com is a good way to fill your week 🤣

I'm still in the phase before my first big success, so having money I don't know what to do with isn't a thing yet :D Good call about the non-zero trade-off though!

I converted from Buffer to Typefully because I like the IDE and scheduling tool better than anything else. Typefully has good X/Twitter integration but also supports Mastodon and LinkedIn.

They also support Zapier which helped me post to Mastodon for a year or so before they added native support. So Zapier can probably get you anything that's missing or you could use their webhook to roll your own.
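Rolling your own with a webhook is just an HTTP POST to a catch-hook URL that Zapier generates per Zap. A sketch (the hook path and JSON payload here are placeholders, not a real Zap):

```shell
# POST your post's details to a Zapier webhook trigger; the Zap then
# forwards it to whatever network Typefully doesn't cover yet.
curl -X POST "https://hooks.zapier.com/hooks/catch/123456/abcdef/" \
  -H "Content-Type: application/json" \
  -d '{"text": "New post is live!", "url": "https://example.com/post"}'
```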

I wrote about mine last month: jefftriplett.com/2023/default…

Some that stuck out that I immediately miss when I'm without:

  • Bartender
  • Backblaze
  • Obsidian
  • 1Password

Thank you for the apps and for mentioning your blog post. I'm aware of this series of default apps posts and actually want to reach out to the people who posted them. :) So you are the first who replied to a message from the future hahaha.

You should be able to cache these for free via Cloudflare: www.cloudflare.com

Since you mentioned it was open source, you could use a GitHub Release and link to the release files for free.

Thanks, I'll check Cloudflare.
But my 3D render files are around 4 GB+. Will GitHub allow that with heavy traffic and bandwidth?

I think Git LFS maxes out at 1 GB (or 2 GB) per file, so 4 GB is going to be too much. Packages is a little different because I have some 6 GB and 8 GB Docker images in their packaging product, but that is probably more hoops to jump through.

Amazon's S3 might be a good, cheaper solution, or even Backblaze's bulk object storage product. I think it's $5 or $10 a month and would more than cover your file sizes + quoted downloads.
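For what it's worth, hosting a big file on S3 is a couple of commands once the AWS CLI is configured. A sketch, assuming a hypothetical bucket and filename (and that your bucket policy allows public reads):

```shell
# Upload the render archive and mark the object publicly readable:
aws s3 cp renders.zip s3://my-bucket/renders.zip --acl public-read

# It's then downloadable at a predictable URL, e.g.:
# https://my-bucket.s3.amazonaws.com/renders.zip
```

Putting Cloudflare in front of that URL would then absorb most of the repeat-download bandwidth.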

Thanks, Jeff, for explaining in detail.
I'll look at Amazon S3 again. Setting this up feels too complex :)

Was about to suggest S3 as @jefftriplett did. Also take a look at Cloudflare R2 - zero egress charges last I checked.
