(8-10-2025) Digital Ocean App Platform and Droplets

Had the need for a server to test APIs and went with Digital Ocean. They have a managed service called the App Platform and the old standard, Droplets. Droplets are virtual Linux servers: you're the admin, with total flexibility to not know what you're doing. The App Platform has one-click deployments. Which one did I choose?

The App Platform

The App Platform is fully managed, which has pros and cons. One of the biggest pros is that, when creating an app, all you need to do is select the GitHub/GitLab repo you want, and afterwards, every time you push to main, the site deploys. And when it deploys, it works. Security is also taken care of, with SSL/HTTPS, DDoS protection, and automatic OS patching, and the app scales horizontally and vertically automatically.

Not all is perfect, though. There is no persistent file storage, so if your app works with user files, they're gone on app update. Digital Ocean has an add-on service called Digital Ocean Spaces that provides the file storage, but it is an extra cost (although not much). Same goes for a database; but do you really want to be a DB admin anyway? The deal-breaker for my application was the HTTPS. All traffic appears to route through Cloudflare (the likely provider of the DDoS protection), and Cloudflare likes HTTPS, so even though the App Platform provides an HTTP redirect mechanism, plain HTTP never makes it through untouched. So dumb dumb old IoT devices that only talk HTTP can't get through.
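You can see the problem with a quick curl against a hypothetical App Platform URL (substitute your own app's hostname): the plain-HTTP request is answered with a redirect to HTTPS, which an HTTP-only device will never be able to follow.

```shell
# Hypothetical app hostname; substitute your own App Platform URL.
# -s = silent, -I = headers only. A plain-HTTP request comes back as a
# redirect to the HTTPS version of the same URL.
curl -sI http://my-app-platform-app.ondigitalocean.app/ | head -3

# An HTTP-only IoT client stops here: it can't speak TLS, so the
# Location: https://... target is unreachable for it.
```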

Droplets

Anyway, Droplets. Yuck. Last time I tried using a Droplet, I couldn't get Nginx (or it might have been Apache) to do anything but complain and whine. You have to do everything yourself: SSH in and start typing things like "sudo nano /etc/nginx/sites-available/default". Droplets do nothing except mock your incompetence while being taken over by bot-generating crypto miners. That, and provide a reverse proxy HTTP server to redirect HTTP traffic to the HTTPS server.

The Droplet Worked?

Amazingly, yes. Turns out setting up a simple reverse proxy server is not all that hard. First thing you need to do is add a redirect to your App Platform App: go into Settings->HTTP Routes Redirect, select Add new redirect route, and enter the route details for your app.

Now, after making a Droplet (for instance, an Ubuntu Linux Droplet), follow these steps:
  1. Just to keep it simple, open the Console from the Digital Ocean Dashboard
  2. Update Linux: sudo apt update
  3. Install Nginx: sudo apt install nginx
  4. Open the default sites file for editing: sudo nano /etc/nginx/sites-available/default
  5. Copy in the following server block (with appropriate edits for the redirect):

server {
    listen 80;

    location /my-route-path/ {
        proxy_pass https://my-app-platform-app.ondigitalocean.app/;
        proxy_set_header Host my-app-platform-app.ondigitalocean.app;
        proxy_ssl_server_name on;
        proxy_ssl_verify off; # Optional: disables SSL verification if needed
    }

    location / {
        return 404;
    }
}

Now, because restarting Nginx may or may not work, just power down the Droplet using "sudo shutdown -h now". Then turn the Droplet back on using the dashboard ON/OFF slider. If all the magic works, you now have a reverse proxy server and can test HTTP API calls using curl or whatnot. Obviously the Droplet has already been taken over by the bot farm crypto miners, but just in case it hasn't, you need to lock it down.
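For what it's worth, the less drastic route is usually to validate the config and reload Nginx in place. A sketch, assuming a stock systemd-based Ubuntu Droplet:

```shell
# Check the config for syntax errors first; this prints
# "syntax is ok" / "test is successful" when the file is good
sudo nginx -t

# Reload the service in place instead of a full power cycle
sudo systemctl reload nginx

# Confirm Nginx is up
systemctl status nginx --no-pager
```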

Securing the Droplet

Now I'm no security expert so do your own research, but there are several common things I found that need doing.

Keep the OS up to date

Pretty obvious first step. You can manually check/update the OS, but who wants to do that when there's unattended-upgrades.

sudo apt update && sudo apt upgrade -y
sudo apt install unattended-upgrades

That first line checks and, if there are updates, installs them. The second line installs unattended-upgrades. Now check the configuration file:
sudo nano /etc/apt/apt.conf.d/50unattended-upgrades
Here's the block that sets what to update:

Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}";
    "${distro_id}:${distro_codename}-security";
    // Extended Security Maintenance; doesn't necessarily exist for
    // every release and this system may not have it installed, but if
    // available, the policy for updates is such that unattended-upgrades
    // should also install from here by default.
    "${distro_id}ESMApps:${distro_codename}-apps-security";
    "${distro_id}ESM:${distro_codename}-infra-security";
//  "${distro_id}:${distro_codename}-updates";
//  "${distro_id}:${distro_codename}-proposed";
//  "${distro_id}:${distro_codename}-backports";
};
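One thing worth double-checking: the periodic run itself is controlled by a separate file, /etc/apt/apt.conf.d/20auto-upgrades. On Ubuntu, installing the package usually creates it for you, but confirm it contains (at least) these two lines, or unattended-upgrades never actually fires:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

You can also kick off a manual pass with "sudo unattended-upgrade --dry-run --debug" to watch what it would do without changing anything.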

As you can see, all the security-related stuff is un-commented, so it's active. There's another important setting further down in the file:

// Automatically reboot *WITHOUT CONFIRMATION* if
// the file /var/run/reboot-required is found after the upgrade
Unattended-Upgrade::Automatic-Reboot "true";

This setting allows unattended-upgrades to reboot the system if needed after applying a patch. This will likely be commented out or set to false. You'll want that set to true unless you plan frequent manual reboots.
There are two complementary settings that can be useful:

// Automatically reboot even if there are users currently logged in
// when Unattended-Upgrade::Automatic-Reboot is set to true
//Unattended-Upgrade::Automatic-Reboot-WithUsers "true";

// If automatic reboot is enabled and needed, reboot at the specific
// time instead of immediately
// Default: "now"
//Unattended-Upgrade::Automatic-Reboot-Time "02:00";

So if you have a bunch of users that you don't want to "boot" off (so funny), you can either set the first to 'false' (which I think is unwise, thank you) and/or, the wiser choice, set Automatic-Reboot-Time to the middle of the night wherever most of your users live, so they're in pleasant repose (as opposed to morbid repose; those users never log out).

Ok, now that's set up and it's a few days later. Is it doing anything? Check the logs in /var/log/unattended-upgrades. First, that directory should exist. Second, it should contain these files:

  1. unattended-upgrades.log - If this file contains daily timestamped operation records, the process is at least running. Check it periodically for additional information such as errors.
  2. unattended-upgrades-dpkg.log - If any upgrades occurred, this file will include the raw output of dpkg commands, information about packages installations and removals and status of package operations. This file will be empty until the first package update.
  3. unattended-upgrades-shutdown.log - If an update requires a reboot, this log will contain various related information (scheduling, reason, progress). This file will be empty until the first update that needs a reboot.

Secure SSH Access

This step is needed even if you just use the Digital Ocean Console, so you may as well be in the cool crowd and set up remote SSH for the Droplet. I'll leave that for homework. Once logged in, set up a new user with sudo access:

sudo adduser new_user_name
sudo usermod -aG sudo new_user_name (puts the user into the sudo group)
sudo su - new_user_name (switches to the new user)
sudo -l (checks that the new user has sudo access)

While logged in as the new user, add the root SSH key to the new user account - you could also make a new one, but since this new user is meant to replace root, just keep the same key.
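Spelled out, copying root's authorized key over looks something like this (assuming the new user is new_user_name, as above):

```shell
# Create the .ssh directory for the new user and copy root's key in
sudo mkdir -p /home/new_user_name/.ssh
sudo cp /root/.ssh/authorized_keys /home/new_user_name/.ssh/authorized_keys

# SSH insists on strict ownership and permissions, or it ignores the key
sudo chown -R new_user_name:new_user_name /home/new_user_name/.ssh
sudo chmod 700 /home/new_user_name/.ssh
sudo chmod 600 /home/new_user_name/.ssh/authorized_keys
```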

Time to change some permissions in sshd_config. Open it for editing:
sudo nano /etc/ssh/sshd_config
Not the worst idea to copy the contents out for backup purposes. Then change the following settings:

Port 1000 (Make the number something other than the default value of 22.)
PermitRootLogin no (Disables the ability to log in as root.)
PasswordAuthentication no (Disables tunneled clear text passwords.)
PubkeyAuthentication yes (Allows SSH public key authentication. Note the directive is PubkeyAuthentication, not PublicKeyAuthentication.)
AuthorizedKeysFile .ssh/authorized_keys (Treated as a relative path; works for all users (ex. /home/new_user_name/.ssh).)
AllowUsers new_user_name (Users that can SSH in. Don't put 'root' here.)

Save and exit nano (insert nano joke here), then test the file for syntax errors:
sudo sshd -t
If there is no output from that command, the file is good. Now you can white knuckle it and restart the SSH service, then log out...did you lock yourself out? Probably better to:

  1. Make another sudo user
  2. Add that user to AllowUsers
  3. Login as that new user from another terminal session to check that you can, then log back out.
  4. Now, from your original session, restart the sshd service: sudo systemctl restart sshd
  5. Try logging in as that other new user. It won't work. Oh no!
  6. Add the port to the login (ex. ssh -i ~/.ssh/digitalocean_key -p 1000 other_sudo_user@your_droplet_ip)
  7. Now does it work? If yes, great. If no, at least you're still logged in as new_user_name
  8. Assuming the happy case, try logging in as root. That should fail.
Fabulous. The Droplet has a new SSH port that the crypto miner bots need to brute force guess and they can't login with passwords nor as root. Take that crypto miner bots!

Install a Firewall

Time to block stuff with a firewall called ufw. Install/configure with these commands.

sudo apt install ufw
sudo ufw default deny incoming (start by denying incoming traffic on all ports)
sudo ufw default allow outgoing
sudo ufw allow 1000/tcp (allow the custom SSH port set previously in sshd_config)
sudo ufw deny 22/tcp (explicitly deny the standard SSH port so it shows up in 'sudo ufw status')
sudo ufw allow 80/tcp (need to allow port 80 or the proxy server won't work)
sudo ufw allow 443/tcp (the standard port for HTTPS traffic to the App Platform)
sudo ufw enable

Check the status to make sure it's up and running: sudo ufw status

Status: active

To             Action      From
--             ------      ----
80/tcp         ALLOW       Anywhere
443/tcp        ALLOW       Anywhere
1000/tcp       ALLOW       Anywhere
22/tcp         DENY        Anywhere
80/tcp (v6)    ALLOW       Anywhere (v6)
443/tcp (v6)   ALLOW       Anywhere (v6)
1000/tcp (v6)  ALLOW       Anywhere (v6)
22/tcp (v6)    DENY        Anywhere (v6)

If IP v6 is not needed, you can disable it by editing the ufw config file:
sudo nano /etc/default/ufw
Set 'IPV6=no'.
However, these days you can't guarantee your IoT device won't hop onto an IP v6 network, and the App Platform has an IP v6 address (you can confirm by going to https://mxtoolbox.com, selecting DNS lookup and pasting in the URL of your App Platform app), so disabling IP v6 is probably not a good idea.

The logs for ufw can be viewed here: sudo less /var/log/ufw.log
You will probably see some incoming traffic getting blocked. Another win against the crypto miner bots.
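If you want a quick tally of who's knocking, the blocked lines are easy to aggregate. Here's a sketch against a fabricated sample (real entries live in /var/log/ufw.log and look roughly like these):

```shell
# Fabricated sample lines in the usual '[UFW BLOCK]' format
cat > /tmp/ufw-sample.log <<'EOF'
Aug 10 03:12:44 droplet kernel: [UFW BLOCK] IN=eth0 OUT= SRC=203.0.113.7 DST=10.0.0.2 PROTO=TCP DPT=22
Aug 10 03:12:59 droplet kernel: [UFW BLOCK] IN=eth0 OUT= SRC=203.0.113.7 DST=10.0.0.2 PROTO=TCP DPT=22
Aug 10 03:14:02 droplet kernel: [UFW BLOCK] IN=eth0 OUT= SRC=198.51.100.23 DST=10.0.0.2 PROTO=TCP DPT=80
EOF

# Count blocked packets per source IP, busiest offender first
grep -o 'SRC=[0-9.]*' /tmp/ufw-sample.log | sort | uniq -c | sort -rn
```

Point the grep at /var/log/ufw.log (with sudo) on the real system to see the actual offenders.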

Install fail2ban

Fail2ban is a security tool for Linux servers that protects against brute-force attacks by monitoring log files and banning malicious IP addresses. It works by scanning log files for patterns indicating failed login attempts or other suspicious activity. If it detects a certain number of failures from a single IP, it can automatically update firewall rules to block that IP address, preventing further access for a specified period.
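The counting-failures-per-IP idea is simple enough to sketch in a few lines of awk. This is a toy illustration against a fabricated auth.log sample, not how fail2ban is actually implemented:

```shell
# Fabricated failed-login lines in the usual sshd auth.log shape
cat > /tmp/auth-sample.log <<'EOF'
Aug 10 04:01:10 droplet sshd[1201]: Failed password for root from 203.0.113.7 port 53122 ssh2
Aug 10 04:01:12 droplet sshd[1201]: Failed password for root from 203.0.113.7 port 53124 ssh2
Aug 10 04:01:15 droplet sshd[1202]: Failed password for invalid user admin from 203.0.113.7 port 53130 ssh2
Aug 10 04:01:17 droplet sshd[1203]: Failed password for root from 203.0.113.7 port 53131 ssh2
Aug 10 04:01:20 droplet sshd[1204]: Failed password for root from 203.0.113.7 port 53140 ssh2
Aug 10 04:02:01 droplet sshd[1210]: Failed password for root from 198.51.100.23 port 40100 ssh2
EOF

# Flag any IP with maxretry (5) or more failures -- roughly the decision
# fail2ban makes before inserting a firewall rule to ban the IP
awk '/Failed password/ { for (i = 1; i <= NF; i++) if ($i == "from") ip = $(i+1); count[ip]++ }
     END { for (ip in count) if (count[ip] >= 5) print ip, "-> ban (" count[ip] " failures)" }' /tmp/auth-sample.log
```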

  1. Install fail2ban: sudo apt install fail2ban -y
  2. Check the status: sudo systemctl status fail2ban
  3. If not running, start it: sudo systemctl start fail2ban
  4. Enable fail2ban to run on boot: sudo systemctl enable fail2ban
There is some customization to do. The default jail configuration can be viewed here: cat /etc/fail2ban/jail.conf
However, you don't want to touch that file. Instead, make a custom config file:
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
Open it for editing:
sudo nano /etc/fail2ban/jail.local
You may see a section for [dropbear] with ssh configuration. Dropbear is used in resource-constrained Linux devices and is likely not even installed. You can check using:
ps aux | grep dropbear
If you don't see it running, it's not installed, and you can disable it in the jail file.
Similarly, there might be a [selinux-ssh] section. It's also probably not installed. Check with the one-word command: sestatus
If it returns "Command 'sestatus' not found" it's not installed and can be disabled.

[dropbear]
enabled = false

[selinux-ssh]
enabled = false

OpenSSH is the standard for Droplets, so it's highly likely that it's on. Confirm it's running:
ps aux | grep sshd
The section in the jail file covering SSH is [sshd] (see below).

[sshd]
enabled = true
port =
filter = sshd
logpath = %(sshd_log)s
backend = %(sshd_backend)s
maxretry =
bantime =
findtime =

Some of the settings will be filled in already and you will need to add others. The logpath and backend settings default to those system variables. These are important to get right because fail2ban works by monitoring these logs for nefariousness. So let's confirm/change them. Confirm the log path is the default location:
sudo grep sshd /var/log/auth.log
That should return sshd entries, confirming that's where the sshd logs are going. You can get the actual path for system variable sshd_log:
grep sshd_log /etc/fail2ban/paths-*.conf
For me, that returned sshd_log = %(syslog_authpriv)s
Now another grep is needed to see where syslog_authpriv goes:
grep syslog_authpriv /etc/fail2ban/paths-*.conf
That returned: syslog_authpriv = /var/log/auth.log
Ok, great. I could keep the system variable, but that's too much grepping; let's just change the logpath to /var/log/auth.log

Now let's handle the backend setting.
grep sshd_backend /etc/fail2ban/paths-*.conf
Returned:

/etc/fail2ban/paths-arch.conf:sshd_backend = systemd
/etc/fail2ban/paths-common.conf:sshd_backend = %(default_backend)s
/etc/fail2ban/paths-debian.conf:sshd_backend = systemd
/etc/fail2ban/paths-opensuse.conf:sshd_backend = systemd

There are multiple entries, so we need to make an educated guess that the Debian one is correct for Ubuntu; not a stretch, as Ubuntu is based on Debian. And the AI says that on Ubuntu, OpenSSH logs are sent to /var/log/auth.log (confirmed previously), not the system journal; journal logging would require explicit configuration.
Check with command: journalctl -u sshd
I get: -- No entries --
So it's not configured. Check to see if the journal has any sshd entries:
journalctl -u sshd | grep "sshd"
That returned nothing, so more evidence that the journal system is not set up to capture ssh logs. The safest thing to do is set: backend = auto

Of note, the [DEFAULT] section will likely set backend to auto, so that's the fallback if the [sshd] section does not have it. Setting it to auto will cause fail2ban to try pyinotify (the most efficient, real-time monitoring), then gamin, then polling (the last resort), depending on what's available on the system. Note that pyinotify is a Python library (the kernel's inotify support is built in, so lsmod won't show it); check whether the bindings are installed:
dpkg -l | grep python3-pyinotify
I get zippo, so it's not there.
Check if gamin is installed:
dpkg -l | grep gamin
Nothing.
See if fail2ban might be using gamin as a library:
ldconfig -p | grep gamin
Not there.
Magically running?
ps aux | grep gam_server
No. So polling it is, unless you want to install these packages. Here's an example of the final settings:

[sshd]
enabled = true
port = 1000
filter = sshd
logpath = /var/log/auth.log
backend = auto
maxretry = 5
bantime = 3600
findtime = 600

Test the fail2ban configuration:
sudo fail2ban-client -t
If that returns "OK: configuration test is successful", there are no typos in jail.local. Restart fail2ban:
sudo systemctl restart fail2ban
There are a couple commands to use for monitoring the fail2ban logs:
sudo tail -f /var/log/fail2ban.log
sudo grep sshd /var/log/auth.log
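And once it's been running for a while, you can ask fail2ban directly what the sshd jail has caught:

```shell
# Show the sshd jail's failure counters and any currently banned IPs
sudo fail2ban-client status sshd

# If you ever manage to ban yourself, this lifts the ban
# (replace 203.0.113.7 with the banned IP)
sudo fail2ban-client set sshd unbanip 203.0.113.7
```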

Well I'm 'bout plum worn out after all that grep'n, but the Droplet can now fight back against the crypto mining bot farmers. So the moral of the story for this proxy server is: "'bout 10 minutes to make a proxy server and 10 hours to secure it."