Build a Homelab server using an old laptop

I have a 10-year-old laptop that had been sitting idle for a few years. Recently I cleaned it up and repurposed it, deploying Proxmox on the bare metal to turn it into a simple home Homelab server. It runs Ubuntu and FnOS virtual machines, and with intranet penetration it can serve as an always-online VPS-like box or as a NAS system to back up files and photos.

This article introduces the complete deployment solution of PVE + Virtual Machines + FRP Intranet Penetration + Nginx Reverse Proxy, which safely exposes LAN services for access via a public domain.

Foreword

Proxmox Virtual Environment (PVE) is an open-source virtualization platform based on KVM. As of this writing, the latest release is 9.1, built on Debian 13; the project is actively maintained and free.

The reason for not installing a system directly on the bare metal is that a virtualization layer underneath makes it much easier to manage, switch, and run systems in parallel (within the limits of an old laptop’s performance, of course). If a guest system breaks, I can simply delete and rebuild it, or restore a snapshot in PVE, which is very convenient. It also lets me try out whichever NAS system I please.

Furthermore, with hardware virtualization support enabled there is little performance loss (KVM’s CPU overhead is usually around 2-5%), and hardware passthrough can be configured for the virtual machines to reduce it further.

Hardware Modification

My laptop is an ASUS P453UJ with an i7-6500U CPU, 8GB of DDR4 memory, and a 500GB HDD. In terms of expandability it has only one SATA port, occupied by the HDD, plus an optical drive, so it cannot take two hard drives directly.

Although it only has 8GB of memory, RAM prices are currently absurd, so I don’t plan to expand it. 8GB is enough for the services I want to run, and I also added an extra 8GB of swap space for PVE.
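For reference, here is a minimal sketch of one way to add that swap, assuming a file-based swap at /swapfile on the PVE host’s ext4 root (adapt size and path to taste):

fallocate -l 8G /swapfile                         # reserve an 8 GB file
chmod 600 /swapfile                               # swap must not be world-readable
mkswap /swapfile                                  # format it as swap space
swapon /swapfile                                  # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots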

Optical Drive to HDD Bay Conversion

It’s already 2025: the original Win10 install on the HDD took several minutes from power-on to usable, which was unbearably slow, so the first step was obviously to swap in an SSD. The optical drive is no longer useful either, so you can remove it and buy a caddy that adapts a 2.5-inch SATA drive into the bay, which costs anywhere from a few yuan to a dozen or so.

Additionally, I happened to have a 256GB M.2 (NGFF) SSD, perfect for the system. But since the laptop lacks that interface it cannot be installed directly, so I also bought an M.2 NGFF to 2.5-inch SATA adapter enclosure.

I then slotted that into the optical drive bay, leaving the original SATA port free for a separate hard drive.

Network Card Upgrade

This laptop’s built-in wireless card is a Qualcomm QCNFA435, a low-to-mid-range part that only supports Wi-Fi 5 with 1x1 MIMO, topping out at 433Mbps.

That spec can’t even saturate today’s broadband; it’s like an old ox pulling a broken cart.

So I upgraded to an Intel AX200/AX210, which supports Wi-Fi 6 with 2x2 MIMO and up to 2.4Gbps, far beyond the QCNFA435. They currently sell for 40-60 yuan.

Installing PVE

Note: Before installing PVE, please make sure your CPU supports virtualization and that it is enabled in the BIOS.
Intel: set Advanced → CPU Configuration → Intel Virtualization Technology to Enabled.
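If you already have a Linux shell on the machine (a live USB is enough), a quick way to confirm the CPU exposes VT-x/AMD-V:

grep -Ec '(vmx|svm)' /proc/cpuinfo   # a non-zero count means hardware virtualization is available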

Then download the image from the official PVE website: Downloads

Find a USB drive and flash the image onto it. Ventoy is recommended: the drive still works as ordinary storage, and you can drop multiple ISOs on it and pick one at boot, so you never have to reflash. Fbinstool offered something similar years ago, but Ventoy is even more convenient.

Also, it is best to have the machine plugged into Ethernet while installing PVE. The default network setup is a bridge, so the box is reachable over the LAN right after installation.

After successful installation, screenfetch:

By default, PVE installs without an X environment; it is managed from a browser on another LAN device via https://<PVE-IP>:8006 (8006 is the default port).

PVE Optimization/Configuration

Connecting to Wi-Fi Network

Since the laptop has a built-in WLAN card and I don’t want the machine tethered to an Ethernet cable, I’d like it to connect over Wi-Fi. Then the only thing left to solve is power, and it can be placed anywhere.

Network Configuration

Below, I will introduce how to connect PVE to Wi-Fi and configure virtual machines to use NAT networking.

  1. Enter ip a to view the wireless network card device name; mine is wlp3s0.

  2. Install the Wi-Fi packages:
apt install -y wpasupplicant iw wireless-tools
  3. Configure and save the Wi-Fi network(s) to connect to (multiple can be configured) in /etc/wpa_supplicant/wpa_supplicant.conf:
ctrl_interface=/run/wpa_supplicant
update_config=1

network={
    ssid="wifiname1"
    psk="password1"
    priority=5
}
  4. Edit /etc/network/interfaces (e.g. nano /etc/network/interfaces) and modify the Wi-Fi card section, noting that wlp3s0 should be replaced with your own device name:
allow-hotplug wlp3s0
iface wlp3s0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
  5. Bring the interface up with ifup wlp3s0. Running ip a should now show the Wi-Fi card connected with an IP address.

  6. Enable IP forwarding by editing /etc/sysctl.conf:

  • Remove the # before #net.ipv6.conf.all.forwarding=1 and save.
  • Remove the # before #net.ipv4.ip_forward=1 and save.

Note: In the latest version of PVE (9.1), the /etc/sysctl.conf file no longer exists. You need to create a new conf file in the /etc/sysctl.d directory.

root@homelab:/etc/sysctl.d# cat pve.conf
net.ipv6.conf.all.forwarding=1
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.wlp3s0.rp_filter = 0
  7. Modify /etc/network/interfaces to add the vmbr0 NAT rules:
auto vmbr0
iface vmbr0 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o wlp3s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o wlp3s0 -j MASQUERADE

    post-up ip6tables -t nat -A POSTROUTING -o wlp3s0 -j MASQUERADE
    post-down ip6tables -t nat -D POSTROUTING -o wlp3s0 -j MASQUERADE
iface vmbr0 inet6 auto
  8. Restart networking; PVE should now connect to Wi-Fi and be reachable over the LAN (two quick sanity checks follow this list):
systemctl restart networking
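As promised, two quick sanity checks, assuming the device name wlp3s0 and the sysctl drop-in from step 6:

sysctl --system                  # reload all sysctl files, including /etc/sysctl.d/pve.conf
iw dev wlp3s0 link               # should print the SSID the card is associated with
ping -c 3 -I wlp3s0 223.5.5.5    # confirm outbound traffic actually leaves via the Wi-Fi card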

Reference article: PVE steps to connect to Wi-Fi using a wireless network card

DHCP

Although PVE has been connected to Wi-Fi, virtual machines still cannot obtain network access at this point because the previous configuration changed PVE’s network to NAT mode (generally, household WLAN cards do not support direct bridging). A DHCP service also needs to be added in PVE to assign IP addresses to virtual machines.

Install isc-dhcp-server:

apt install isc-dhcp-server

Edit /etc/dhcp/dhcpd.conf:

# Define the subnet; must match vmbr0's IP range
subnet 10.10.10.0 netmask 255.255.255.0 {
    # 1. IP range to assign to virtual machines (here, 100 to 200)
    range 10.10.10.100 10.10.10.200;

    # 2. Core setting: tell the virtual machines who the gateway is (PVE's vmbr0)
    option routers 10.10.10.1;

    # 3. DNS servers (114.114.114.114 or 8.8.8.8 both work)
    option domain-name-servers 114.114.114.114, 8.8.8.8;

    # 4. Lease time settings (unit: seconds; here set to 12 hours)
    default-lease-time 43200;
    max-lease-time 86400;

    # 5. Static IP reservation
    host fnos-server {
        hardware ethernet BC:24:11:A7:EB:99;
        fixed-address 10.10.10.100;
    }
}

Then start and enable isc-dhcp-server:

systemctl restart isc-dhcp-server
systemctl enable isc-dhcp-server
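If the service fails to start, note that on Debian-based systems isc-dhcp-server also needs to be told which interface to listen on, via /etc/default/isc-dhcp-server:

# /etc/default/isc-dhcp-server
INTERFACESv4="vmbr0"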

At this point, virtual machines should be able to normally obtain IP addresses and connect to the internet.

My complete /etc/network/interfaces is as follows:

auto lo
iface lo inet loopback

iface nic0 inet manual
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward

iface nic1 inet manual

auto wlp3s0
iface wlp3s0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

auto vmbr0
iface vmbr0 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o wlp3s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o wlp3s0 -j MASQUERADE

    post-up ip6tables -t nat -A POSTROUTING -o wlp3s0 -j MASQUERADE
    post-down ip6tables -t nat -D POSTROUTING -o wlp3s0 -j MASQUERADE
iface vmbr0 inet6 auto

source /etc/network/interfaces.d/*

NAT Port Forwarding

Because we used NAT networking when connecting to Wi-Fi earlier, the virtual machines in PVE are isolated from the physical network, so their services cannot be reached directly via ip:port from other LAN devices.

If you want to access it directly via the local area network, you also need to configure iptables for port forwarding.

For easier expansion, you can hook a post-up script into /etc/network/interfaces:

auto vmbr0
iface vmbr0 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o wlp3s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o wlp3s0 -j MASQUERADE

    post-up ip6tables -t nat -A POSTROUTING -o wlp3s0 -j MASQUERADE
    post-down ip6tables -t nat -D POSTROUTING -o wlp3s0 -j MASQUERADE

    # === Load port forwarding script ===
    post-up /root/documents/iptables-nat.sh

iface vmbr0 inet6 auto

If you want to add new port forwards later, you only need to edit this script:

#!/bin/sh

# Define variables
WIFI_IF="wlp3s0"        # Your Wi-Fi NIC name
VM_NET="10.10.10.0/24"

# Clear old NAT rules (to prevent duplicates)
iptables -t nat -F PREROUTING
# ---------------------------------------------------
# Add your port forwarding rules below
# Format: iptables -t nat -A PREROUTING -i $WIFI_IF -p tcp --dport [Host Port] -j DNAT --to [VM IP]:[VM Port]
# ---------------------------------------------------

# Example 1: SSH (Host 10022 -> VM 22)
# iptables -t nat -A PREROUTING -i $WIFI_IF -p tcp --dport 10022 -j DNAT --to 10.10.10.100:22
# Example 2: Remote Desktop RDP (Host 33890 -> VM 3389)
# iptables -t nat -A PREROUTING -i $WIFI_IF -p tcp --dport 33890 -j DNAT --to 10.10.10.101:3389

Run the script once to make the rules take effect. Since it is hooked into /etc/network/interfaces above, it also runs every time the networking service starts, so you don’t have to worry about the rules disappearing after a reboot.
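For example, to apply the rules by hand and confirm they landed (same script path as above):

chmod +x /root/documents/iptables-nat.sh
/root/documents/iptables-nat.sh
iptables -t nat -L PREROUTING -n --line-numbers   # list the active DNAT rules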

Access method: If the host’s IP on the physical router is 192.168.1.123, then you can access 192.168.1.123:10022 to reach the virtual machine (10.10.10.100:22).

Wi-Fi Full Power Operation

By default, WLAN network cards are in standard power-saving mode, periodically entering a sleep state and only waking up when data packets need to be sent or received. However, for a device that needs to be continuously connected long-term, I want the machine to maintain the highest network stability and performance.

Therefore, the network card can be set to operate at full power by default:

# View network card list, find the WLAN card; mine is wlp3s0
$ iw dev
# Check the card's current power-saving mode
$ iw dev wlp3s0 get power_save
Power save: on
# Disable power-saving mode
$ sudo iw dev wlp3s0 set power_save off
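Note that iw settings do not survive a reboot. One way to persist this, sketched against the ifupdown stanza from earlier, is a post-up hook on the Wi-Fi interface:

auto wlp3s0
iface wlp3s0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
    post-up iw dev wlp3s0 set power_save off   # full power whenever the interface comes up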

One-Click Optimization Script

There are one-click optimization scripts for PVE that can directly configure passthrough, CPU/disk/temperature display, source switching and subscription-nag removal, and CPU turbo mode. It’s very convenient; pve-diy is recommended.

bash -c "$(curl -fsSL https://raw.githubusercontent.com/xiangfeidexiaohuo/pve-diy/master/pve.sh)"

You can select and configure as needed.

Installing X Environment

Because PVE installs without an X environment, you only get a terminal at the machine itself, yet PVE is managed through its web UI, which is very inconvenient without network access or a second machine.

However, PVE is based on Debian, so we can naturally install an X environment on it, and the official documentation provides methods for installing xfce: Developer_Workstations_with_Proxmox_VE_and_X11

Simply put, just a few commands:

apt update && apt dist-upgrade
apt install xfce4 chromium lightdm
# adduser newusername
systemctl start lightdm

# Auto-enter X environment on boot
systemctl enable lightdm

This way xfce starts on every boot, and with chromium installed you can manage PVE locally via localhost:8006.
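If you also want the desktop to come up without a password after every boot, lightdm supports auto-login; a minimal sketch, assuming a user named pveuser created with adduser above:

# /etc/lightdm/lightdm.conf
[Seat:*]
autologin-user=pveuser
# Some distros additionally require the user to be in an 'autologin' group.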

Disable Lid-Close Sleep

Laptops sleep by default when the lid is closed, so this must be disabled; otherwise the server becomes unusable the moment the lid shuts.

Edit /etc/systemd/logind.conf, change the following three lines to ignore, and uncomment them.

# Not charging
HandleLidSwitch=ignore
# Charging
HandleLidSwitchExternalPower=ignore
# Connected to docking station
HandleLidSwitchDocked=ignore
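The change only takes effect after logind is restarted (or after a reboot):

systemctl restart systemd-logind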

Built-in Battery as UPS

Since the laptop has its own battery, it can act as a simple UPS when mains power is lost.

Note: I believe the greatest value of a UPS is to ensure the system shuts down properly after an abnormal power loss, preventing data loss and hardware damage due to power failure.

Install upower and configure its actions:

vim /etc/UPower/UPower.conf
[UPower]
# Enable support for Watts Up Pro metering devices (not required for the battery itself)
EnableWattsUpPro=true
# Ignore lid-close events (laptop doesn't sleep when the lid shuts)
IgnoreLid=true
# Base policy on percentage instead of estimated remaining time
UsePercentageForPolicy=true

# Low battery warning - 50%
PercentageLow=50
# Critical battery - 40%
PercentageCritical=40
# Battery level at which the action runs - 30%
PercentageAction=30
# Critical power action (shutdown)
CriticalPowerAction=PowerOff

Then enable the upower service and set it to start automatically on boot:

root@homelab:~# systemctl start upower
root@homelab:~# systemctl enable upower
Created symlink '/etc/systemd/system/graphical.target.wants/upower.service' → '/usr/lib/systemd/system/upower.service'.

This way, when the laptop loses power and the battery capacity drops below 30%, it will automatically trigger a shutdown, ensuring that both the PVE and virtual machine environments can exit normally.
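To verify what upower actually sees, you can enumerate and inspect the battery device (the device path varies by machine):

upower -e                              # list power devices
upower -i $(upower -e | grep -i bat)   # show battery state, percentage, and time estimates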

Virtual Machine Installation

I ran two systems on PVE:

  • Ubuntu
  • FnOS, whose stable release was recently shipped by Feiniu. As a free Chinese NAS system, its community is quite active.

You can directly enter the URL in PVE to download the image:

When creating a virtual machine, simply specify the image. You can see the screen and perform operations in each virtual machine’s console:

Under a NAT network, to access services within a virtual machine, you need to use iptables for port forwarding, as detailed in NAT Port Forwarding earlier in this article.

Disk Passthrough

Since this laptop now has two drives, a 256GB SSD for the system and virtual machines and a 4TB HDD dedicated to data storage, I passed the HDD through to FnOS.

You can view the current disk information using ls /dev/disk/by-id:

Note down the ID of the disk you want to pass through, then attach it to the specified virtual machine with the following command:

# qm set {VM_ID} -scsi2 /dev/disk/by-id/{DISK_ID}
qm set 100 -scsi2 /dev/disk/by-id/ata-HGST_HTSXXXXXXXXXX30_RCYYYYYYYYYYEM
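To double-check the attachment (or undo it), qm can show and edit the VM’s config:

qm config 100 | grep scsi2   # the disk should now appear on the scsi2 slot
qm set 100 --delete scsi2    # detach it again if needed; data on the disk is untouched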

Then, in the virtual machine’s hardware settings, you can see this disk:

Intranet Penetration

In a previous article, I detailed how to deploy an intranet penetration service on a VPS: Using frp for intranet penetration
Similarly, virtual machines running on PVE and PVE itself can use frpc to connect to a public VPS for intranet penetration services.

However, they run in slightly different environments. Recently I have gradually been packaging all dependent services into Docker images and running them as containers across machines; this unifies the runtime environment, leaving only the configuration to maintain.

Therefore, I packaged frp into a Docker image for easy deployment in various environments: frp-docker

You can build the image yourself with the docker_builder in the repository, or use the prebuilt version (currently 0.65.0 at the latest), which by default ships frpc/frps for both amd64 and arm64.

After importing the image, you can start it directly by specifying two parameters:

  • MODE: frpc or frps
  • ARCH: amd64 or arm64

docker run to start:

# frpc
docker run -d \
  --name frpc \
  --network host \
  --restart=always \
  -e MODE=frpc \
  -e ARCH=amd64 \
  -v /home/root/frp/frpc.toml:/etc/frp/frpc.toml \
  frp-linux:0.65.0
# frps
docker run -d \
  --name frps \
  --network host \
  --restart=always \
  -e MODE=frps \
  -e ARCH=amd64 \
  -v /home/root/frp/frps.toml:/etc/frp/frps.toml \
  frp-linux:0.65.0

This makes running intranet penetration very simple; you just need to maintain the frpc/frps configuration files.
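For reference, a minimal frpc.toml sketch that matches the Nginx example later in this article; the server address, token, and the FnOS web port are placeholders you must adapt to your own setup:

# /home/root/frp/frpc.toml (illustrative values only)
serverAddr = "vps.example.com"
serverPort = 7000
auth.token = "change-me"

[[proxies]]
name = "fnos-web"
type = "tcp"
localIP = "10.10.10.100"   # the FnOS VM behind PVE's NAT
localPort = 5666           # whatever port the FnOS web UI listens on
remotePort = 15667         # the VPS port that Nginx proxies to below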

If new tunnel ports are added, the container needs to be restarted to pick up the configuration.

Nginx Reverse Proxy

After we forward the service ports from PVE virtual machines to the VPS via frp, we can also bind a domain name through Nginx to expose local area network services for public domain access.

Taking FnOS as an example, I created a subdomain on Cloudflare, fnos.xxx.com, resolving to the VPS’s IP.

Then, you can add an Nginx configuration on the VPS:

# fnos.xxxxx.com.conf
server {
    listen 80;
    server_name fnos.xxxxx.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl;
    server_name fnos.xxxxx.com;

    client_max_body_size 1024m;

    # Use literal certificate paths: variables here require nginx >= 1.15.9
    # and disable certificate caching
    ssl_certificate /etc/letsencrypt/live/fnos.xxxxx.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/fnos.xxxxx.com/privkey.pem;

    ssl_session_timeout 5m;

    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        # Feiniu service forwarded to this VPS port
        proxy_pass https://localhost:15667;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_set_header X-Forwarded-Proto https; # Explicitly set to https
        proxy_set_header X-Forwarded-Port 443;    # Set port
        proxy_set_header X-Forwarded-Host $host;
    }
}

Then stop the nginx service (certbot’s standalone mode needs port 80 free, which is why nginx must be stopped first) and request a certificate:

apt-get install certbot
certbot certonly --standalone -d fnos.xxxxx.com

Then, start the nginx service.

Note: For details on applying for SSL certificates and automatic renewal, refer to the article Deploying a self-hosted MEMOS note system#SSL Certificate.

At this point, you can access the fnos.xxxxx.com domain and reach Feiniu deployed in the PVE virtual machine.

Note: If a 502 error occurs when accessing the domain, you can check the Nginx logs.

tail -f /var/log/nginx/error.log

If it’s an SSL issue, you need to check the certificate permissions in the /etc/letsencrypt/ directory to ensure Nginx can read them.

ls -ld /etc/letsencrypt/

# o+rX grants others read access, plus traverse (execute) on directories only,
# without making regular files executable
sudo chmod -R o+rX /etc/letsencrypt/live
sudo chmod -R o+rX /etc/letsencrypt/archive

Then restart the Nginx service.

sudo nginx -s reload
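If the reload complains, nginx’s built-in configuration test pinpoints the offending line:

sudo nginx -t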

Access Feiniu via domain name, with HTTPS:

Summary

This article walked through the complete PVE + Virtual Machines + FRP Intranet Penetration + Nginx Reverse Proxy deployment, which safely exposes LAN services for access via a public domain.

Under my deployed service load, the CPU is idle most of the time and power consumption is very low:

While disassembling the laptop for the modifications, I also cleaned out the dust. So far the cooling seems under no real pressure, and it has been running stably for several days without anomalies; it now serves as a simple home Homelab server.

Giving old machines new life and turning idle hardware to good use is the essence of tinkering, but the joy of tinkering is often the biggest enemy of stability, so know when to stop: if it works, it works :).

That’s all for this article. If you have any questions, feel free to leave a comment.

