I have a 10-year-old laptop that had been sitting idle for several years. Recently I tidied it up, refurbished it, and installed Proxmox directly on the machine to turn it into a simple homelab server.
It runs Ubuntu and FnOS virtual machines; combined with intranet penetration, it can serve as an always-online VPS or host a NAS system for backing up files and photos.
This article will introduce the complete deployment solution for PVE + Virtual Machines + FRP intranet penetration + Nginx reverse proxy, which can securely expose local area network services to a public domain name for access.
Foreword
Proxmox Virtual Environment (PVE) is an open-source virtualization platform based on KVM. The latest version, 9.1, is built on Debian 13; it is free to use and has a fairly active community.
The reason I did not install a single system on bare metal is that a virtualization layer underneath makes it much easier to manage, switch between, and run systems in parallel (though, given the old laptop's performance, not too many at once). If a virtual machine breaks, I can simply delete and rebuild it, or restore a snapshot in PVE, which is very convenient; it also lets me freely test which NAS system suits me best.
Moreover, with hardware virtualization support enabled, the performance loss is small (KVM CPU overhead is usually around 2-5%), and hardware can be passed through to virtual machines to reduce it further.
Hardware Modification
My laptop is an ASUS P453UJ with an i7-6500U CPU, 8 GB of DDR4 RAM, and a 500 GB HDD. In terms of expandability, it has only one SATA port (occupied by the HDD) and an optical drive bay, so two hard drives cannot be installed directly.
Although it only has 8 GB of RAM, memory prices are absurdly inflated right now, so I don't plan to expand it. 8 GB is enough for the services I want to run, and I also added 8 GB of swap space for PVE.
Optical Drive Bay to Hard Drive Bay
It's 2025 now. The original HDD had Win10 installed and took several minutes from boot to being usable, making me question life. So the first step had to be replacing it with an SSD. The optical drive is also useless, so I removed it and bought a caddy that converts the optical bay into a 2.5-inch SATA slot, which costs only a few to a dozen yuan.

Additionally, I happened to have a 256 GB M.2 (NGFF) SSD on hand, which could hold the system. However, since the laptop has no M.2 slot, it cannot be installed directly, so I also bought an M.2 NGFF to 2.5-inch SATA enclosure.
That assembly then goes into the optical drive bay, leaving the original SATA port free for a separate data drive.
Network Card Upgrade
This laptop's built-in wireless card is a Qualcomm QCNFA435, a low-to-mid-range part that only supports Wi-Fi 5 and 1x1 MIMO, with a maximum link rate of just 433 Mbps.
This specification can no longer fully utilize bandwidth; it’s like an old ox pulling a broken cart.
```bash
root@homelab:~# iw dev wlp3s0 link
```
The rx bitrate was a pathetic 130 Mbit/s!
So I upgraded it to an Intel AX200, which supports Wi-Fi 6 and 2x2 MIMO with a maximum rate of 2.4 Gbps, far beyond the QCNFA435. It currently costs around 40-50 yuan.
Even under my old Wi-Fi 5 router, the AX200 still delivers a significant speed boost:
```bash
root@homelab:~# iw dev wlp3s0 link
```
Replace Thermal Paste / Clean Dust
As an old computer, it had never been opened for dust cleaning before. During this modification, I also replaced the thermal paste and cleaned the fan.
Note: When cleaning a laptop fan, don't just wipe the fan blades. You also need to open up the heatsink fins at the air outlet and gently clean out the dust; when I took mine apart, the outlet was completely clogged with lint.

Additionally, instead of traditional thermal paste, I bought a Honeywell PTM7950 phase-change pad for a few yuan, which can be used on both the CPU and GPU. Since laptop CPU and GPU dies are not much larger than a fingernail, one 31x50 mm sheet is enough for two or three applications.
Cooling result: after 2 hours of full CPU load, the temperature stays below 60°C:
Install PVE
Note: Before installing PVE, confirm that your CPU supports virtualization and enable it in the BIOS.
Intel: set Advanced > CPU Configuration > Intel Virtualization Technology to Enabled.
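If you want to double-check from an existing Linux system first, you can count the virtualization flags in /proc/cpuinfo (a quick sanity check, not part of the original steps):

```bash
# A non-zero count means the CPU exposes VT-x (vmx) or AMD-V (svm)
grep -Ec '(vmx|svm)' /proc/cpuinfo
```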
Then download the PVE official image: Downloads
Find a USB drive and write the image to it. I recommend Ventoy: the drive still works as ordinary storage, and you can drop multiple ISOs onto it and pick one at boot, so there is no need to re-burn it for each image. It is similar to FbinstTool from years past, only more convenient.
Also, when installing PVE it's best to have the machine plugged into Ethernet. The default setup uses a bridged network, so right after installation the machine is reachable on the LAN.
After a successful installation, run screenfetch:

By default, PVE ships without an X environment; it is managed from a browser on another device on the LAN via its IP, on port 8006 by default.

PVE Optimization/Configuration
Connect to a Wi-Fi Network
The laptop has a built-in wireless card, and I don't want the machine tethered to an Ethernet cable, so I want it to connect over Wi-Fi. That way only power needs to be run to it, and it can be placed anywhere.
Network Configuration
Below is how to connect PVE to Wi-Fi and give the virtual machines NAT networking.
- Run ip a to find the wireless NIC's device name; mine is wlp3s0.

- Install the Wi-Fi related packages:

```bash
apt install -y wpasupplicant iw wireless-tools
```
- Configure the Wi-Fi networks to connect to (multiple can be configured) in /etc/wpa_supplicant/wpa_supplicant.conf:

```
ctrl_interface=/run/wpa_supplicant
```
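Only the first line of the file survives above; a minimal working example (the SSID and password are placeholders to replace with your own) could look like this:

```
# /etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=/run/wpa_supplicant
update_config=1

network={
    ssid="MyHomeWifi"        # placeholder SSID
    psk="MyWifiPassword"     # placeholder password
}
```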
- Edit /etc/network/interfaces and modify the Wi-Fi NIC configuration. Note that the device name wlp3s0 should be replaced with your own:

```
allow-hotplug wlp3s0
```
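The stanza is truncated above; a typical version of it for this step (the dhcp choice assumes the home router hands out the address, adjust as needed) would be roughly:

```
# /etc/network/interfaces (Wi-Fi section)
allow-hotplug wlp3s0
iface wlp3s0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```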
- Bring it up with ifup wlp3s0. At this point, ip a should show the Wi-Fi card up and holding an address.
- Enable IP forwarding by editing /etc/sysctl.conf:
  - Uncomment #net.ipv6.conf.all.forwarding=1 and save.
  - Uncomment #net.ipv4.ip_forward=1 and save.

Note: The latest version of PVE (9.1) has no /etc/sysctl.conf file; instead, create a .conf file in the /etc/sysctl.d directory.
```bash
root@homelab:/etc/sysctl.d# cat pve.conf
```
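Based on the two forwarding switches above, the contents of that file would be:

```
# /etc/sysctl.d/pve.conf
net.ipv6.conf.all.forwarding=1
net.ipv4.ip_forward=1
```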
- Modify /etc/network/interfaces to add NAT rules for vmbr0:

```
auto vmbr0
```
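Only the first line is shown; a common NAT-mode vmbr0 stanza (the 10.10.10.0/24 subnet matches the VM addresses used later in this article, the rest is a sketch to adapt) looks roughly like this:

```
auto vmbr0
iface vmbr0 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    # NAT: masquerade VM traffic out through the Wi-Fi interface
    post-up   iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o wlp3s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o wlp3s0 -j MASQUERADE
```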
- Restart the network; PVE should now connect to Wi-Fi and be reachable from the local network.

```bash
systemctl restart networking
```
Reference article: PVE Using Wireless Network Card to Connect to Wi-Fi Steps
DHCP
Although PVE is now on Wi-Fi, virtual machines still have no network access, because the configuration above switched PVE's networking to NAT mode (ordinary consumer wireless cards do not support bridging). A DHCP service also needs to run on PVE to hand out IPs to the virtual machines.
Install isc-dhcp-server:
```bash
apt install isc-dhcp-server
```
Edit /etc/dhcp/dhcpd.conf:
```
# Define the subnet, must be consistent with vmbr0's IP range
```
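The file is truncated above; a minimal configuration matching the 10.10.10.0/24 bridge used here (the range and DNS servers are assumptions to adjust) might be:

```
# /etc/dhcp/dhcpd.conf
# Define the subnet; must be consistent with vmbr0's IP range
subnet 10.10.10.0 netmask 255.255.255.0 {
    range 10.10.10.100 10.10.10.200;                      # addresses handed out to VMs
    option routers 10.10.10.1;                            # vmbr0's address (the gateway)
    option domain-name-servers 223.5.5.5, 119.29.29.29;   # any resolvers you prefer
}
```

You may also need to point the daemon at the bridge in /etc/default/isc-dhcp-server (INTERFACESv4="vmbr0").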
Then start dhcp-server:
```bash
systemctl restart isc-dhcp-server
```
At this point, virtual machines should be able to get IPs and connect to the network normally.
My complete /etc/network/interfaces is as follows:

```
auto lo
```
NAT Port Forwarding
Because we used NAT networking when connecting to Wi-Fi, the virtual machines in PVE are isolated from the physical network, so their services cannot be reached directly via ip:port.
If you want to access them directly via the local area network, you also need to configure iptables for port forwarding.
To make future expansion easy, a post-up script can be added to /etc/network/interfaces:

```
auto vmbr0
```
In the future, when you want to add a new port forward, you only need to modify that script (a sketch of it follows the example below).
The purpose is to map the virtual machine’s port to the host’s port. For example, map the virtual machine (10.10.10.100:22) to the host’s 10022 port:
```bash
iptables -t nat -A PREROUTING -i $WIFI_IF -p tcp --dport 10022 -j DNAT --to 10.10.10.100:22
```
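Putting it together, the forwarding script referenced above might look like this (the script path is a placeholder; only the $WIFI_IF variable and the 10022 -> 10.10.10.100:22 rule come from the article):

```bash
#!/bin/bash
# e.g. /root/nat-port-forward.sh, called from a post-up line in /etc/network/interfaces
WIFI_IF=wlp3s0    # the physical (Wi-Fi) interface facing the home LAN

# Forward host port 10022 to the VM's SSH port
iptables -t nat -A PREROUTING -i $WIFI_IF -p tcp --dport 10022 -j DNAT --to 10.10.10.100:22

# Add more rules below, one line per forwarded port
```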
Execute the script and the rules take effect. Because /etc/network/interfaces was modified earlier, the script runs every time the networking service starts, so there is no need to worry about the rules being lost after a reboot.
Access method: If the host’s IP on the physical router is 192.168.1.123, then you can access 192.168.1.123:10022 to reach the virtual machine (10.10.10.100:22).
Wi-Fi Full Power Operation
By default, wireless cards run in a power-saving mode, periodically dozing off and only waking when packets need to be sent or received. For a device that must stay online continuously, I want maximum network stability and performance.
So, the network card can be set to operate at full power by default:
```bash
# View network card list, find the WLAN network card, mine is wlp3s0
```
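The block above is truncated; the usual iw commands for this (how you persist the setting across reboots, e.g. via a post-up line, is up to you) are:

```bash
# List wireless interfaces and find the WLAN card (mine is wlp3s0)
iw dev

# Check the current power-save state
iw dev wlp3s0 get power_save

# Disable power saving so the card runs at full power
iw dev wlp3s0 set power_save off
```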
Wired Network Configuration
If you want to run on a wired connection instead, still with the same NAT topology, then /etc/network/interfaces would be:

```
auto lo
```
Note: The iptables port-forwarding rules also need to be adjusted to match.
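Concretely, that means swapping the interface in the NAT rules from the Wi-Fi card to the wired one (enp2s0 here is an assumed device name):

```bash
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o enp2s0 -j MASQUERADE
iptables -t nat -A PREROUTING  -i enp2s0 -p tcp --dport 10022 -j DNAT --to 10.10.10.100:22
```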
One-Click Optimization Script
There are one-click optimization scripts for PVE that configure passthrough, CPU/disk/temperature display, source changes and removal of the subscription nag, and CPU turbo mode in one go.
They are very convenient and recommended, for example pve-diy:
```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/xiangfeidexiaohuo/pve-diy/master/pve.sh)"
```
You can choose to configure as needed.
Install X Environment
Because PVE has no X environment, the machine itself only offers a terminal, yet PVE is managed through a web UI, which is inconvenient when there is no network or no second machine at hand.
But PVE is based on Debian, so we can naturally install an X environment on it, and the official documentation also provides a method for installing xfce: Developer_Workstations_with_Proxmox_VE_and_X11
Simply put, it’s just a few lines of commands:
```bash
apt update && apt dist-upgrade
```
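Only the first command is visible above; based on my reading of the linked wiki page, the remaining steps are roughly the following (verify against the page itself):

```bash
# Install the Xfce desktop, the Chromium browser, and the lightdm display manager
apt install xfce4 chromium lightdm
# The wiki also recommends a regular (non-root) user to log into the desktop with (name is arbitrary)
adduser pveuser
# Start the display manager
systemctl start lightdm
```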
This way, each boot lands in Xfce, and with Chromium installed PVE can be managed locally at https://localhost:8006.
Disable Lid-Close Sleep
By default a laptop suspends when the lid is closed; this must be disabled, otherwise the server becomes unreachable as soon as the lid is shut.
Edit /etc/systemd/logind.conf, uncomment the following three lines and set them to ignore.
```
# Uncharged state
```
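The block is truncated above; the three lid-switch options in logind.conf that control this (on battery, on external power, and when docked) are:

```
HandleLidSwitch=ignore
HandleLidSwitchExternalPower=ignore
HandleLidSwitchDocked=ignore
```

After saving, systemctl restart systemd-logind applies the change.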
Built-in Battery as UPS
Since the laptop itself comes with a battery, when the power is cut off, the laptop’s built-in battery can act as a simple UPS.
Note: I believe the greatest value of a UPS is to ensure the system shuts down normally after an abnormal power loss, preventing data loss and hardware abnormalities due to power failure.
Install upower and set actions:
```bash
vim /etc/UPower/UPower.conf
```
```
[UPower]
```
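The config is truncated above; to shut down once the battery falls to 30%, the relevant UPower.conf keys would be along these lines (the 30% threshold matches what is described below, the other values are my assumptions):

```
[UPower]
# Act on a percentage threshold rather than estimated time remaining
UsePercentageForPolicy=true
PercentageLow=50
PercentageCritical=40
# When the battery reaches this level, take the critical action
PercentageAction=30
# The action to take: shut the machine down
CriticalPowerAction=PowerOff
```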
Then enable the upower service and set it to auto-start on boot:
```bash
root@homelab:~# systemctl start upower
root@homelab:~# systemctl enable upower
```
This way, when the laptop loses power and the battery capacity drops below 30%, it will automatically trigger a shutdown, ensuring that PVE and the virtual machine environment can exit normally.
Virtual Machine Installation
I run two systems on PVE:
- Ubuntu
- FnOS: Feiniu recently released its official version; it is a free domestic NAS system with a fairly active community.
You can directly download the image by entering the URL on PVE:
When creating a virtual machine, specify the image. You can see the display and operate in the console of each virtual machine:
In a NAT network, to access services within a virtual machine, you need to use iptables for port forwarding, as detailed in “NAT Port Forwarding” above.
Hard Drive Passthrough
This laptop now has two drives: a 256 GB SSD for the system and virtual machines, and a 4 TB hard drive dedicated to data storage, which I passed through to FnOS.
You can view the current hard drive information using ls /dev/disk/by-id:
Record the ID of the hard drive you want to pass through, then pass it to the specified virtual machine using the following command:
```bash
# qm set {VM_ID} -scsi2 /dev/disk/by-id/{DISK_ID}
```
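For example, if the NAS VM has ID 101 and the data disk appears under the ID shown below (both values are hypothetical, substitute your own), the command would be:

```bash
# Attach the whole physical disk to VM 101 as its scsi2 device
qm set 101 -scsi2 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
```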
Then, in the virtual machine’s hardware settings, you can see this hard drive:
iGPU GVT-g Passthrough
In Feiniu, if the graphics card is passed through, it can be used to accelerate AI computations for photo libraries and video decoding.
PVE Configuration
The CPU in my old laptop is an i7-6500U, a Skylake part that supports GVT-g; Intel 6th to 10th generation CPUs should all be compatible.
Edit /etc/default/grub, modify the GRUB_CMDLINE_LINUX_DEFAULT parameter:
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"
```
Then regenerate the initramfs:

```bash
update-initramfs -u -k all
```
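Note that the kernel command line change itself is applied by regenerating the bootloader config; depending on how your PVE install boots, one of the following is usually also needed:

```bash
# Standard GRUB-booted systems
update-grub
# Or, on installs managed by proxmox-boot-tool
proxmox-boot-tool refresh
```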
You also need to load the necessary kernel modules. Edit /etc/modules and add the following content:
```
vfio
```
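Only the first module name is visible above; GVT-g guides for PVE typically load this set (treat it as a reference, the exact list can vary with kernel version):

```
# /etc/modules
vfio
vfio_iommu_type1
vfio_pci
kvmgt
```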
Then restart PVE to check if the GRUB boot parameters have taken effect:
```bash
# dmesg | grep "Command line"
```
And whether the kernel module kvmgt is loaded:
```bash
root@homelab:~# lsmod | grep -i kvmgt
```
At this point, in PVE, first shut down the virtual machine. Edit the virtual machine’s hardware, add a PCI device, and select the iGPU:
In the MDev Type dropdown you can see the GVT-g options:
Select an available one, add it, and then start the virtual machine.
Virtual Machine Configuration
SSH into the Feiniu virtual machine, edit /etc/modprobe.d/i915.conf, and comment out the following line:

```
# options i915 enable_guc=3
```
Note: This option enables the GuC (graphics scheduling microcontroller) and HuC (HEVC/H.265 media microcontroller) firmware features of Intel graphics.
However, it conflicts with GVT-g, preventing Feiniu from using the iGPU, so it needs to be commented out, then restart the virtual machine.
Apply changes:
```bash
update-initramfs -u -k all
```
Open the Feiniu App Center, search for and install the i915-sriov-dkms driver.
Restart the virtual machine, and you should see the iGPU recognized normally:
Open the photo album, modify AI album settings, and enable GPU computing:
Note: After using GVT-g for a virtual machine, the VNC display within PVE will no longer output, and configuration can only be done via SSH.
Intranet Penetration
In previous articles, I detailed how to deploy an intranet penetration service on a VPS: Using frp for Intranet Penetration
Similarly, virtual machines running on PVE and PVE itself can use frpc to connect to a public VPS for intranet penetration services.
The way I run it is a bit different now, though. Recently I have gradually been packaging dependent services into Docker images and running them as containers across machines, which gives a unified execution environment and leaves only the configuration to maintain.
So, I packaged frp into a docker image for easy deployment in various environments: frp-docker
You can use the docker_builder in the repository to build your own images, or use the prebuilt version (currently 0.65.0), which supports the amd64 and arm64 architectures and both frpc and frps.
After importing the image, you can start it directly, only needing to specify two parameters:
- MODE: frpc or frps
- ARCH: amd64 or arm64
Docker run to start:
```bash
# frpc
```

```bash
# frps
```
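The two commands above are truncated; based on the MODE/ARCH parameters described earlier, they would look roughly like this (the image name, config paths, and host-network choice are my assumptions; check the frp-docker README for the exact invocation):

```bash
# frpc (on the PVE host / LAN side), hypothetical image name and config path
docker run -d --name frpc --restart unless-stopped \
  --network host \
  -e MODE=frpc -e ARCH=amd64 \
  -v /opt/frp/frpc.toml:/frp/frpc.toml \
  frp-docker:latest

# frps (on the public VPS), same assumptions apply
docker run -d --name frps --restart unless-stopped \
  --network host \
  -e MODE=frps -e ARCH=amd64 \
  -v /opt/frp/frps.toml:/frp/frps.toml \
  frp-docker:latest
```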
This makes it very simple to run intranet penetration, only needing to maintain the frpc/frps configuration files.
If you add a newly forwarded port, you need to restart the container.
Note: It is best not to expose the ports forwarded from the intranet directly to the public internet on the VPS; access them through Nginx + HTTPS + authentication instead.
You can set proxyBindAddr = "127.0.0.1" in the frps configuration to force the forwarded ports to bind to localhost, preventing direct public access.
Nginx Reverse Proxy
Once the service ports from the PVE virtual machines are forwarded to the VPS via frp, we can also bind a domain name through Nginx and expose the LAN service to the public internet under that domain.
Taking FnOS as an example, I created a subdomain on Cloudflare, fnos.xxx.com, resolving to the VPS's IP.
Then, on the VPS, you can add an Nginx configuration:
```nginx
# fnos.xxxxx.com.conf
```
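The config file is truncated above; a typical reverse-proxy server block for this setup (port 8001 stands in for whatever local port frps exposes the FnOS web UI on, and the certificate paths assume certbot defaults) looks something like this:

```nginx
# fnos.xxxxx.com.conf
server {
    listen 443 ssl;
    server_name fnos.xxxxx.com;

    ssl_certificate     /etc/letsencrypt/live/fnos.xxxxx.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/fnos.xxxxx.com/privkey.pem;

    location / {
        # 8001 is an assumed frps-forwarded port for the FnOS web UI
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # WebSocket support, which NAS web UIs generally need
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

server {
    listen 80;
    server_name fnos.xxxxx.com;
    return 301 https://$host$request_uri;
}
```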
Then stop the nginx service and apply for a certificate:
```bash
apt-get install certbot
```
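The block above only shows the install; with Nginx stopped, the usual standalone issuance for the example domain is:

```bash
# Stop nginx so certbot's temporary server can bind to port 80
systemctl stop nginx
# Request a certificate for the subdomain
certbot certonly --standalone -d fnos.xxxxx.com
```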
Then start the nginx service.
Note: For applying for SSL certificates and automatic renewal, refer to the article Deploy a Self-Hosted MEMOS Note System #SSL Certificate.
Now you can access the fnos.xxxxx.com domain and reach Feiniu deployed in the PVE virtual machine.
Note: If a 502 error occurs when accessing the domain, you can check the nginx logs.
```bash
tail -f /var/log/nginx/error.log
```
If it’s an SSL issue, you need to check the certificate permissions in the /etc/letsencrypt/ directory to ensure Nginx can read them.
```bash
ls -ld /etc/letsencrypt/
```
Then reload the nginx service.
```bash
sudo nginx -s reload
```
Access Feiniu via domain name, with HTTPS:
Summary
The complete deployment solution introduced in the article covers PVE + Virtual Machines + FRP Intranet Penetration + Nginx Reverse Proxy, which can securely expose local area network services to a public domain name for access.
Under my deployed service load, the CPU is mostly idle and power consumption is very low:

During the disassembly and modification I also cleaned out the dust; so far the thermal pressure seems low, and it has been running stably for several days without issues, serving nicely as a simple homelab server.
Giving old machines a new life is the whole point of tinkering, but the joy of tinkering is often the biggest enemy of stability, so know when to stop: if it works, it works :)
Update Log
- 2025-12-26: Added content on cleaning dust/replacing thermal paste, and thermal performance data.
- 2025-12-12: Added content on iGPU GVT-g passthrough.
- 2025-12-09: Added comparison of network speeds before and after network card replacement, and wired network configuration.
- 2025-12-04: Added Wifi network card full power operation.