Install a certificate for the UniFi Controller

1. Request a certificate. I saved my certificate as unifi2020.crt and unifi2020.key.
2. Replace the certificate on the UniFi Controller:

openssl pkcs12 -export -inkey unifi2020.key -in unifi2020.crt -out unifi.p12 -name unifi  -password pass:temppass
keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/unifi/keystore -srckeystore unifi.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias unifi -noprompt
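
To confirm the import worked, you can list the keystore entry. This is just an optional sanity check, using the same keystore path and password as above:

keytool -list -keystore /var/lib/unifi/keystore -storepass aircontrolenterprise -alias unifi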

3. Restart UniFi:

/etc/init.d/unifi restart

Add Cloudflare Dynamic DNS support on the UniFi USG

It seems the UniFi USG still doesn't support Cloudflare Dynamic DNS, even though lots of users have voted for this feature.
In the past I used dnsomatic to update the Cloudflare DDNS records, but dnsomatic is not working anymore, so I spent some time finding a solution.

1. Create a config.gateway.json and put it on the UniFi Controller, then provision the USG.
If you don't know how to create the file, please refer to https://help.ubnt.com/hc/en-us/articles/215458888-UniFi-Advanced-USG-Configuration

{
	"service": {
		"dns": {
			"dynamic": {
				"interface": {
					"<WAN interface eg eth0>": {
						"service": {
							"cloudflare": {
								"host-name": [
									"<insert A record name here eg. usg.example.com>"
								],
								"login": "<CloudFlare E-Mail>",
								"options": [
									"zone=<DNS Zone eg. example.com>"
								],
								"password": "<CloudFlare Global API Key>",
								"protocol": "cloudflare"
							}
						}
					}
				}
			}
		}
	}
}
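
A malformed config.gateway.json will break provisioning, so it is worth validating the JSON before you provision the USG. This is just a quick sanity check, assuming Python is available on the controller host (jq would work equally well):

python -m json.tool config.gateway.json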

2. Upgrade ddclient on the USG to version 3.9.0.
Save the script below as a bash file:

#!/bin/bash
# Run this script as sudo

# Specify the repo and the location of the apt sources list
DEB_REPO="deb http://archive.debian.org/debian/ wheezy main # wheezy #"
APT_SRC="/etc/apt/sources.list"

# Add deb repo to sources list if it isn't there
grep -q -F "$DEB_REPO" "$APT_SRC" || echo "$DEB_REPO" >> "$APT_SRC"

# Run Apt update
apt-get update; apt-get -y install libdata-validate-ip-perl

# Download new ddclient and replace the existing version
cd /tmp
curl -L -O https://raw.githubusercontent.com/ddclient/ddclient/master/ddclient
cp /usr/sbin/ddclient /usr/sbin/ddclient.bkp
cp ddclient /usr/sbin/ddclient
chmod +x /usr/sbin/ddclient

Then chmod +x the file and run it as root. The script will install libdata-validate-ip-perl as well as ddclient 3.9.0.
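
For example (assuming you saved the script as upgrade-ddclient.sh; the file name itself doesn't matter):

chmod +x upgrade-ddclient.sh
sudo ./upgrade-ddclient.sh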

3. Since we are now using the standard ddclient, we have to create a copy of the old ddclient configuration file:

cd /etc/ddclient
cp ddclient_eth0.conf ddclient.conf

4. Then restart ddclient:

/etc/init.d/ddclient restart

5. Done. In the system log you should now see ddclient updating the Cloudflare DDNS records.
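
If the records are not updating, one way to debug is to run ddclient once in the foreground with verbose output, pointing it at the configuration file created in step 3 (the flags below are standard ddclient options):

sudo ddclient -file /etc/ddclient/ddclient.conf -daemon=0 -debug -verbose -noquiet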

Fix “[Error28] No space left on device” when upgrading ESXi to 6.7 U2

To upgrade ESXi to 6.7 U2, the command below is usually enough:

esxcli software profile update -p ESXi-6.7.0-20190402001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
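
If you are unsure which image profile name to use, you can list the profiles available in the depot first (an optional check; grep is available in the ESXi shell):

esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-6.7.0-2019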

But this time I got the error “[Error28] No space left on device”.

Someone said that enabling swap on an SSD can fix the issue, but I tried it and it didn't work.
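
Before trying anything else, it helps to see which volume or ramdisk is actually full. These are standard ESXi shell commands and only a quick look, not part of the fix:

# show ramdisk usage
vdf -h
# show bootbank / datastore usage
df -h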

The fix is to manually update the tools-light VIB first:

[root@host:~] cd /tmp
[root@host:/tmp] wget http://hostupdate.vmware.com/software/VUM/PRODUCTION/main/esx/vmw/vib20/tools-light/VMware_locker_tools-light_10.3.5.10430147-12986307.vib
[root@host:/tmp] esxcli software vib install -f -v /tmp/VMware_locker_tools-light_10.3.5.10430147-12986307.vib
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: VMware_locker_tools-light_10.3.5.10430147-12986307
   VIBs Removed: VMware_locker_tools-light_10.3.2.9925305-10176879
   VIBs Skipped:
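
To confirm the new locker VIB is in place before retrying the upgrade (an optional check):

esxcli software vib list | grep tools-light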

Then run the update again; you should be able to upgrade your ESXi host now.

Expand an AWS EC2 FreeBSD ZFS disk

For testing purposes, I set up a FreeBSD instance on AWS that uses ZFS on root.
I then added 10 GB of disk space to the root volume. Even though I enabled auto-expand for zroot, the extra 10 GB did not show up in the system. Here are the steps to expand the disk for zroot:

1. Reboot the server. Even though a reboot is said to be unnecessary, I suggest rebooting the server to make sure it recognizes the new size.

2. As the disk size changed, we need to fix the GPT partition table first:

gpart recover ada0
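
At this point you can inspect the partition layout and see the free space that was added (assuming the root disk shows up as ada0, as above):

gpart show ada0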

3. Some documents say that "zpool online -e" alone can expand the disk, but in my case the command could not automatically update the GPT and assign the new space to the ZFS partition.

4. We need to use gpart to update the GPT first, then expand the ZFS partition:

# gpart resize -i 2 ada0
# zpool online -e zroot /dev/ada0p2
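
Afterwards the pool should report the larger size; a quick way to confirm:

# zpool list zroot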

Do we still need to tune storage/SMB for PVS 7.* servers?

Recently I spent a lot of time looking into how to improve PVS 7.* server performance and reduce vDisk loading time.
There are lots of articles on the internet about tuning storage (both the storage server and the PVS server) to enable ‘oplocks’.
But I found that almost all of these articles are based on PVS 5.* and Windows 2003/2008.
So do we still need to tune storage/SMB for PVS servers?

Citrix has a good article that answers this question:

PVS Internals #4: vDisk Stores and SMB3

Just check the summary part:

Using file share storage for vDisks is still a valid and recommended approach, if sufficient capacity and high availability is in place. Especially in environments without redundant resp. highly available file services, I’d rather recommend local replicated vDisk stores instead of using just a single file server that definitely constitutes a single point of failure. And no one needs to invest in expensive file clustering solutions if its only purpose would be to provide file shares for vDisk storage.
Caching will help to significantly reduce the load on file servers or filers. Unlike earlier versions, there are no tuning requirements, neither for the PVS servers, nor for file servers. The defaults are perfectly fine. Sizing guidelines for PVS server memory (RAM) from past articles still apply.
Consider the caching behavior of the SMB redirector. There is a chance that shutting down all target devices connected to a particular vDisk will also remove the vDisk’s cache entries and the cache will need to be warmed up again on the next boot.
Leverage the latest SMB protocol if possible, at the time of writing this article it’s SMB 3.1.1. Not only Windows file servers support SMB3, but also modern filers such as NetApp, and even my home lab Synology is able to support SMB3.
Leasing is key for caching, so don’t use SMB 1.x or 2.0 (in fact you shouldn’t even enable the optional SMB 1.x feature on your PVS servers), and forget about any ‘oplock tuning’. Anything from SMB 2.1 onwards is fine with its defaults, but SMB 3.x brings added features that might be beneficial for you, such as ODX when taking copies of a vDisk file.

So it seems the only thing we need to do is disable SMB 1.0 on the PVS servers. How do we do that? Here is the link:

https://support.microsoft.com/en-us/help/2696547/detect-enable-disable-smbv1-smbv2-smbv3-in-windows-and-windows-server
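
For reference, that article boils down to a couple of PowerShell commands on the PVS servers. The lines below are only a sketch of the server-side options it describes; check the article for the variant that matches your Windows version:

# Check whether the SMB1 server protocol is currently enabled
Get-SmbServerConfiguration | Select EnableSMB1Protocol

# Disable the SMB1 server protocol (Windows Server 2012 and later)
Set-SmbServerConfiguration -EnableSMB1Protocol $false

# Optionally remove the SMB1 feature entirely (Windows Server 2012 R2 and later)
Uninstall-WindowsFeature -Name FS-SMB1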