SSL error when deploying controller: "Error: port is probably allocated, or ssl_key or ssl_cert or intermediate_cert is either missing or invalid."

iofog-controller

(xiang zhou) #1

Hi there,

I tried to deploy a remote controller and got the following errors:

✘ Error during SSH Session

{"level":"info","time":1618352535828,"pid":61400,"hostname":"compute01","msg":"Starting iofog-controller..."}
{"level":"error","time":1618352537846,"pid":61400,"hostname":"compute01","msg":"Error: port is probably allocated, or ssl_key or ssl_cert or intermediate_cert is either missing or invalid."}

I checked the documentation and found that the controller needs two ports: tcp:51121 and http:80. Using 'netstat | grep ' I could see that ports 51121 and 80 are not in use on my controller.
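
For reference, the check I ran was roughly the following (the exact grep pattern here is just an illustration, not from the docs):
sudo netstat -tulpn | grep -E ':(51121|80)[[:space:]]'
# or, if net-tools is not installed, ss gives the same information:
sudo ss -tulpn | grep -E ':(51121|80)[[:space:]]'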

Next I looked through the controller documentation and figured it might be complaining about missing SSL keys, so I created a key/cert pair following the example:
openssl req \
-newkey rsa:2048 -nodes -keyout iofog.key \
-x509 -days 365 -out iofog.crt
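
As a quick sanity check on the generated key/cert pair, something like this confirms that both files parse (standard openssl commands, nothing ioFog-specific):
openssl x509 -in iofog.crt -noout -subject -dates
openssl rsa -in iofog.key -check -noout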

I then redid the deploy, but it still failed with the same message as above.

Then I guessed I needed to configure the controller manually so it would know which key/cert to use, and then start it up by hand. So I followed the command from https://iofog.org/docs/1.0.0/getting-started/setup-your-controllers.html#creating-a-self-signed-certificate and ran it on my controller server, but it seems to be an invalid parameter:
sudo /usr/local/bin/iofog-controller config add --ssl-cert=./iofog.crt
{"level":"error","time":1618354084537,"pid":65183,"hostname":"compute01","msg":"Invalid argument 'ssl-cert=./iofog.crt'"}
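
In hindsight, the installed build's own help output is probably the quickest way to see which config flags it actually accepts (the exact help syntax may vary between versions):
sudo /usr/local/bin/iofog-controller help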

Can someone give me some guidance on what could be going wrong here? Thanks.

Xiang


(xiang zhou) #2

Hi there,

The following commands work for adding ssl_key/ssl_cert on the controller server:
xiangzhou@compute01:~$ sudo /usr/local/bin/iofog-controller config add -k ./iofog.key
Config option ssl-key has been set to ./iofog.key
xiangzhou@compute01:~$ sudo /usr/local/bin/iofog-controller config add -c ./iofog.crt
Config option ssl-cert has been set to ./iofog.crt
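
As a sanity check, the current values can be printed back out; I believe the CLI has a config list subcommand for this, though the exact name may differ by version:
xiangzhou@compute01:~$ sudo /usr/local/bin/iofog-controller config list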

Then I tried to start the controller and got the same error message:
xiangzhou@compute01:~$ sudo /usr/local/bin/iofog-controller start
{"level":"info","time":1618357779886,"pid":72375,"hostname":"compute01","msg":"Starting iofog-controller..."}
{"level":"error","time":1618357781904,"pid":72375,"hostname":"compute01","msg":"Error: port is probably allocated, or ssl_key or ssl_cert or intermediate_cert is either missing or invalid."}

Next I tried to switch iofog-controller into dev mode to bypass SSL, and then restarted the controller:
xiangzhou@compute01:~$ sudo /usr/local/bin/iofog-controller config dev-mode --on
Dev mode state updated successfully.
xiangzhou@compute01:~$ sudo /usr/local/bin/iofog-controller start
{"level":"info","time":1618357934425,"pid":72617,"hostname":"compute01","msg":"Starting iofog-controller..."}
{"level":"error","time":1618357936446,"pid":72617,"hostname":"compute01","msg":"Error: port is probably allocated, or ssl_key or ssl_cert or intermediate_cert is either missing or invalid."}

So it seems some port might indeed be allocated. How can I check that? Any suggestions?

Xiang


(Serge Radinovich) #3

Can you provide the output of commands like these?

sudo lsof -i -P -n | grep LISTEN
sudo netstat -tulpn | grep LISTEN

The port in question is 51121
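
If ss is available, an equivalent check narrowed to that single port (standard ss filter syntax) is:
sudo ss -tlnp '( sport = :51121 )'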


(xiang zhou) #4

Hi,

See below for the output; port 51121 is not listed there:
xiangzhou@compute01:/var/log/iofog-controller$ sudo netstat -tulpn | grep LISTEN | grep 51121
xiangzhou@compute01:/var/log/iofog-controller$ sudo lsof -i -P -n | grep LISTEN | grep 51121

xiangzhou@compute01:/var/log/iofog-controller$ sudo lsof -i -P -n | grep LISTEN
rpcbind 2233 root 8u IPv4 110860 0t0 TCP *:111 (LISTEN)
rpcbind 2233 root 11u IPv6 110863 0t0 TCP *:111 (LISTEN)
systemd-r 2706 systemd-resolve 13u IPv4 99373 0t0 TCP 127.0.0.53:53 (LISTEN)
rpc.mount 3053 root 9u IPv4 65773 0t0 TCP *:40541 (LISTEN)
rpc.mount 3053 root 11u IPv6 65777 0t0 TCP *:35495 (LISTEN)
rpc.mount 3053 root 13u IPv4 65781 0t0 TCP *:49543 (LISTEN)
rpc.mount 3053 root 15u IPv6 65785 0t0 TCP *:55651 (LISTEN)
rpc.mount 3053 root 17u IPv4 65789 0t0 TCP *:36075 (LISTEN)
rpc.mount 3053 root 19u IPv6 65793 0t0 TCP *:41419 (LISTEN)
xrdp-sesm 3633 root 7u IPv6 51325 0t0 TCP [::1]:3350 (LISTEN)
sshd 3901 root 3u IPv4 89433 0t0 TCP *:22 (LISTEN)
sshd 3901 root 4u IPv6 89435 0t0 TCP *:22 (LISTEN)
xrdp 4191 xrdp 11u IPv6 66862 0t0 TCP *:3389 (LISTEN)
prometheu 4896 root 3u IPv6 97643 0t0 TCP *:9090 (LISTEN)
mongod 4897 root 10u IPv4 97647 0t0 TCP 127.0.0.1:27017 (LISTEN)
minidlnad 6169 root 9u IPv4 117525 0t0 TCP *:8200 (LISTEN)
node 11512 root 16u IPv4 41440 0t0 TCP *:3000 (LISTEN)
rpc.statd 14700 statd 9u IPv4 1523584 0t0 TCP *:47239 (LISTEN)
rpc.statd 14700 statd 11u IPv6 1523588 0t0 TCP *:51463 (LISTEN)
cupsd 30203 root 6u IPv6 50008985 0t0 TCP [::1]:631 (LISTEN)
cupsd 30203 root 7u IPv4 50008986 0t0 TCP 127.0.0.1:631 (LISTEN)
mongod 45209 root 6u IPv4 36532819 0t0 TCP 127.0.0.1:27019 (LISTEN)
node 45415 root 22u IPv4 36547634 0t0 TCP *:8080 (LISTEN)
sshd 51732 craig 9u IPv6 46510620 0t0 TCP [::1]:6010 (LISTEN)
sshd 51732 craig 10u IPv4 46510621 0t0 TCP 127.0.0.1:6010 (LISTEN)
mosquitto 67880 root 4u IPv4 40993181 0t0 TCP 127.0.0.1:1883 (LISTEN)
mosquitto 67880 root 5u IPv6 40993182 0t0 TCP [::1]:1883 (LISTEN)
httpd 71545 root 4u IPv6 46673602 0t0 TCP *:80 (LISTEN)
httpd 71546 root 4u IPv6 46673602 0t0 TCP *:80 (LISTEN)
httpd 71547 root 4u IPv6 46673602 0t0 TCP *:80 (LISTEN)
python3 75486 root 4u IPv4 36255537 0t0 TCP 127.0.0.1:8085 (LISTEN)
sshd 90913 craig 9u IPv6 52652363 0t0 TCP [::1]:6011 (LISTEN)
sshd 90913 craig 10u IPv4 52652364 0t0 TCP 127.0.0.1:6011 (LISTEN)
sshd 91201 craig 9u IPv6 52714637 0t0 TCP [::1]:6017 (LISTEN)
sshd 91201 craig 10u IPv4 52714638 0t0 TCP 127.0.0.1:6017 (LISTEN)
sshd 97902 craig 9u IPv6 52722033 0t0 TCP [::1]:6012 (LISTEN)
sshd 97902 craig 10u IPv4 52722034 0t0 TCP 127.0.0.1:6012 (LISTEN)
httpd 106125 root 4u IPv6 46673602 0t0 TCP *:80 (LISTEN)

xiangzhou@compute01:/var/log/iofog-controller$ sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:47239 0.0.0.0:* LISTEN 14700/rpc.statd
tcp 0 0 0.0.0.0:49543 0.0.0.0:* LISTEN 3053/rpc.mountd
tcp 0 0 0.0.0.0:8200 0.0.0.0:* LISTEN 6169/minidlnad
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 4897/mongod
tcp 0 0 127.0.0.1:27019 0.0.0.0:* LISTEN 45209/mongod
tcp 0 0 0.0.0.0:36075 0.0.0.0:* LISTEN 3053/rpc.mountd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 2233/rpcbind
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 45415/node
tcp 0 0 127.0.0.1:8085 0.0.0.0:* LISTEN 75486/python3
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 2706/systemd-resolv
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3901/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 30203/cupsd
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 11512/node
tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 51732/sshd: craig@p
tcp 0 0 127.0.0.1:6011 0.0.0.0:* LISTEN 90913/sshd: craig@p
tcp 0 0 127.0.0.1:1883 0.0.0.0:* LISTEN 67880/mosquitto
tcp 0 0 0.0.0.0:44795 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:6012 0.0.0.0:* LISTEN 97902/sshd: craig@p
tcp 0 0 0.0.0.0:40541 0.0.0.0:* LISTEN 3053/rpc.mountd
tcp 0 0 127.0.0.1:6017 0.0.0.0:* LISTEN 91201/sshd: craig@p
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN -
tcp6 0 0 :::55651 :::* LISTEN 3053/rpc.mountd
tcp6 0 0 :::46245 :::* LISTEN -
tcp6 0 0 :::51463 :::* LISTEN 14700/rpc.statd
tcp6 0 0 :::35495 :::* LISTEN 3053/rpc.mountd
tcp6 0 0 :::41419 :::* LISTEN 3053/rpc.mountd
tcp6 0 0 :::111 :::* LISTEN 2233/rpcbind
tcp6 0 0 :::80 :::* LISTEN 71545/httpd
tcp6 0 0 :::22 :::* LISTEN 3901/sshd
tcp6 0 0 ::1:3350 :::* LISTEN 3633/xrdp-sesman
tcp6 0 0 ::1:631 :::* LISTEN 30203/cupsd
tcp6 0 0 ::1:6010 :::* LISTEN 51732/sshd: craig@p
tcp6 0 0 ::1:6011 :::* LISTEN 90913/sshd: craig@p
tcp6 0 0 ::1:1883 :::* LISTEN 67880/mosquitto
tcp6 0 0 ::1:6012 :::* LISTEN 97902/sshd: craig@p
tcp6 0 0 :::3389 :::* LISTEN 4191/xrdp
tcp6 0 0 ::1:6017 :::* LISTEN 91201/sshd: craig@p
tcp6 0 0 :::2049 :::* LISTEN -
tcp6 0 0 :::9090 :::* LISTEN 4896/prometheus

I checked the log files under /var/log/iofog-controller, and there are only error-level logs there. Is there a way to tune the log level to get more detail (for example, info)?

Xiang


(xiang zhou) #5

Hi There,

I'm guessing that the remote server (for the controller) being connected via VPN may be causing the issue. So today I installed a VM (ip: 192.168.56.101) on my local desktop (ip: 192.168.56.1), and what I found is:

  1. Install iofogctl on the VM and deploy the controller to my desktop: failed. It seems it tried to install an agent rather than a controller.
  2. Install iofogctl on the desktop and deploy the controller on the VM: successful.

So two questions remaining:

  1. Is there any particular setup required when deploying a controller via VPN?
  2. Why does iofogctl try to install an agent when the yaml is configured as ControlPlane (roughly the layout sketched below)?
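
For reference, my controller.yaml follows the documented ControlPlane layout, roughly like this (the names, credentials and SSH details here are placeholders, and the exact field names may differ slightly on other iofogctl versions):
apiVersion: iofog.org/v2
kind: ControlPlane
metadata:
  name: ecn
spec:
  iofogUser:
    name: Xiang
    surname: Zhou
    email: user@example.com
    password: changeme
  controllers:
    - name: controller-1
      host: 192.168.56.101
      ssh:
        user: xiangzhou
        keyFile: ~/.ssh/id_rsa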

Xiang


(Serge Radinovich) #6

iofogctl deploys an Agent on the Controller host for the purposes of managing system-level microservices such as routers.

As long as iofogctl has SSH access to the Controller you are deploying to, there should not be any issues.


(xiang zhou) #7

Thanks for explaining why an agent would be installed during controller installation.

When I looked in detail at what went wrong during the agent installation (within the controller installation), it showed the error in the attached snippet.

And at the end of the message, it says:

Any suggestions?

Xiang


(xiang zhou) #8

Hi All,

After googling for a while, I found the solution here: https://stackoverflow.com/questions/62028180/ubuntu-19-04-error-404-not-found-ip-91-189-95-83-80-error-on-apt-update. After I disabled (i.e. unchecked) the updates for the Ubuntu eoan release in the Software & Updates app -> Other Software and re-ran the deploy of controller.yaml, it successfully installed the controller.
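
For anyone hitting this on a headless server, the command-line equivalent is roughly the following (the sed pattern is my own illustration; back up the files first and adjust the paths if your eoan entries live elsewhere):
# comment out any apt sources still pointing at the retired eoan release, then refresh
sudo sed -i.bak '/eoan/s/^/#/' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
sudo apt-get update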

Thanks for the support. I have now successfully installed the controller/agent and can move on to the next steps of the trial.

Xiang


(Serge Radinovich) #9

Please feel free to join the Eclipse ioFog Slack channel: https://join.slack.com/t/iofog/shared_invite/zt-6wkd9cqv-ShFrgG2piftw2YxHpQhJLw

Also please note that we run all our production (HA) Control Planes on Kubernetes. You can deploy on Kubernetes via iofogctl. You can also use our managed service here: https://v2.caas.edgeworx.io/

The managed service is in the testing phase of v2, so your feedback would be appreciated.