Issue with the installation of iofogctl v1.3.0 on a Raspberry Pi


(ZHOU) #1

Hello guys,
I can’t install iofogctl 1.3.0 on a Raspberry Pi, either via the bootstrap process or by following the GitHub instructions.


The Raspberry Pi:
Linux raspberrypi 4.14.98-v7+ #1200 SMP Tue Feb 12 20:27:48 GMT 2019 armv7l GNU/Linux

I’m not sure whether this latest version is supported on the Raspberry Pi.
I hope someone can help me. Thanks!


(Serge Radinovich) #2

We don’t support iofogctl on RPI at the moment. We currently support Mac and Linux (Debian 9/10 or Ubuntu Xenial/Bionic recommended).

iofogctl is intended to be used from a workstation that has remote access to your edge devices and control plane. If you are familiar with Ansible, it is a similar paradigm; when deploying the ioFog platform, iofogctl will push to remote devices over SSH.

I highly recommend following this guide if you would like to deploy an Edge Compute Network which includes your RPI: https://iofog.org/docs/1.3.0/remote-deployment/introduction.html


(ZHOU) #3

Hi SergeRadinovich,

Thank you for your help. If I want to add an RPI to my ECN, do I need to install the Agent (https://github.com/eclipse-iofog/Agent) on my RPI manually, or can I remotely deploy the Agent to the RPI through iofogctl installed on another platform (e.g. a Linux VM)?


(Ian Martin) #4

You can deploy to it, so long as you have a valid SSH key from your host machine in the ~/.ssh/authorized_keys file on the RPI. Simply list the private key path in the keyFile field under the ssh details, and iofogctl will automatically deploy it as an Agent and link it into your ECN, assuming you already have a Controller/Control Plane deployed.

You should be able to see an example here: https://iofog.org/docs/1.3.0/iofogctl/agent-config-yaml-spec.html along with a list of many of the attributes you can specify.
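For reference, a minimal Agent YAML might look something like the sketch below (the name, host IP, user, and key path are placeholders for your own setup):

apiVersion: iofog.org/v1
kind: Agent
metadata:
  name: agent-pi
spec:
  host: 192.168.12.146
  ssh:
    user: pi
    keyFile: ~/.ssh/id_rsa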


(ZHOU) #5

Hello Ian,
My network environment is quite complicated: the RPI sits behind a private network, which means the host I intended to run iofogctl on can’t access the RPI, but the Controller running on that host is deployed on a public network. Do you have any suggestions for how to add this RPI to the ECN?
Also, is supporting iofogctl on the RPI on ioFog’s roadmap?

Thanks in advance.

Rui


(Serge Radinovich) #6

In the screenshot you provided above you show a shell session on the RPI.

Was this a local session (i.e. you hooked up a display to the RPI) or a remote session over SSH?

If it’s a local session, where is the machine you intend to run iofogctl on? And why do you think it can’t connect to the RPI’s external IP?

Ultimately, your edge machines need to be accessible over SSH from any machine you intend to run iofogctl from.
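A quick sanity check is to try SSHing from that machine to the RPI directly, e.g. (the user and IP here are just placeholders):

$> ssh -i ~/.ssh/id_rsa pi@<RPI_IP>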

We are in talks about supporting iofogctl on Raspbian. I don’t think we would ever recommend deploying the iofog platform from the edge, however.


(ZHOU) #7

Hi SergeRadinovich,

For my testbed:
Firstly, I installed iofogctl and locally deployed a Controller and Agent on a VM (192.168.12.202).


Then I tried to remotely deploy an Agent to an RPI (192.168.12.146), which had already been set up per the instructions: https://iofog.org/docs/1.3.0/remote-deployment/prepare-your-remote-hosts.html
Here is my agent.yaml file:

But when I deployed it with iofogctl deploy -f /tmp/agent.yaml, I got the following error:

And here is the output when I check the Agent status:

As the error message shows, it seems the Agent (on the RPI) can’t connect to the Controller (locally deployed on the VM).


(Serge Radinovich) #8

The issue is that, when you perform a local deployment (as you did in your first step), there is no guarantee that the Controller you deploy has an external IP. This means that if you try to deploy a non-local Agent with the local Controller, the Agent won’t be able to connect to the Controller.

We did add some support to avoid this issue but I am not sure to what extent this made it into 1.3.0 and how to use it.

I will talk to the team and we will get back to you.

Otherwise, you can try doing a full remote deployment, in which case you provide the external IPs of the Controller and Connector in the YAML files under the host field. E.g.

apiVersion: iofog.org/v1
kind: ControlPlane
metadata:
  name: ctrlplane
spec:
  iofogUser:
    name: Serge
    surname: Radinovich
    email: serge@edgeworx.io
    password: ihojhoi23h98ads
  controllers:
  - name: arkansas
    host: 34.92.102.1
    ssh:
      keyFile: ~/.ssh/id_rsa
      user: serge
---
apiVersion: iofog.org/v1
kind: Connector
metadata:
  name: utah
spec:
  host: 34.92.102.1
  ssh:
    user: serge
    keyFile: ~/.ssh/id_rsa
---
apiVersion: iofog.org/v1
kind: Agent
metadata:
  name: alabama
spec:
  host: 35.233.177.87
  ssh:
    user: serge
    keyFile: ~/.ssh/id_rsa

(Pixcell) #9

Hi @Ray,

Looking at the output of iofogctl get all that you showed earlier, you will have issues trying to deploy your Agent. Currently your Controller is detected as being on the IP 0.0.0.0. This is because you deployed the Controller using localhost (which deployed the Controller in a Docker container).
This becomes a problem as soon as the remote Agent tries to reach the Controller: the Agent will try to reach the Controller on 0.0.0.0, but this will fail as it will not be the “same” 0.0.0.0.

The current way to work around this issue is to disconnect from your ECN, then reconnect using your computer’s public IP (since the Docker container has port forwarding, it will be reachable on your computer’s public IP):

$> iofogctl disconnect
$> iofogctl connect --name local-controller --endpoint YOUR_PUBLIC_IP --email <email_you_used_in-iofogUser> --pass <password_you_used_in_iofogUser>


(ZHOU) #10

Thanks SergeRadinovich,

As my RPI is located in a private network (a company internal LAN), our IT admin has opened some ports on its firewall to enable remote SSH access through a specific port (it can’t be the default SSH port 22), as shown in the following figure. I tried with port 7001:
[screenshot]
But it seems that iofogctl assumes the SSH port is 22. Is it possible to change the SSH port to the one we want, and if so, which config file do I need to edit?


Thank you in advance!


(Pixcell) #11

Hi @Ray,

If you want to specify which SSH port to use, please use the port key under the ssh object in your YAML.

You can find more details in our documentation: https://iofog.org/docs/1.3.0/iofogctl/platform-yaml-spec.html
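For example, with a non-default port the ssh section of your Agent YAML might look something like this (host, user and key path are placeholders):

spec:
  host: 192.168.12.146
  ssh:
    user: pi
    keyFile: ~/.ssh/id_rsa
    port: 7001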

Thank you,
Best regards,


(ZHOU) #12

Thank you for the answer.
As you suggested, the deployment works fine now.
Really appreciate it!


(ZHOU) #13

Hi Guys, @Pixcell and @SergeRadinovich

I tried to remotely deploy the ioFog Controller over SSH with a .pem key (not the default RSA type), as shown in the figure below:


It seems that iofogctl can’t accept this type of SSH key.


(Serge Radinovich) #14

@Ray that is correct - iofogctl will only accept SSH private keys as input here. RSA or ECDSA should work just fine. You can generate a key pair with ssh-keygen -t rsa. You will have to make sure that the public key of the key pair is added to ~/.ssh/authorized_keys on the remote devices.
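For example, something along these lines should do it (the user, host and port here are placeholders; ssh-copy-id simply appends the public key to authorized_keys on the remote device):

$> ssh-keygen -t rsa -f ~/.ssh/id_rsa
$> ssh-copy-id -i ~/.ssh/id_rsa.pub -p 7001 pi@192.168.12.146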


(ZHOU) #15

Thank you for the quick response.

I have another question: is there any way I can connect an existing Agent (which was deployed by ControllerA) to another existing Controller (let’s say ControllerB)? Or, even better, can one Agent connect to two different Controllers, so that both Controllers can manage and communicate with that Agent?

Thank you in advance.

Rui


(Pixcell) #16

@Ray,

As @SergeRadinovich mentioned, iofogctl expects the path to a private SSH key to be specified. We have only tested with RSA and ECDSA; however, I don’t believe it will fail with a PEM-formatted key.

Your problem is different: it fails to decode the YAML file.

Looking at the error message, it seems that you are providing:

ssh:
  key: <path_to_pem_key>
  ...

Whereas, according to the documentation (https://iofog.org/docs/1.3.0/iofogctl/platform-yaml-spec.html)
it should be:

ssh:
  keyFile: <path_to_private_key>

Thus, iofogctl fails to decode the file, because the YAML key name key is different from the expected keyFile.

Thanks,


(Pixcell) #17

Hi @Ray,

We do not support having multiple Controllers managing one Agent. One Agent will only be able to communicate with one Controller at a time.

We are currently working on the ability to move an Agent (and all its deployed microservices) from one Controller to another, but this is not implemented yet.

In the meantime, you can delete the agent from ControllerA, and deploy it with ControllerB.

The easiest way would be to connect to ControllerA in a namespace:

$> iofogctl create namespace controllerA
$> iofogctl -n controllerA connect --endpoint ENDPOINT --name NAME --email EMAIL --pass PASSWORD
$> iofogctl describe agent AGENT_NAME -o agent.yaml
$> iofogctl delete agent AGENT_NAME

Then, in another namespace, connect to ControllerB and deploy the Agent using its YAML file (updating the namespace in the YAML file first). See: https://iofog.org/docs/1.3.0/iofogctl/platform-yaml-spec.html#agent

$> iofogctl create namespace controllerB
$> iofogctl -n controllerB connect --endpoint ENDPOINT --name NAME --email EMAIL --pass PASSWORD
$> iofogctl deploy -f agent.yaml

If you do not wish to use namespaces, you can do it all in the default namespace by disconnecting from ControllerA (using iofogctl disconnect) before connecting to ControllerB.
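Assuming your default namespace is currently connected to ControllerA, that sequence would look something like this (endpoints and credentials are placeholders):

$> iofogctl describe agent AGENT_NAME -o agent.yaml
$> iofogctl delete agent AGENT_NAME
$> iofogctl disconnect
$> iofogctl connect --endpoint ENDPOINT_B --name NAME_B --email EMAIL --pass PASSWORD
$> iofogctl deploy -f agent.yaml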

Let me know if you have any more questions.
Thanks,


(ZHOU) #18

Hi @Pixcell,

Thanks for your quick response.
I have another question: as mentioned on ioFog’s official site (https://edgeworx.io/technology), one of ioFog’s features is “Edge-Aware Kubernetes”. I have already built a simple ioFog ECN (1 Controller, 1 Connector and 2 Agents), and I have an existing Kubernetes cluster. But I don’t know how to integrate the deployed ioFog environment with my Kubernetes cluster. Could you please give me more detailed information on this point, such as how the two communicate, how to orchestrate microservices with Kubernetes, etc.?

Thank you in advance.


#19

Hi @Ray,

Great questions! So far we have made it so:

a) You can deploy a “vanilla” ioFog ECN (which you have done successfully), or
b) You can deploy an ioFog control plane to a Kubernetes cluster (which you can find out how to do here: https://iofog.org/docs/1.3.0/remote-deployment/setup-your-controlplane.html).

We have not yet made it simple to take an existing ECN and move the Controller/Connector onto a k8s cluster.

For now, I would recommend deploying a control plane to k8s, and then attaching the two Agents you have deployed to it (via iofogctl deploy).


(ZHOU) #20

Hi guys,

I just found an interesting phenomenon, as you can see from the figure below:


One of my Agents (agent-pi, remotely deployed on an RPI) loses its status after just a few hours. Is it possible to have the Controller monitor the remote Agent’s status in real time? Maybe I need to modify its config file or use some command?
What’s more, when I try using iofogctl logs to get the log contents of this agent-pi:
[screenshot]
It’s strange, because I have already changed the SSH port to 7001 instead of the default 22:
[screenshot]

Thank you in advance.