Create a Worker Node
This section describes how to create a self-managed worker node in the EKS cluster that satisfies all of XRd's requirements on the host operating system.
Before creating a worker node, ensure that the EKS cluster is in the ACTIVE state and that the authentication and networking configuration has been applied as described in the EKS Cluster Configuration section.
This example targets the XRd vRouter and uses an m6in.16xlarge instance with three interfaces:
- One interface reserved for cluster communication.
- Two XRd data interfaces.
Prerequisites

- Find the number of cores on the instance

  To find the number of cores, run the following command:

  aws ec2 describe-instance-types \
    --instance-types m6in.16xlarge \
    --query "InstanceTypes[0].VCpuInfo.DefaultCores" \
    --output text

  This value must be substituted for <cpu-cores> in the EC2 run-instances command.
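As a sanity check, the expected <cpu-cores> value can also be derived from the instance type's published vCPU count, since vCPUs = physical cores × threads per core. A minimal sketch, assuming the 64 vCPUs and default 2 threads per core of m6in.16xlarge (values taken from the instance specification, not queried live):

```shell
# Assumed values for m6in.16xlarge (not queried from the API).
VCPUS=64            # published vCPU count for m6in.16xlarge
THREADS_PER_CORE=2  # SMT default on x86 instances

# Physical cores = vCPUs / threads per core.
echo $(( VCPUS / THREADS_PER_CORE ))   # → 32
```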
- Find cluster information

  To find the required cluster information, use the following commands:

  - To find the API endpoint:

    aws eks describe-cluster \
      --name xrd-cluster \
      --query "cluster.endpoint" \
      --output text

  - To find the certificate authority data:

    aws eks describe-cluster \
      --name xrd-cluster \
      --query "cluster.certificateAuthority.data" \
      --output text

  - To find the service Classless Inter-Domain Routing (CIDR) block:

    aws eks describe-cluster \
      --name xrd-cluster \
      --query "cluster.kubernetesNetworkConfig.serviceIpv4Cidr" \
      --output text
- Create a user data file

  Create the user data file by copying the following contents into a file named worker-user-data.yaml:

  apiVersion: node.eks.aws/v1alpha1
  kind: NodeConfig
  spec:
    cluster:
      name: xrd-cluster
      apiServerEndpoint: <api-endpoint>
      certificateAuthority: <certificate>
      cidr: <cidr>

  Replace <api-endpoint>, <certificate>, and <cidr> with the values obtained above.
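The placeholder substitution can be scripted rather than done by hand. A hedged sketch, assuming GNU sed; the endpoint, certificate, and CIDR literals below are illustrative stand-ins, not real cluster values:

```shell
# Write the NodeConfig template with placeholders (same content as above).
cat > worker-user-data.yaml <<'EOF'
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: xrd-cluster
    apiServerEndpoint: <api-endpoint>
    certificateAuthority: <certificate>
    cidr: <cidr>
EOF

# Illustrative values only; obtain real ones from the describe-cluster
# commands in the prerequisites.
API_ENDPOINT="https://EXAMPLE1234.gr7.us-east-1.eks.amazonaws.com"
CERTIFICATE="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t"
CIDR="10.100.0.0/16"

# Fill in the placeholders in place ('|' as the delimiter avoids
# clashing with the '/' characters in the endpoint URL and CIDR).
sed -i \
  -e "s|<api-endpoint>|${API_ENDPOINT}|" \
  -e "s|<certificate>|${CERTIFICATE}|" \
  -e "s|<cidr>|${CIDR}|" \
  worker-user-data.yaml
```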
Bring Up the Worker Node
Bring up the worker node by running the following command:
aws ec2 run-instances \
--image-id <xrd-ami-id> \
--count 1 \
--instance-type m6in.16xlarge \
--key-name <key-pair-name> \
--block-device-mappings "DeviceName=/dev/xvda,Ebs={VolumeSize=56}" \
--iam-instance-profile "Arn=<node-profile-arn>" \
--network-interfaces "DeleteOnTermination=true,DeviceIndex=0,Groups=<sg-id>,SubnetId=<private-subnet-1>,PrivateIpAddress=10.0.0.10" \
--cpu-options CoreCount=<cpu-cores>,ThreadsPerCore=1 \
--tag-specifications "ResourceType=instance,Tags=[{Key=kubernetes.io/cluster/xrd-cluster,Value=owned}]" \
--user-data file://worker-user-data.yaml
Make a note of the instance ID, <worker-instance-id>.
This command brings up an EC2 instance with the following settings:
- A 56-GB primary partition, which is required to store any process cores that XRd generates.
- A single interface in the first private subnet with permissions to communicate with the EKS control plane. This interface is used for cluster control plane communications. The assigned IP address is 10.0.0.10.
- One thread per core (SMT, or Hyper-Threading, turned off). This prevents the "noisy neighbor" effect, where processes scheduled on a different logical core of the same physical core hamper the performance of high-priority processes, from affecting the high-performance packet processing threads.
- A tag that is required by EKS to indicate that the node should be allowed to join the cluster.
- A user data file that contains the required NodeConfig for EKS.
For the XRd Control Plane, the requirements are relaxed as follows:
- You can use a smaller (and cheaper) instance type, for example, m5.2xlarge.
- The --cpu-options argument is not required.
When using the base Amazon Linux 2023 EKS AMI, ensure that the user data file is in MIME multi-part format with a bash section that runs the following commands:
echo "fs.inotify.max_user_instances=64000" >> /etc/sysctl.conf
echo "fs.inotify.max_user_watches=64000" >> /etc/sysctl.conf
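A sketch of such a MIME multi-part user data file is shown below, combining the NodeConfig from the earlier step with a bash section. The boundary string is arbitrary, and the application/node.eks.aws content type is assumed to be the one expected by the AL2023 EKS node bootstrap; verify both against the AWS documentation for your AMI version.

```
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="BOUNDARY"

--BOUNDARY
Content-Type: application/node.eks.aws

apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: xrd-cluster
    apiServerEndpoint: <api-endpoint>
    certificateAuthority: <certificate>
    cidr: <cidr>

--BOUNDARY
Content-Type: text/x-shellscript

#!/bin/bash
echo "fs.inotify.max_user_instances=64000" >> /etc/sysctl.conf
echo "fs.inotify.max_user_watches=64000" >> /etc/sysctl.conf

--BOUNDARY--
```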
Turn off the source/destination check for the instance by running the following command:
aws ec2 modify-instance-attribute \
--instance-id <worker-instance-id> \
--no-source-dest-check
When the worker node is up, check that it has joined the cluster:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-0-10.ec2.internal Ready <none> 1m v1.31.3-eks-48e63af
Note: If you do not see the worker node, check the EKS configuration steps.