diff --git a/Screen Recording 2023-05-25 at 01.10.12 PM.gif b/Screen Recording 2023-05-25 at 01.10.12 PM.gif new file mode 100644 index 0000000000000..d6ad368f1f33e Binary files /dev/null and b/Screen Recording 2023-05-25 at 01.10.12 PM.gif differ diff --git a/_config.yml b/_config.yml index d4916414195c9..adbd860429c38 100644 --- a/_config.yml +++ b/_config.yml @@ -3,13 +3,13 @@ # # Name of your site (displayed in the header) -name: Your Name +name: Engineering Blog - Diego Prestamo # Short bio or description (displayed in the header) -description: Web Developer from Somewhere +description: Chasing engineering mastery one project at a time # URL of your avatar or profile pic (you could use your GitHub profile pic) -avatar: https://raw.githubusercontent.com/barryclark/jekyll-now/master/images/jekyll-logo.png +avatar: https://png.pngtree.com/png-vector/20191101/ourmid/pngtree-cartoon-color-simple-male-avatar-png-image_1934459.jpg # # Flags below are optional @@ -21,12 +21,11 @@ footer-links: email: facebook: flickr: - github: barryclark/jekyll-now + github: DiegoPrestamo instagram: linkedin: pinterest: rss: # just type anything here for a working RSS icon - twitter: jekyllrb stackoverflow: # your stackoverflow profile, e.g. "users/50476/bart-kiers" youtube: # channel/ or user/ googleplus: # anything in your profile username that comes after plus.google.com/ diff --git a/_includes/mathjax.html b/_includes/mathjax.html new file mode 100644 index 0000000000000..9aae7d153f5d1 --- /dev/null +++ b/_includes/mathjax.html @@ -0,0 +1,17 @@ + + + + diff --git a/_layouts/default.html b/_layouts/default.html index b2939c0bc4483..0cf81a5de0a25 100644 --- a/_layouts/default.html +++ b/_layouts/default.html @@ -1,9 +1,14 @@ + {% if page.title %}{{ page.title }} – {% endif %}{{ site.name }} – {{ site.description }} {% include meta.html %} + {% if page.mathjax %} + {% include mathjax.html %} + {% endif %} + **Floating View** + - In the **Hierarchy** select your vision sensor + - Right click the floating view --> **view** --> **associate view with selected vision sensor** + - Run your scene and you should see what the vision sensor is seeing +![GIF demonstration could not load!](https://i.imgur.com/N5JSYmM.gif) + +## Step 5: Copy and paste API files in our path +We will need to fetch three files from the V-REP file location and place them in the same path as our current scene +- Sim.py - Found in **_pycache_** file +- simConst.py - Found in **_pycache_** file +- remoteApi.dll - Found in **programming** --> **remoteApiBindings** --> **lib** --> **lib** --> choose 64 bit or 32 bit --> remoteApi.dll +
+*[Screenshot: This is my path for the _poppy_humanoid_ file we are working with]*
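The screenshot above shows the three files sitting next to the scene. As a quick sanity check before moving on (my own addition, not part of the Poppy or V-REP tooling), you can verify the files are actually in place; the folder variable and the file-name casing below are assumptions, so adjust them to your setup:

```python
# Hypothetical check that the remote API files from Step 5 sit next to the scene/script.
# File names and casing can differ between V-REP releases; adjust as needed.
import os

scene_dir = os.path.dirname(os.path.abspath(__file__))  # folder holding your .ttt scene and scripts
for name in ("sim.py", "simConst.py", "remoteApi.dll"):
    path = os.path.join(scene_dir, name)
    print(f"{name}: {'found' if os.path.isfile(path) else 'MISSING'} ({path})")
```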
+ +## Step 6: Control this specific scene using a Python script +We have now created a specific scene which we would like to experiment with. We do not want to simply run a default scene. Let us write a Python script to communicate directly with the scene we have just created. The important lesson from this script is the manner in which we are communicating with V-REP. +```bash +from pypot.creatures import PoppyHumanoid +import time + +# the path to where you have your Poppy scene saved... this is mine and yours should be different +scene_path = r"C:\Users\prest\OneDrive\Documents\Poppy Research\poppy_humanoid.ttt" + +# the new way we will connect to V-REP +vrep_host = '127.0.0.1' # the IP address +vrep_port = 19990 # same port from earlier... again yours will probably be 1997 +vrep_scene = scene_path + +poppy = PoppyHumanoid(simulator='vrep', scene=vrep_scene, host=vrep_host, port=vrep_port) + +# head movement to pan around scene +poppy.head_z.goal_position = 60 +time.sleep(1) +poppy.head_z.goal_position = -60 +time.sleep(1) +poppy.head_z.goal_position = 0 +time.sleep(1) + +poppy.close() +``` +The behavior should look like this: +![GIF demonstration could not load!](https://i.imgur.com/SIfJufW.gif) + +We have gone far. You should feel good about yourself. If you spot an error (typo, code, anything really) or are stuck on something shoot me an email through my **about** page. Keep building! + + + + + + + + diff --git a/_posts/2023-5-25-Controlling-Gripper-with-Hand.md b/_posts/2023-5-25-Controlling-Gripper-with-Hand.md new file mode 100644 index 0000000000000..e66627a94b0e3 --- /dev/null +++ b/_posts/2023-5-25-Controlling-Gripper-with-Hand.md @@ -0,0 +1,238 @@ +--- +layout: post +title: Controlling Robotic Gripper with Computer Vision +--- +#### The Arduino Braccio is a customizable, Arduino-compatible robotic arm that replicates the functionality of a human arm with its shoulder, elbow, wrist, and a sophisticated gripper. The Braccio is a great entry level robotic arm, due to its small size, contextually low price, and capabilities. +![GIF demonstration could not load!](https://s12.gifyu.com/images/Screen-Recording-2023-05-25-at-01.10.12-PM-1.gif) +Controlling an Arduino Braccio gripper with my fingers + +Computer vision (CV) is a field of artificial intelligence that enables computers to interpret and understand visual data from the real world. It uses digital images and videos as input and applies techniques to process, analyze, and understand them, enabling machines to perform tasks such as identifying objects, recognizing faces, navigating autonomous vehicles, or even diagnosing diseases. Recently, I have been obsessing over the overlap between the two and the seemingly limitless possibilities. We will be modifying a gesture volume control script to move the gripper. + +## What is the point of this? +Why do I want to create a gripper that mimics human movement? First of all, its pretty cool; I have never charmed a snake but I sense the feeling would be similar. Second and most importantly, I have a fascination with the possibility of creating an intelligent robot, one that can learn from humans. This is distinct to most of our current robots, which are programmed to systematically to perform certain tasks given certain conditions and do not really know what they are doing or why. 
The intelligent robot I envision needs vision so this project gets me closer to that + +## Materials: +- [Arduino Braccio](https://store-usa.arduino.cc/products/tinkerkit-braccio-robot?selectedStore=us) +- Arduino Uno or any Braccio Shield compatible Arduino +- Computer with camera or webcam +## Downloads: We will mostly be modifying previously existing code +- If you do not have Python installed already, go [here](https://www.python.org/downloads/) and follow the instructions. +- Download the [Arduino IDE](https://www.arduino.cc/en/software) +- Make sure you download the Braccio library in Arduino + +## Step 1: Hand Tracking Module +We will be modifying Murtaza Hassan's gesture volume control project for our needs. It is the perfect base to work off. This is the hand tracking module that makes the gesture volume control project possible. We will save it as **HandTrackingModule.py** +```bash +import cv2 +import mediapipe as mp +import time + + +class handDetector(): + def __init__(self, mode=False, maxHands=2, model_complexity = 1, detectionCon=0.5, trackCon=0.5): + self.mode = mode + self.maxHands = maxHands + self.model_complexity = model_complexity + self.detectionCon = detectionCon + self.trackCon = trackCon + + self.mpHands = mp.solutions.hands + self.mpHands.Hands() + self.hands = self.mpHands.Hands(self.mode, self.maxHands, self.model_complexity, + self.detectionCon, self.trackCon) + self.mpDraw = mp.solutions.drawing_utils + + def findHands(self, img, draw=True): + imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) + self.results = self.hands.process(imgRGB) + # print(results.multi_hand_landmarks) + + if self.results.multi_hand_landmarks: + for handLms in self.results.multi_hand_landmarks: + if draw: + self.mpDraw.draw_landmarks(img, handLms, self.mpHands.HAND_CONNECTIONS) + + return img + + def findPosition(self, img, handNum=0, draw=True): + lmList = [] + + if self.results.multi_hand_landmarks: + myHand = self.results.multi_hand_landmarks[handNum] + + for id, lm in enumerate(myHand.landmark): + # print(id, lm) + h, w, c = img.shape + cx, cy = int(lm.x * w), int(lm.y * h) + lmList.append([id, cx, cy]) + if draw: + cv2.circle(img, (cx, cy), 25, (255, 0, 0), cv2.FILLED) + + return lmList + +def main(): + prevTime = 0 + currentTime = 0 + cap = cv2.VideoCapture(0) + detector = handDetector() + + while True: + success, img = cap.read() + img = detector.findHands(img) + lmList = detector.findPosition(img) + if lmList: + print(lmList[4]) + + currentTime = time.time() + fps = 1/(currentTime - prevTime) + prevTime = currentTime + + cv2.putText(img, str(int(fps)), (10,70), cv2.FONT_HERSHEY_PLAIN, 3, + (255, 0, 255), 3) + + cv2.imshow("Image", img) + cv2.waitKey(1) + + +if __name__ == '__main__': + main() +``` +## Step 2: Modify gesture volume control +This is the modified version which fits our needs. It will communicate to the arduino script in the next step. 
+```bash +import cv2 +import time +import numpy as np +from HandTrackingModule import handDetector +import math +import serial + +# arduino serial connection +arduino = serial.Serial('/dev/cu.usbmodem21101', 9600) # replace 'COM_PORT' with the appropriate port +time.sleep(2) + +# cam parameters +wCam, hCam = 640, 480 +cap = cv2.VideoCapture(1) # if your camera is not loading, try different numbers such as '0' or '2' +cap.set(3, wCam) +cap.set(4, hCam) +prevTime = 0 +last_sent = time.time() + +detector = handDetector(detectionCon=0.7) +servo_angle = 0 +while True: + success, img = cap.read() + img = detector.findHands(img) + lmList = detector.findPosition(img, draw=False) + if lmList: + x1, y1 = lmList[4][1], lmList[4][2] # thumb coordinates + x2, y2 = lmList[8][1], lmList[8][2] # pointer finger coordinates + cx, cy = (x1 + x2) // 2, (y1 + y2) // 2 # center between thumb and pointer + + cv2.circle(img, (x1, y1), 15, (255, 0, 255), cv2.FILLED) + cv2.circle(img, (x2, y2), 15, (255, 0, 255), cv2.FILLED) + cv2.line(img, (x1, y1), (x2, y2), (255, 0, 255), 3) + cv2.circle(img, (cx, cy), 15, (255, 0, 255), cv2.FILLED) + + length = math.hypot(x2 - x1, y2 - y1) + + # Hand range 20 - 280 + # Servo range 0 - 180 + servo_angle = np.interp(length, [20, 280], [73, 0]) + + if time.time() - last_sent > 0.2: # 0.2 is the rate at which it is sending the aruino data, you can fine tune this number + try: + arduino.write((str(int(servo_angle)) + '\n').encode()) # sending angle to arduino + print(f'Sent angle: {int(servo_angle)}') + last_sent = time.time() + time.sleep(0.1) + except serial.serialutil.SerialException as e: + print(f"Serial communication error: {e}") + + if length < 50: + cv2.circle(img, (cx, cy), 15, (0, 255, 0), cv2.FILLED) + + # displaying servo angle + cv2.putText(img, f'Angle: {int(servo_angle)}', (40, 450), cv2.FONT_HERSHEY_COMPLEX, + 1, (255, 0, 0), 3) + + # displaying fps and video feed + currTime = time.time() + fps = 1 / (currTime - prevTime) + prevTime = currTime + cv2.putText(img, f'FPS: {int(fps)}', (40, 50), cv2.FONT_HERSHEY_COMPLEX, + 1, (0, 255, 0), 3) + cv2.imshow("Img", img) + + cv2.waitKey(1) +``` +## Step 3: The Arduino script +An Arduino is a user-friendly, open-source electronic platform that allows anyone to create interactive hardware projects. It's essentially a small computer board that can process inputs from sensors, make decisions based on code, and control various outputs like lights, motors (in our case), or displays. I modified one of the standard Braccio sketches (hence the weird comments which I left for context) to create the following Arduino sketch: +```bash +/* + simpleMovements.ino + + This sketch simpleMovements shows how they move each servo motor of Braccio + + Created on 18 Nov 2015 + by Andrea Martino + + This example is in the public domain. 
+ */ + +#include +#include + +Servo base; +Servo shoulder; +Servo elbow; +Servo wrist_rot; +Servo wrist_ver; +Servo gripper; + +void setup() { + //Initialization functions and set up the initial position for Braccio + //All the servo motors will be positioned in the "safety" position: + //Base (M1):90 degrees + //Shoulder (M2): 45 degrees + //Elbow (M3): 180 degrees + //Wrist vertical (M4): 180 degrees + //Wrist rotation (M5): 90 degrees + //gripper (M6): 10 degrees + Braccio.begin(); + Serial.begin(9600); // Set up serial communication at 9600bps +} + +void loop() { + int angle = 73; // Default angle for gripper + if (Serial.available()) { // If data is available to read + angle = Serial.parseInt(); // Read it and store it in 'angle' + } + + /* + Step Delay: a milliseconds delay between the movement of each servo. Allowed values from 10 to 30 msec. + M1=base degrees. Allowed values from 0 to 180 degrees + M2=shoulder degrees. Allowed values from 15 to 165 degrees + M3=elbow degrees. Allowed values from 0 to 180 degrees + M4=wrist vertical degrees. Allowed values from 0 to 180 degrees + M5=wrist rotation degrees. Allowed values from 0 to 180 degrees + M6=gripper degrees. Allowed values from 10 to 73 degrees. 10: the toungue is open, 73: the gripper is closed. + */ + + //(step delay, M1, M2, M3, M4, M5, M6); + Braccio.ServoMovement(20, 0, 15, 180, 170, 0, angle); + + +} +``` +## Step 4: Flash the Arduino sketch onto the arduino that you have attached to the Braccio shield +Hit **Upload** on the Aruino sketch to flash it onto the Uno. Once you do, it will move a bit to settle into position and it will run indefinitely waiting for serial data to come in. The serial data will be coming in from the Python script once we run it. + +## Step 5: Run the Python script +Once the Arduino is flashed, we can proceed to the Python script. We will now run it and control the movement of the gripper with our thumb and index finger. + + + + + diff --git a/_posts/2023-5-31-Differential-Drive-Robot.md b/_posts/2023-5-31-Differential-Drive-Robot.md new file mode 100644 index 0000000000000..b3410f6b512af --- /dev/null +++ b/_posts/2023-5-31-Differential-Drive-Robot.md @@ -0,0 +1,176 @@ +--- +layout: post +title: Making a Differential Drive Robot Simulator +--- + +#### A differential drive robot (DDR) is a type of mobile robot with two, seperately motorized, wheels. We will discuss the motion model for a DDR and make a simple simulator that will help us interact with the motion model. +
+*[GIF: DDR simulator we will create (GIF format worsens frame rate, real sim is smoother)]*
+ +Motion models are mathematical models that describe the behavior of robots. There are two main types: kinematic models and dynamic models. Today we will be focusing on the former. +## Kinematics and Inverse Kinematics: +Kinematic models can be approached in two ways: forward kinematics (or simply kinematics) and inverse kinematics. Forward kinematics takes input values and predicts the path of the robot, whereas inverse kinematics defines a desired path and attempts to find the inputs necessary to achieve this path. + +In the context of DDR, a forward kinematic model would use user inputs of linear and angular velocity to model the behavior of the robot. On the other hand, an inverse kinematic model would start with a laid out path that we desire our robot to follow, and would then calculate the necessary linear and angular velocity inputs to achieve such a path. + + +## Global and Local Reference Frames: +Before we discuss motion models, it's important to consider the context in which we're referring to these models. The local frame (also known as the body frame) situates the DDR in its own xyz-plane. In this frame, the linear velocity is in the x-direction, rotation occurs about the z-axis, and the y-coordinate is always zero. The displacement of the DDR isn't taken into consideration in the local frame. + +In contrast, the global frame situates the DDR in a broader xy-plane, where its movements are relative to its environment. For example, if we place two DDRs in a field and aim to avoid a collision, we need to use a global frame to track their positions. A local frame would only provide information for each robot in isolation, and wouldn't account for their relative positions in the shared environment. +
+*[Figure: XR and YR indicate the local frame and XI and YI indicate the global frame]*
+ +## Motion Models: +Now to the good stuff. Given that we are dealing with two distinct reference frames — global and local — we will use two separate motion models. If this ever gets fuzzy, just remember that we are primarily focusing on two simple concepts: linear velocity and angular velocity. Linear velocity refers to the derivative of position with respect to time, while angular velocity corresponds to the derivative of angular displacement with respect to time. +### Local frame: linear velocity +Linear velocity in the local frame is quite simple. We have no y-component since wheels do not slide and the z-axis is where we rotate, meaning our linear velocity is always in the x-direction. Say we have a path gamma LaTeX(s), the linear velocity can be modeled as: +
+*[Equation image: linear velocity in the local frame along the path γ(s)]*
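The expression itself is an image; as a stand-in, here is one standard way to write it (my assumption, not a copy of the original figure), with no y-component since the wheels cannot slide sideways:

```latex
% local frame: velocity is tangent to gamma(s), with no lateral component
v(t) = \dot{\gamma}(s), \qquad
\dot{\xi}_R = \begin{bmatrix} v \\ 0 \end{bmatrix}
```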
+ +The linear velocity is tangent to the path gamma LaTeX + +### Global frame: linear velocity +Linear velocity in the global frame is only slightly more complex. We will now take theta LaTeX under consideration to find our velocity: +
+*[Equation image: linear velocity in the global frame in terms of θ]*
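Again the original expression is an image; a standard reconstruction (my assumption), matching the cosine/sine description that follows:

```latex
% global frame: project the speed v onto the heading theta
\dot{\xi}_I = \begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix}
            = \begin{bmatrix} v\cos\theta \\ v\sin\theta \end{bmatrix}
```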
+ + The heading of our DDR is in the direction of theta dot LaTeX. hence, our x and y components of our linear velocity are in terms of cos and sin. + +### Angular velocity: +To find our angular velocity, Omega LaTeX + we will take the time dependent derivative of theta LaTeX. We will refer to it as theta dot LaTeX. In other words: Omega LaTeX = theta dot LaTeX. The DDR rotates about the z-axis both globally and locally, so unlike the linear velocity we have one value that applies to both reference frames. + +### Final model: +It is common to combine angular velocity and linear velocity into one vector. Our DDR motion model is: +
+*[Equation image: combined DDR motion model vector]*
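To make the model concrete, here is a minimal numerical sketch that steps the pose forward in time. The function name and the forward-Euler update are my own choices, assuming the standard global-frame form (x velocity = v·cos(theta), y velocity = v·sin(theta), heading rate = omega); the pygame simulator below performs essentially the same update every frame:

```python
# Minimal forward-Euler sketch of the combined DDR motion model (global frame).
import math

def ddr_step(x, y, theta, v, omega, dt):
    """Advance the pose (x, y, theta) by one time step dt."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta = (theta + omega * dt) % (2 * math.pi)
    return x, y, theta

# example: drive at 1 m/s while turning at 0.5 rad/s for one simulated second
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = ddr_step(*pose, v=1.0, omega=0.5, dt=0.01)
print(pose)
```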
+ +## Creating a Simulator: + Although there are plenty of complex tools we can use to model various DDR's, let us create one of our own. We will be using the pygame library to make our simulator: + ``` bash + import pygame +import sys +import math + +pygame.init() + +WIDTH, HEIGHT = 640, 480 # adjust sim window size + +# adjust speed of controls +LINEAR_SPEED = 1 +ANGULAR_SPEED = 0.02 + +FPS = 60 +MAX_TRACERS = 70 # tracers are the trail that is left behind by the DDR in the sim +DT = 1.0 / FPS +TWO_PI = 2 * math.pi + +screen = pygame.display.set_mode((WIDTH, HEIGHT)) + +# initializes starting dot position and direction +dot_position = [WIDTH // 2, HEIGHT // 2] +direction = 0 +old_direction = direction + +dot_positions = [] + +clock = pygame.time.Clock() + +font = pygame.font.Font(None, 36) +font1 = pygame.font.Font(None, 20) + +running = True +while running: + screen.fill((0, 0, 0)) # mess with this if you want to change background color + + text = font.render("DDR Motion Simulator", True, (255, 255, 255)) + screen.blit(text, (10, 10)) + text1 = font1.render("Linear velocity controls: 'W' and 'S' keys ", True, (255, 255, 255)) + screen.blit(text1, (10, 50)) + text1 = font1.render("Angular controls: 'A' and 'D' keys ", True, (255, 255, 255)) + screen.blit(text1, (10, 70)) + + theta = math.degrees(direction) + text2 = font1.render(f"Current angle (theta): {theta:.2f} degrees", True, (255, 255, 255)) + screen.blit(text2, (10, 90)) + + # event handling loop + for event in pygame.event.get(): + if event.type == pygame.QUIT: + running = False + + # sees which keys are pressed + keys = pygame.key.get_pressed() + + # checks if WASD is pressed. W and S moves forward and back, A and D change the direction in which the dot is pointing + linear_vel = 0 + if keys[pygame.K_w]: + dot_position[0] += LINEAR_SPEED * math.cos(direction) + dot_position[1] -= LINEAR_SPEED * math.sin(direction) + linear_vel = LINEAR_SPEED + if keys[pygame.K_s]: + dot_position[0] -= LINEAR_SPEED * math.cos(direction) + dot_position[1] += LINEAR_SPEED * math.sin(direction) + linear_vel = -LINEAR_SPEED + if keys[pygame.K_d]: + direction = (direction - ANGULAR_SPEED) % TWO_PI + if keys[pygame.K_a]: + direction = (direction + ANGULAR_SPEED) % TWO_PI + # calculates angular velocity by taking derivative of direction + angular_vel = (direction - old_direction) / DT + # calculates local velocity and rounds it to two decimal places + local_velocity = (round(100 * linear_vel) / 100, 0, round(100 * angular_vel) / 100) + + # displaying the local and global velocity + text3 = font1.render(f"Local velocity: {local_velocity}", True, (255, 255, 255)) + screen.blit(text3, (10, 110)) + + global_velocity = (round(100 * linear_vel * math.cos(direction)) / 100, round(100 * linear_vel * math.sin(direction)) / 100, round(100 * angular_vel) / 100) + + text4 = font1.render(f"Global velocity: {global_velocity}", True, (255, 255, 255)) + screen.blit(text4, (10, 130)) + + dot_positions.append(list(dot_position)) + + # makes sure we do not exceed the max number of tracer dots + if len(dot_positions) > MAX_TRACERS: + dot_positions.pop(0) + + #draws yellow circle for each position of the tracers + for pos in dot_positions: + pygame.draw.circle(screen, (255, 255, 0), pos, 1) + + # draws main blue dot + pygame.draw.circle(screen, (0, 0, 255), dot_position, 5) + + pygame.display.flip() + + old_direction = direction + + clock.tick(FPS) + +pygame.quit() +``` + +We now have a simple way to interact with the DDR motion model. 
I recommend you find the values manually to see if they match the simulator values to make sure you understand how to calculate them by hand. + +I hope you learned something. Keep building. + diff --git a/_posts/2023-6-4-Introduction-to-Underactuated-Robotics.md b/_posts/2023-6-4-Introduction-to-Underactuated-Robotics.md new file mode 100644 index 0000000000000..0d6b348fbe47b --- /dev/null +++ b/_posts/2023-6-4-Introduction-to-Underactuated-Robotics.md @@ -0,0 +1,236 @@ +--- +layout: post +title: Introduction Material - MIT 6.832 +tags: [Mathjax, Mathematic] +mathjax: true +--- +## Underactuated Robotics or: How I Learned to Stop Worrying and Love Dynamics +I was emailing back and forth with a professor in my engineering school when he sent me this ominous link: [https://underactuated.csail.mit.edu/](https://underactuated.csail.mit.edu/) + +I was taken to Russ Tedrake's grad level MIT course: *Underactuated Robotics*. I have been fascinated by the material since and I am documenting my learning as I wish to explore my own understanding of the material. I will introduce the topic and the fundamental ideas you need to get started with the course. I felt pretty good about the math level but there were a couple of concepts I was not familiar with so this may help if you are in the same position. I will likely update this blog post if I keep finidng relevant material. + +## Foreword: +Most of the material in this blog post (such as examples, pictures, etc.) is directly from Russ Tedrake's online notes and lecture videos. I have no idea what the copyright landscape looks like for this but the notes are published online, so I will assume I can safely republish my interpretations of the course material. Please send any *Cease and Desist* orders to my email. + +## There is something off about ASIMO: +
+*[Image link: Click here for the demo video]*
+The robotics world was shocked when Honda announced they had been secretely developing a humanoid for the previous decade. They introduced their model, ASIMO, an astounding feat of engineering. ASIMO could walk, grip, avoid collisions, amongst other things. + +But when we look at ASIMO's demo video, there is something wrong with its movement. Its just not natural. What is it that makes ASIMO's movement robotic, and [Atlas'](https://www.youtube.com/watch?v=tF4DML7FIWk&ab_channel=BostonDynamics) movement much more human-like? + +## Our underactuated world: +The fundamental reason comes down to dynamics. In order to achieve the desired control, ASIMO is essentially fighting a continous battle against its natural dynamics. Hence the bent knees and slow movements. The idea goes as follows: if we can use feedback to cancel the system's dynamics, the control problem is trivial. The idea has merit and is useful in various domains. Manipulator robots excel in the industrial setting where they are bolted down and dynamic cancellation is necessary. Full control all the time. Any desired acceleration can be achieved at any time. + +Unfortunately, this control ideology begins to hold us down when we get to more complicated control models. I want to prove this to you. Stand up, and take three steps at your normal gait. Now turn around and take three really slow steps, kind of like ASIMO would. I am willing to bet you found the second go harder. Your body was expending much more energy fighting gravity and the movement was surely not too graceful. Although unnatural, the second type of movement is very well understood in control theory and is simple as long as you have enough power and have enough actuators to control your degrees of freedom. You are fully actuated. + +The idea of exploiting mechanical systems and riding dynamics prevails in nature. An albatross can fly for kilometers without flapping its wings, a rainbow trout can ride up stream currents simply using their anatomical mechanisms, and a gymnast can do backflips with relative ease. +
+*[Image: A trout swims upstream. The fish is dead and only tethered to position it correctly!]*
+ +This type of control occurs when there are less actuators than degrees of freedom. This is known as underactuated control. + +"If you had more research funding, would you be working with fully actuated robotics?" Prof. Manolis Kellis famously jokes with Tedrake in an MIT staff meeting. + +Underactuated robotics does not come from a lack of materials (or research funding), it comes from a desire to maximize the control of our system at the trade-off of much more complicated control problem. + +Now that we have an intuitive idea of underactuatuation, let us go into the math. + +## Prerequisite Skills: +Here is what I have found is necessary so far: + +- All of calculus (up to vector / multivariable calculus) +- Differential Equations +- Linear Algebra +- Physics - Kinematics +- Patience - If you are an engineering undergrad like I am, this material will likely demand your full abilities but this stuff is really cool so stick with it + +## Fundamental Concepts: +I found that there were a couple of concepts that I had not been exposed to or used. In this blog post, I will walk you along the following funamental concepts that you need to get started: + +- Generalized Coordinates +- Principle of Least Action (POLA) +- Lagrangian / Euler-Lagrange equation +- Control affine +- Robotic Manipulation Equation +- Underactuated vs. Fully-actuated mathematically + +This should help make sense of the [first lecture](https://www.youtube.com/watch?v=PRaSlUA78gQ&t=3609s&ab_channel=underactuateda). I derived Lagrange and Euler-Lagrange in a slightly different way to Prof. Tedrake, however the math is equivalent (unless you catch an error, if so email me please). +## Generalized Coordinates: +Generalized coordinates are parameters used to completely describe the configuration of a system. For our purposes, generalized coordinates might refer to the angles of each join in the robot. The main advantage of generalized coordinates is that they simplify system analysis. Rather than trying to keep track of the position and orientation of every component of the robot in three-dimensional space, you can instead use a smaller number of generalized coordinates that capture the essential degrees of freedom of the system. They are also quite nice because they align with Lagrangian mechanics which will be covered shortly. + +## Principle of Least Action: +POLA, A key concept in Lagrangian and Hamiltonian mechanics, states that the path taken by a system between two states is the one for which the action is minimized. In other words, a ball thrown into the air follows a parabolic trajectory because any other path requires unnecessary work. Given the kinetic and potential energy of a robot (expressed in terms of the generalized coordinates and their time derivatives), you can apply the principle of least action and the Euler-Lagrange equation to derive the equations of motion. + +## Lagrangian / Euler-Lagrange Equation: +The Lagrangian will provide us a powerful alternative to Newton's laws which are in terms of forces. It can be very difficult, if not impossible to analyze all the forces in a complex system and the Lagrangian method gives us an elegant way to simplify things, + +$\ L = T - V\$ + +where $\ T$ = kinetic energy and $\ V$ = potential energy. + +We will then use the Euler-Lagrange equation to find the equations of motion of our system: +$$\large \frac{\partial L}{\partial q_i} - \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{q}_i}\right)$$ = 0 + +where $\ q_i$ are our generalized coordinates. 
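Before tackling the double pendulum, it may help to run the machinery on the simplest case: a single pendulum of mass $m$ and length $l$, with angle $q$ measured from the vertical (this warm-up example is my own addition, not from the course notes):

$\ T = \frac{1}{2}ml^2\dot{q}^2, \qquad V = -mgl\cos{q}, \qquad L = T - V = \frac{1}{2}ml^2\dot{q}^2 + mgl\cos{q}$

$\large \frac{\partial L}{\partial q} - \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right)$ $= -mgl\sin{q} - ml^2\ddot{q} = 0$

which recovers the familiar pendulum equation $\ ml^2\ddot{q} + mgl\sin{q} = 0$.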
+ + +#### Finding equations of motion of a double pendulum using Lagrangian and Euler-Lagrange equation: +This derivation is quite computationally heavy, but conceptually straightforward. Tedrake mentions using software to find the values but I found peace of mind in going through this derivation since I had never done Euler-Lagrange before. I will use software moving forward because 40 minutes for a derivation that can be done in milliseconds is unreasonable. I will not go through every step as that would be a LaTex nightmare but if you ever get stuck, follow along [this](https://www.youtube.com/watch?v=KSsZUn0bfwE&ab_channel=PhysicsExplained) video. + +
+*[Figure: double pendulum with masses m1 and m2 on links of length l1 and l2]*
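For the software route mentioned above, here is a minimal SymPy sketch (my own code and variable names, not from the course materials) that builds the same Lagrangian and lets the computer grind out the Euler-Lagrange equations; the hand derivation follows below. Note that `LagrangesMethod` uses the equivalent sign convention $\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) - \frac{\partial L}{\partial q} = 0$.

```python
# SymPy sketch of the double-pendulum Euler-Lagrange derivation.
# Conventions match the hand derivation below: angles q1, q2 from the vertical,
# masses m1, m2, link lengths l1, l2.
import sympy as sp
from sympy.physics.mechanics import dynamicsymbols, LagrangesMethod

t = dynamicsymbols._t                              # time variable used by dynamicsymbols
q1, q2 = dynamicsymbols('q1 q2')                   # generalized coordinates
m1, m2, l1, l2, g = sp.symbols('m1 m2 l1 l2 g', positive=True)

# mass positions p1 and p2
x1, y1 = l1 * sp.sin(q1), -l1 * sp.cos(q1)
x2, y2 = x1 + l2 * sp.sin(q2), y1 - l2 * sp.cos(q2)

# kinetic energy, potential energy, Lagrangian
T = sp.Rational(1, 2) * m1 * (x1.diff(t)**2 + y1.diff(t)**2) \
  + sp.Rational(1, 2) * m2 * (x2.diff(t)**2 + y2.diff(t)**2)
V = m1 * g * y1 + m2 * g * y2
L = sp.simplify(T - V)

# Euler-Lagrange equations for q1 and q2
eom = LagrangesMethod(L, [q1, q2]).form_lagranges_equations()
sp.pprint(sp.trigsimp(eom))
```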
+ +The location of $\ m_1$ and $\ m_2$ is denoted by $\ p_1$ and $\ p_2$ respectively. We simply use $\ q$ = $$\ \begin{bmatrix} \ \theta_1 , \theta_2 \end{bmatrix}^T $$. We could work in terms of $\theta$ but getting used to generalized coordinates cannot hurt. + +$\ p_1$: + +$$\ \begin{bmatrix} \ x_1 \\ y_1 \end{bmatrix} $$ = $$\ \begin{bmatrix} \ l_1sin{q_1} \\ -l_1cos{q_1} \end{bmatrix} $$ + +$\ p_2$: + +$$\ \begin{bmatrix} \ x_2 \\ y_2 \end{bmatrix} $$ = $\dot p_1$ + $\dot p_2$ +$$\ \begin{bmatrix} \ x_2 \\ y_2 \end{bmatrix} $$ = $$\ \begin{bmatrix} \ l_1sin{q_1} + l_2sin{q_2} \\ -l_1cos{q_1} - l_2cos{q_2} \end{bmatrix} $$ + +We then take the time derivative of our positions in order to find our velocities, which we need for our kinetic energy: $\ T = \frac{1}{2} mv^2$ + +$\dot p_1$: + +$$\ \begin{bmatrix} \dot x_1 \\ \dot y_1 \end{bmatrix} $$ = $$\ \begin{bmatrix} \ l_1\dot q_1 cos{q_1} \\ l_1\dot q_1sin{q_1} \end{bmatrix} $$ + +$\dot p_2$: + +$$\ \begin{bmatrix} \dot x_1 \\ \dot y_1 \end{bmatrix} $$ = $$\ \begin{bmatrix} \ l_1sin{q_1} \\ -l_1cos{q_1} \end{bmatrix} $$ + +We can rewrite our kinetic energy equation as $T = \frac{1}{2}m(\dot x^2 + \dot y^2)$ + +Reducing our $\dot p_1$ and $\dot p_2$ equations we get our final kinetic energy equation: + +$T = \frac{1}{2}m_1l_1^2\dot{q}^2 + \frac{1}{2}m_2l_2^2[l_1^2\dot q_1^2 + l_2^2\dot q_2 + 2l_1l_2\dot q_1\dot q_2cos(q_1-q_2)]$ + +Finding the potential energy is trivial: + +$\ V = m_1gy_1 + m_2gy_2$ + +$\hspace{.83cm} = -m_1gl_1cos{q_1} - m_2g(l_1cos{q_1} + l_2cos{q_2})$ + +$\hspace{.83cm} = -(m_1 + m_2)gl_1cosq_1 + m_2g(l_2cosq_2)$ + +Now we plug $T$ and $V$ into our Lagrangian and we get: + +$L = \frac{1}{2}m_1l_1^2\dot{q}^2 + \frac{1}{2}m_2l_2^2[l_1^2\dot q_1^2 + l_2^2\dot q_2 + 2l_1l_2\dot q_1\dot q_2cos(q_1-q_2)] + (m_1 + m_2)gl_1cosq_1 + m_2g(l_2cosq_2)$ + +Now we move onto Euler-Lagrange to find our motion equations $q_1$ and $q_2$ + +Deriving the first term $\tau _1$: + +$\large \frac{\partial L}{\partial q_1} - \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{q}_1}\right)$ = 0 + +$\large \frac{\partial L}{\partial q_1}$ = $-m_2l_1l_2\dot q_1\dot q_2sin{(q_1-q_2)} - (m_1 + m_2)gl_1sinq_1$ + +$\large \frac{\partial L}{\partial \dot{q}_1}$ = $m_1l_1^2 \dot q_1 + m_2l_1^2 \dot q_1 + m_2l_1l_2 \dot q_2 cos(q_1 - q_2)$ + +$\large \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{q}_1}\right)$ = $(m_1 + m_2)l_1^2\ddot q_1 + m_2l_1l_2\ddot q_2cos(q_1 - q_2)$ + +$\hspace{6.8cm} - m_2l_1l_2\dot q_2sin(q_1 - q_2)(\dot q_1 - \dot q_2)$ + +Deriving the second term $\tau _2$: + +$\large \frac{\partial L}{\partial q_2} - \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{q}_2}\right)$ = 0 + +$\large \frac{\partial L}{\partial q_2}$ = $m_2l_1l_2\dot q_1\dot q_2sin(q_1 - q_2) - m_2gl_2sinq_2$ + +$\large \frac{\partial L}{\partial \dot{q}_2}$ = $m_2l_2^2\dot q_2 + m_2l_1l_2\dot q_1cos(q_1 - q_2)$ + +$\large \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{q}_2}\right)$ = $m_2l_2^2\ddot q_2 + m_2l_1l_2\ddot q_1 cos(q_1 - q_2) - m_2l_1l_2\dot q_1sin(q_1 - q_2)(\dot q_1 - \dot q_2)$ + + +Putting everything together, we get our equations of motion: + +$\tau _1 = (m_1 + m_2)l_1^2\ddot q_1 + m_2l_1l_2\ddot q_2cos(q_1 - q_2)+ m_2l_1l_2\dot q_2^2sin{(q_1-q_2)} + (m_1 + m_2)gl_1sinq_1$ + +$\tau _2 = m_2l_2^2\ddot q_2 + m_2l_1l_2\ddot q_1cos(q_1 - q_2) - m_2l_1l_2\dot q_1^2sin(q_1 - q_2) + m_2gl_2sinq_2$ + +## Robotic Manipulation Equation: +We can generalize most of our robots into this simple equation: + +$M(q)\ddot q + c(q,\dot q) = \tau g(q) + 
Bu$ + +where: +- M is our generalized mass / inertia matrix +- C is our coriolis terms (or velocity product term) - when one joint is moving, it can induce forces on other joints that are also moving, due to the Coriolis and centrifugal effects +- g is our gravity term - this is assuming that our potential energy only comes from gravity; springs on our robot would throw this off +- B is our actuation matrix +- u is our torque control input + +B will help us determine whether our system is underactuated or not + +## Control Affine: +A control affine system can be written in the following form: + +$\dot x = f(x) + g(x)u$ + +- $x$ is the state of the system +- $\dot x$ is the time derivative of the system +- $u$ is the control input - this is our actuation (servo torques, wheel's angular velocity, etc.) +- $f(x)$ describes the uncontrolled dynamics - this is what happens when u = 0 +- g(x) describes how the control input affects the system + +We will see the control system +The key characteristic of a control affine system is that the control input $u$ only multiplies the function $g(x)$ and does not appear inside f or g, this gives us a linear part and a constant, making control affine systems easier to handle than more general nonlinear systems. + +We will use the second-order system control in the form of: + +$\ddot q = f(q,\dot q, u, t)$ + +where u is our control vector from earlier. + +If there exists a u that achieves any desired acceleration $\ddot q$, we call the system fully actuated. If we cannot, then the system is underactuated. In simpler terms, if we can perfectly control our system and not have to deal with dynamics, then our control problem is simple and fully actuated. If we cannot, then we must lean into the dynamics of our system, making the control problem more difficult but much more exciting. + + +## Underactuated vs. Fully Actuated Mathematically: +An underactuated system has less actuators than degrees of freedom. Let us put the manipulator equation from earlier: + +$M(q)\ddot q + c(q,\dot q) = \tau g(q) + Bu$ + +in terms of $\ddot q$: + +$\ddot q = M^{-1}(q)[-c(q,\dot q) + \tau g(q) + Bu]$ + +If B has full row rank, this indicates that the control inputs are able to influence the degrees of freedom independently, which indicates full actuation. If B does not have full row rank, it means that some control inputs will affect the motion in multiple degrees of freedom, indicating underactuation. + +## Conclusion: +I hope this blog post is a nice supplement to Tedrake's online lecture 1 and online textbook. This material reminds me of the Fantaisie Impromptu by Chopin. I knew it was somewhat above my skill level when I wanted to learn it but I am stubborn person and found it too beautiful to care. I had this feeling that if I kept banging my head against the wall for long enough, the wall would eventually give, and it did. I may never play like Daniil Trifonov but I properly learned the piece, and elevated my piano playing in the process. I hope my journey with underactuated robotics is similar. Do not hesitate to send me an email if you spot an error or have questions. Keep building. 
+ + + + + + + + + + + + + + + + + + + + + + diff --git a/_posts/2024-1-9-Network Crown Project.md b/_posts/2024-1-9-Network Crown Project.md new file mode 100644 index 0000000000000..775d49e604274 --- /dev/null +++ b/_posts/2024-1-9-Network Crown Project.md @@ -0,0 +1,330 @@ +--- +layout: post +title: Industrial Factory Mock Network +tags: [Mathjax, Mathematic] +mathjax: true +--- +## Consider a Highly Automated Roof Tile Plant +While I'm not fanatical about watches, I do enjoy them on a basic level. The way hundreds of different parts synchronize to achieve a single, straightforward goal of telling time reliably fascinates me. Similarly, a modern roof tile plant represents the same concept, albeit in a completely different context +
+*[Video: NetAnim simulation of the plant network]*
+ +Creating a concrete roof tile is simple. Combine sand, cement, pigments, oxides, and water to form a moist sandy mixture. Spread it onto a mold, shape the top with the appropriate extrusion profile, and punch in nail holes. Then, bake it in an oven at 50 degrees Celsius with controlled humidity for a few hours. The result is a sturdy concrete roof tile like this: +
+*[Image: A basic description of a modern roof tile plant]*
+ +## Scaling: When doing the 'thing' is no longer about the 'thing' +As discussed, creating a rigid roof tile that can withstand high impact and adverse weather conditions is a solved problem. The complexity does not lie in making one quality roof tile, but making one million. Technology has allowed complex machinery to automate the roof tile making process. A long line of machinery works in congruency to yield the final product, but a lot can go wrong in the process. + +## A simple network topology for the machinery +We aim to monitor specific data and metrics from each of the four primary machines: the mixer, kiln, robots, and packer. To achieve this, we need to establish communication pathways from each machine to a router, which in turn connects to the CEO, COO,and plant managers' computers through an access point. +
+*[Figure: proposed network topology drawn over the plant layout]*
+The red nodes represent the point to point connections that will communicate with the green and red router at the bottom of the plant. The green and red router will communicate with the green and blue router that is placed in the offices at the right side of the plant.That router is now our access point which will wirelessly communicate to the previous machines previously mentioned. + +We will now create the network topology in ns-3. ns-3 is an open-source, education-oriented network simulation tool. You can use python or c++ to create and simulate network topology. I will break apart my script to make it more digestible. +### The following are standard header files that you must include for your script to run with the protocols that we wish to implement (point2point, and wifi are the ones that are more specific for this script). It is crucial that you include the final header file which is the NetAnim simulator. We will use NetAnim as our visual simulator. This will help us make way more sense out of our outputs.You must download NetAnim as it is not included in ns-3. +```bash +#include "ns3/core-module.h" +#include "ns3/point-to-point-module.h" +#include "ns3/network-module.h" +#include "ns3/applications-module.h" +#include "ns3/mobility-module.h" +#include "ns3/internet-module.h" +#include "ns3/yans-wifi-helper.h" +#include "ns3/ssid.h" +#include "ns3/netanim-module.h" + +using namespace ns3; + +NS_LOG_COMPONENT_DEFINE ("ThirdScriptExample"); +``` +## The main function +inside our main function will be the entirety of our topology and our initializing of NetAnim for visual simulation +``` bash +int +main (int argc, char *argv[]) +{ + bool verbose = true; + uint32_t nWifi = 3; // this will dictate how many wireless nodes you create + bool tracing = false; + + CommandLine cmd (__FILE__); // set up for command line and terminal interacting + cmd.AddValue ("nWifi", "Number of wifi STA devices", nWifi); + cmd.AddValue ("verbose", "Tell echo applications to log if true", verbose); + cmd.AddValue ("tracing", "Enable pcap tracing", tracing); + + cmd.Parse (argc,argv); +``` +### These are some basic configurations for where our wifi nodes will be allowed to travel +``` bash + // The underlying restriction of 18 is due to the grid position + // allocator's configuration; the grid layout will exceed the + // bounding box if more than 18 nodes are provided. + + if (nWifi > 18) + { + std::cout << "nWifi should be 18 or less; otherwise grid layout exceeds the bounding box" << std::endl; + return 1; + } + + if (verbose) + { + LogComponentEnable ("UdpEchoClientApplication", LOG_LEVEL_INFO); + LogComponentEnable ("UdpEchoServerApplication", LOG_LEVEL_INFO); + } + +``` +### Creating nodes for the machinery +As mentioned, we will have four nodes on the factory floor (mixer, kiln, stacking robots, and packer) connecting point-to-point to a router. The router will connect to the router at the offices. This means we have six total point-to-point nodes (four machines and two routers). An important question is how come we don't feed the point-to-point connections of each machine directly to the office router. The answer is that in reality, there are more than these four machines that are all in the PLC room. There is already a connection to the PLC room so connecting straight to the office router would actually double the amount of connections necessary. Node 1 will be our factory floor router and node 0 will be our office router. 
+ +Not only will we create the nodes but we will also dictate the specific point-to-point connections that we wish to model. +``` bash + NodeContainer p2pNodes; + p2pNodes.Create (6); + + PointToPointHelper pointToPoint; + pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps")); + pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms")); + + NetDeviceContainer p2pDevices; + p2pDevices = pointToPoint.Install (p2pNodes.Get(0), p2pNodes.Get(1)); + + NetDeviceContainer p2pDevices1; + p2pDevices1 = pointToPoint.Install (p2pNodes.Get(2), p2pNodes.Get(1)); + + NetDeviceContainer p2pDevices2; + p2pDevices2 = pointToPoint.Install (p2pNodes.Get(3), p2pNodes.Get(1)); + + NetDeviceContainer p2pDevices3; + p2pDevices3 = pointToPoint.Install (p2pNodes.Get(4), p2pNodes.Get(1)); + + NetDeviceContainer p2pDevices4; + p2pDevices4 = pointToPoint.Install (p2pNodes.Get(5), p2pNodes.Get(1)); + +``` +### Creating and assigning the wifi nodes +We are virutally doing the same process as before but now to the wifi nodes we will also assign the p2p node 0 to a new object called 'wifiApNode'. This will set our p2p node as a wifi access point, thereby linking our p2p network to our wifi network. +```bash +NodeContainer wifiStaNodes; + wifiStaNodes.Create (nWifi); + NodeContainer wifiApNode = p2pNodes.Get (0); + + YansWifiChannelHelper channel = YansWifiChannelHelper::Default (); + YansWifiPhyHelper phy; + phy.SetChannel (channel.Create ()); + + WifiHelper wifi; + wifi.SetRemoteStationManager ("ns3::AarfWifiManager"); + + WifiMacHelper mac; + Ssid ssid = Ssid ("ns-3-ssid"); + mac.SetType ("ns3::StaWifiMac", + "Ssid", SsidValue (ssid), + "ActiveProbing", BooleanValue (false)); + + NetDeviceContainer staDevices; + staDevices = wifi.Install (phy, mac, wifiStaNodes); + + mac.SetType ("ns3::ApWifiMac", + "Ssid", SsidValue (ssid)); + + NetDeviceContainer apDevices; + apDevices = wifi.Install (phy, mac, wifiApNode); +``` + +### Keeping our wifi nodes fixed +We will now install our mobility model to our wifi nodes. There are a couple of different methods we can choose from but we want our wifi nodes to remain fixed in place since they will simulate stationary computers in the office. 
+```bash + MobilityHelper mobility; + + mobility.SetPositionAllocator ("ns3::GridPositionAllocator", + "MinX", DoubleValue (0.0), + "MinY", DoubleValue (0.0), + "DeltaX", DoubleValue (5.0), + "DeltaY", DoubleValue (10.0), + "GridWidth", UintegerValue (3), + "LayoutType", StringValue ("RowFirst")); + + mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel"); + mobility.Install (wifiStaNodes); + + mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel"); + mobility.Install (wifiApNode); +``` +### Installing internet protocol rules +We will install the internet rules onto all our devices and we will set the IP addresses with our subnet mask +```bash + InternetStackHelper stack; + stack.Install (p2pNodes.Get(1)); + stack.Install (p2pNodes.Get(2)); + stack.Install (p2pNodes.Get(3)); + stack.Install (p2pNodes.Get(4)); + stack.Install (p2pNodes.Get(5)); + stack.Install (wifiApNode); + stack.Install (wifiStaNodes); + + Ipv4AddressHelper address; + + address.SetBase ("10.1.1.0", "255.255.255.0"); + Ipv4InterfaceContainer p2pInterfaces; + p2pInterfaces = address.Assign (p2pDevices); + + address.SetBase ("10.1.2.0", "255.255.255.0"); + Ipv4InterfaceContainer p2pInterfaces1; + p2pInterfaces1 = address.Assign (p2pDevices1); + + address.SetBase ("10.1.4.0", "255.255.255.0"); + Ipv4InterfaceContainer p2pInterfaces2; + p2pInterfaces2 = address.Assign (p2pDevices2); + + address.SetBase ("10.1.5.0", "255.255.255.0"); + Ipv4InterfaceContainer p2pInterfaces3; + p2pInterfaces3 = address.Assign (p2pDevices3); + + address.SetBase ("10.1.6.0", "255.255.255.0"); + Ipv4InterfaceContainer p2pInterfaces4; + p2pInterfaces4 = address.Assign (p2pDevices4); + + address.SetBase ("10.1.3.0", "255.255.255.0"); + address.Assign (staDevices); + address.Assign (apDevices); +``` +### Creating ports for our server node +We need to add ports to our p2p node 1, which is our server. I assigned 9-13 arbitrarily. 
+```bash + UdpEchoServerHelper echoServer (9); + UdpEchoServerHelper echoServer1 (10); + UdpEchoServerHelper echoServer2 (11); + UdpEchoServerHelper echoServer3 (12); + UdpEchoServerHelper echoServer4 (13); +``` +### Setting our simulation length and initiating echoes +We will set the length of our simulation to 14 seconds and we will echo specific clients to specific server ports +```bash + ApplicationContainer serverApps = echoServer.Install (p2pNodes.Get (1)); + serverApps.Start (Seconds (1.0)); + serverApps.Stop (Seconds (14.0)); + + UdpEchoClientHelper echoClient (p2pInterfaces.GetAddress (1), 9); + echoClient.SetAttribute ("MaxPackets", UintegerValue (1)); + echoClient.SetAttribute ("Interval", TimeValue (Seconds (1.0))); + echoClient.SetAttribute ("PacketSize", UintegerValue (1024)); + + UdpEchoClientHelper echoClient1 (p2pInterfaces1.GetAddress (1), 10); + echoClient1.SetAttribute ("MaxPackets", UintegerValue (1)); + echoClient1.SetAttribute ("Interval", TimeValue (Seconds (1.0))); + echoClient1.SetAttribute ("PacketSize", UintegerValue (1024)); + + UdpEchoClientHelper echoClient2 (p2pInterfaces2.GetAddress (1), 11); + echoClient2.SetAttribute ("MaxPackets", UintegerValue (1)); + echoClient2.SetAttribute ("Interval", TimeValue (Seconds (1.0))); + echoClient2.SetAttribute ("PacketSize", UintegerValue (1024)); + + UdpEchoClientHelper echoClient3 (p2pInterfaces3.GetAddress (1), 12); + echoClient3.SetAttribute ("MaxPackets", UintegerValue (1)); + echoClient3.SetAttribute ("Interval", TimeValue (Seconds (1.0))); + echoClient3.SetAttribute ("PacketSize", UintegerValue (1024)); + + UdpEchoClientHelper echoClient4 (p2pInterfaces4.GetAddress (1), 13); + echoClient4.SetAttribute ("MaxPackets", UintegerValue (1)); + echoClient4.SetAttribute ("Interval", TimeValue (Seconds (1.0))); + echoClient4.SetAttribute ("PacketSize", UintegerValue (1024)); +``` +### Setting our echoes to specific times +We will be very deliberate about our echoes and send them one after another all the way through our 14 seconds of simulation. +```bash + ApplicationContainer clientApps9 = + echoClient.Install (wifiStaNodes.Get (nWifi - 1)); + clientApps9.Start (Seconds (1.0)); + clientApps9.Stop (Seconds (14.0)); + + ApplicationContainer clientApps = echoClient.Install (p2pNodes.Get (0)); + clientApps.Start (Seconds (2.0)); + clientApps.Stop (Seconds (6.0)); + + ApplicationContainer clientApps1 = echoClient1.Install (p2pNodes.Get (2)); + clientApps1.Start (Seconds (6.0)); + clientApps1.Stop (Seconds (8.0)); + + ApplicationContainer clientApps2 = echoClient2.Install (p2pNodes.Get (3)); + clientApps2.Start (Seconds (8.0)); + clientApps2.Stop (Seconds (10.0)); + + ApplicationContainer clientApps3 = echoClient3.Install (p2pNodes.Get (4)); + clientApps3.Start (Seconds (10.0)); + clientApps3.Stop (Seconds (12.0)); + + ApplicationContainer clientApps4 = echoClient4.Install (p2pNodes.Get (5)); + clientApps4.Start (Seconds (12.0)); + clientApps4.Stop (Seconds (14.0)); + + Ipv4GlobalRoutingHelper::PopulateRoutingTables (); + + Simulator::Stop (Seconds (14.0)); + if (tracing == true) + { + pointToPoint.EnablePcapAll ("third"); + phy.EnablePcap ("third", apDevices.Get (0)); + } +``` +### Positioning our nodes in space +We will position our nodes exactly where we put them in our cad model from earlier. I made them an order of magnitude smaller (300 -->30.0) because NetAnim was not happy with the larger magnitude. 
However, the proportion is the same and we can visually confirm that they are correctly positioned when we compare them to where we have them in our AutoCad model. +```bash +AnimationInterface anim ("wireless-Anim-file.xml"); + + anim.SetConstantPosition (p2pNodes.Get (1), 15.95, 19.15); + anim.SetConstantPosition (p2pNodes.Get (2), 25.827, 18.024); + anim.SetConstantPosition (p2pNodes.Get (3), 31.1, 16.65); + anim.SetConstantPosition (p2pNodes.Get (4), 11.35, 13.9); + anim.SetConstantPosition (p2pNodes.Get (5), 21.95, 5); + + anim.SetConstantPosition (wifiApNode.Get (0), 35.95, 11.65); + anim.SetConstantPosition (wifiStaNodes.Get (0), 36.95, 16.15); + anim.SetConstantPosition (wifiStaNodes.Get (1), 36.95, 17.4); + anim.SetConstantPosition (wifiStaNodes.Get (2), 36.95, 11.95); + + anim.UpdateNodeColor (wifiApNode.Get (0), 0, 255, 120); // RGB format + anim.UpdateNodeColor (p2pNodes.Get (1), 0, 255, 120); + anim.UpdateNodeColor (wifiStaNodes.Get (0), 0, 0, 255); + anim.UpdateNodeColor (wifiStaNodes.Get (1), 0, 0, 255); + anim.UpdateNodeColor (wifiStaNodes.Get (2), 0, 0, 255); + + Simulator::Run (); + Simulator::Destroy (); + return 0; +} +``` +### Run your model +You shall now be able to run your network model using the ./waf file. This takes a bit of practice to get everything in the right directories but ns-3 has documentation to help. +```bash +At time +1s client sent 1024 bytes to 10.1.1.2 port 9 +At time +1.00586s server received 1024 bytes from 10.1.3.3 port 49153 +At time +1.00586s server sent 1024 bytes to 10.1.3.3 port 49153 +At time +1.02055s client received 1024 bytes from 10.1.1.2 port 9 +At time +2s client sent 1024 bytes to 10.1.1.2 port 9 +At time +2.00369s server received 1024 bytes from 10.1.1.1 port 49153 +At time +2.00369s server sent 1024 bytes to 10.1.1.1 port 49153 +At time +2.00737s client received 1024 bytes from 10.1.1.2 port 9 +At time +6s client sent 1024 bytes to 10.1.2.2 port 10 +At time +8s client sent 1024 bytes to 10.1.4.2 port 11 +At time +10s client sent 1024 bytes to 10.1.5.2 port 12 +At time +12s client sent 1024 bytes to 10.1.6.2 port 13 +``` +### Visualizing output +Now that we have seen our output, we want to make visual sense of them. Go to your netanim directory and run ./NetAnim to start up NetAnim. Go to the file name you specified in your earlier script. Mine is 'Wireless-Anim-file.xml'. Now we will get the simulation that we saw at the beginning of this blog post: +
+*[Video: NetAnim simulation of the network topology]*
+We can see that we send various pings back and forth between p2p nodes and the wifi nodes in the offices. This signifies that we indeed do have a path of communication for our industrial machines to send information to our desktop machines in the office. + +### Conculsion +This was just a way for me to dip my toes into implementations of networks in and industrial context. In actuality, there are much more complex communication protocols happeining in real industrial plants. Please reach out if you have any questions or corrections. Keep building. diff --git a/_posts/import gymnasium as gym.py b/_posts/import gymnasium as gym.py new file mode 100644 index 0000000000000..2d2679e126ec1 --- /dev/null +++ b/_posts/import gymnasium as gym.py @@ -0,0 +1,22 @@ +import gymnasium as gym +env = gym.make("LunarLander-v2", render_mode="human") +observation, info = env.reset() + +for _ in range(1000): + action = env.action_space.sample() # agent policy that uses the observation and info + observation, reward, terminated, truncated, info = env.step(action) + + if terminated or truncated: + observation, info = env.reset() + +env.close() +# import gymnasium as gym +# env = gym.make("CartPole-v1") + +# observation, info = env.reset() +# for _ in range(1000): +# action = env.action_space.sample() # agent policy that uses the observation and info +# observation, reward, terminated, truncated, info = env.step(action) + +# if terminated or truncated: +# observation, info = env.reset() \ No newline at end of file diff --git a/about.md b/about.md index bc21f5731bf4b..c3161cf658bae 100644 --- a/about.md +++ b/about.md @@ -4,12 +4,10 @@ title: About permalink: /about/ --- -Some information about you! - ### More Information +Hi! I'm Diego. Welcome to my engineering blog. Here, you'll find some projects I worked on during my free time in college. Back when the majority of my time was not spent under NDAs and sharing was easy. Truthfully, this site is a bit outdated :( but for me it is a nostalgic trip to the past. I am currently developing the new version, which will be much nicer and will have additional content. Can't wait to share that with you! -A place to include any other types of information that you'd like to include about yourself. 
### Contact me -[email@domain.com](mailto:email@domain.com) \ No newline at end of file +[diegoprestamo@gmail.com](mailto:diegoprestamo@gmail.com) diff --git a/images/Screen Recording 2023-05-29 at 10.10.13 PM.gif b/images/Screen Recording 2023-05-29 at 10.10.13 PM.gif new file mode 100644 index 0000000000000..923ebe5ae1ddf Binary files /dev/null and b/images/Screen Recording 2023-05-29 at 10.10.13 PM.gif differ diff --git a/images/Screenshot 2024-01-09 at 4.55.34 AM.png b/images/Screenshot 2024-01-09 at 4.55.34 AM.png new file mode 100644 index 0000000000000..3db08df4b13f4 Binary files /dev/null and b/images/Screenshot 2024-01-09 at 4.55.34 AM.png differ diff --git a/images/crownNetAnim.mp4 b/images/crownNetAnim.mp4 new file mode 100644 index 0000000000000..fd63455c8253e Binary files /dev/null and b/images/crownNetAnim.mp4 differ diff --git a/images/gamma_latex.png b/images/gamma_latex.png new file mode 100644 index 0000000000000..21f437e29e743 Binary files /dev/null and b/images/gamma_latex.png differ diff --git a/images/global_linear_velocity.png b/images/global_linear_velocity.png new file mode 100644 index 0000000000000..d1d226b61607c Binary files /dev/null and b/images/global_linear_velocity.png differ diff --git a/images/local_global_velocity.png b/images/local_global_velocity.png new file mode 100644 index 0000000000000..083eb9cd89293 Binary files /dev/null and b/images/local_global_velocity.png differ diff --git a/images/local_linear_velocity.png b/images/local_linear_velocity.png new file mode 100644 index 0000000000000..a014cbad67c67 Binary files /dev/null and b/images/local_linear_velocity.png differ diff --git a/images/male_avatar.png b/images/male_avatar.png new file mode 100644 index 0000000000000..b34844820b5b8 Binary files /dev/null and b/images/male_avatar.png differ diff --git a/images/omega_latex.png b/images/omega_latex.png new file mode 100644 index 0000000000000..fa2d330537d92 Binary files /dev/null and b/images/omega_latex.png differ diff --git a/images/theta_dot_latex.png b/images/theta_dot_latex.png new file mode 100644 index 0000000000000..12f9f7bbf1633 Binary files /dev/null and b/images/theta_dot_latex.png differ diff --git a/images/theta_latex.png b/images/theta_latex.png new file mode 100644 index 0000000000000..7c92e6ccd1869 Binary files /dev/null and b/images/theta_latex.png differ diff --git a/style.scss b/style.scss index 3915a90244691..8f8482cc92300 100644 --- a/style.scss +++ b/style.scss @@ -287,3 +287,296 @@ footer { // ... 
Otherwise it really bloats up the top of the CSS file and makes it difficult to find the start @import "highlights"; @import "svg-icons"; + + + +// --- +// --- + +// // +// // IMPORTS +// // + +// @import "reset"; +// @import "variables"; +// // Syntax highlighting @import is at the bottom of this file + +// /**************/ +// /* BASE RULES */ +// /**************/ + +// html { +// font-size: 100%; +// } + +// body { +// background: #030c1f; // bluish gray +// ; +// font: 18px/1.4 $helvetica; +// color: #F5F5F5; +// } + +// .container { +// margin: 0 auto; +// max-width: 740px; +// padding: 0 10px; +// width: 100%; +// } + +// h1, h2, h3, h4, h5, h6 { +// font-family: $helveticaNeue; +// color: $white; +// font-weight: bold; + +// line-height: 1.7; +// margin: 1em 0 15px; +// padding: 0; + +// @include mobile { +// line-height: 1.4; +// } +// } + +// h1 { +// font-size: 30px; +// a { +// color: inherit; +// } +// } + +// h2 { +// font-size: 24px; +// } + +// h3 { +// font-size: 20px; +// } + +// h4 { +// font-size: 18px; +// color: #f5f5f5; +// } + +// p { +// margin: 15px 0; +// } + +// a { +// color: $blue; +// text-decoration: none; +// cursor: pointer; +// &:hover, &:active { +// color: $blue; +// } +// } + +// ul, ol { +// margin: 15px 0; +// padding-left: 30px; +// } + +// ul { +// list-style-type: disc; +// } + +// ol { +// list-style-type: decimal; +// } + +// ol ul, ul ol, ul ul, ol ol { +// margin: 0; +// } + +// ul ul, ol ul { +// list-style-type: circle; +// } + +// em, i { +// font-style: italic; +// } + +// strong, b { +// font-weight: bold; +// } + +// img { +// max-width: 100%; +// } + +// // Fixes images in popup boxes from Google Translate +// .gmnoprint img { +// max-width: none; +// } + +// .date { +// font-style: italic; +// color: $gray; +// } + +// // Specify the color of the selection +// ::-moz-selection { +// color: black; +// background: $lightGray; +// } +// ::selection { +// color: $black; +// background: $lightGray; +// } + +// // Nicolas Gallagher's micro clearfix hack +// // http://nicolasgallagher.com/micro-clearfix-hack/ +// .clearfix:before, +// .clearfix:after { +// content: " "; +// display: table; +// } + +// .clearfix:after { +// clear: both; +// } + +// /*********************/ +// /* LAYOUT / SECTIONS */ +// /*********************/ + +// // +// // .masthead +// // + +// .wrapper-masthead { +// margin-bottom: 50px; +// } + +// .masthead { +// padding: 20px 0; +// border-bottom: 1px solid #f5f5f5; + +// @include mobile { +// text-align: center; +// } +// } + +// .site-avatar { +// float: left; +// width: 70px; +// height: 70px; +// margin-right: 15px; + +// @include mobile { +// float: none; +// display: block; +// margin: 0 auto; +// } + +// img { +// border-radius: 5px; +// } +// } + +// .site-info { +// float: left; + +// @include mobile { +// float: none; +// display: block; +// margin: 0 auto; +// } +// } + +// .site-name { +// margin: 0; +// color: $white; +// cursor: pointer; +// font-family: $helveticaNeue; +// font-weight: 300; +// font-size: 28px; +// letter-spacing: 1px; +// } + +// .site-description { +// margin: -5px 0 0 0; +// color: $gray; +// font-size: 16px; + +// @include mobile { +// margin: 3px 0; +// } +// } + +// nav { +// float: right; +// margin-top: 23px; // @TODO: Vertically middle align +// font-family: $helveticaNeue; +// font-size: 18px; + +// @include mobile { +// float: none; +// margin-top: 9px; +// display: block; +// font-size: 16px; +// } + +// a { +// margin-left: 20px; +// color: $darkGray; +// text-align: right; 
+// font-weight: 300; +// letter-spacing: 1px; + +// @include mobile { +// margin: 0 10px; +// color: $blue; +// } +// } +// } + +// // +// // .main +// // + +// .posts > .post { +// padding-bottom: 2em; +// border-bottom: 1px solid $lightGray; +// } + +// .posts > .post:last-child { +// padding-bottom: 1em; +// border-bottom: none; +// } + +// .post { +// blockquote { +// margin: 1.8em .8em; +// border-left: 2px solid $black; //recent changes +// padding: 0.1em 1em; +// color: $black; +// font-size: 22px; +// font-style: italic; +// } + +// .comments { +// margin-top: 10px; +// } + +// .read-more { +// text-transform: uppercase; +// font-size: 15px; +// } +// } + +// .wrapper-footer { +// margin-top: 50px; +// border-top: 1px solid #ddd; +// border-bottom: 1px solid #ddd; +// background-color: $lightGray; +// } + +// footer { +// padding: 20px 0; +// text-align: center; +// } + +// // Settled on moving the import of syntax highlighting to the bottom of the CSS +// // ... Otherwise it really bloats up the top of the CSS file and makes it difficult to find the start +// @import "highlights"; +// @import "svg-icons"; diff --git a/tempCodeRunnerFile.python b/tempCodeRunnerFile.python new file mode 100644 index 0000000000000..2d2679e126ec1 --- /dev/null +++ b/tempCodeRunnerFile.python @@ -0,0 +1,22 @@ +import gymnasium as gym +env = gym.make("LunarLander-v2", render_mode="human") +observation, info = env.reset() + +for _ in range(1000): + action = env.action_space.sample() # agent policy that uses the observation and info + observation, reward, terminated, truncated, info = env.step(action) + + if terminated or truncated: + observation, info = env.reset() + +env.close() +# import gymnasium as gym +# env = gym.make("CartPole-v1") + +# observation, info = env.reset() +# for _ in range(1000): +# action = env.action_space.sample() # agent policy that uses the observation and info +# observation, reward, terminated, truncated, info = env.step(action) + +# if terminated or truncated: +# observation, info = env.reset() \ No newline at end of file