Homepage for the Autonomous Robotics Club wiki page
The Autonomous Robotics Club of Purdue was created to grow the skills and abilities of its members through design projects centered around advanced autonomous robotics systems. It provides hands-on, real-world experience to interdisciplinary teams using industry-standard tools and practices.
Through ARC, members can meet like-minded individuals, solve real-world robotics problems, gain experience with industry-standard tools, and build their career portfolios. We welcome new members of any skill level and provide an environment for members to learn about topics such as state estimation, control algorithms, and machine learning. New members have opportunities for leadership early on and can gain in-depth experience working on projects that interest them. ARC projects typically include a mix of hardware and software design.
Currently, ARC supports five projects:
Rocket League: Teaching a system of autonomous, scaled vehicles to play head-to-head in a game of high speed soccer.
Wizard Chess: Creating a giant, self-playing chess board inspired by the Harry Potter universe.
Piano Hand: Creating a biologically-inspired manipulator capable of playing the piano.
Drone Delivery: Developing a UAV platform capable of autonomously delivering small packages.
DogCopter: A groundbreaking fusion of a drone and a robotic dog, featuring autonomous navigation and camera/lidar vision.
More information about all of these projects can be found on their project pages.
Each semester, students are encouraged to pitch ideas for new projects that match the interests of the club. Purdue ARC prides itself on being a self-guided exploration of state-of-the-art problems within autonomous robotics.
To get in contact or get involved, check out the links at the bottom of the home page.
ARC currently holds shop space at BIDC. Students can use this space for manufacturing and assembling parts, hosting project meetings, and storing equipment.
Anyone looking to use the space must complete the required trainings. To do this, join the BIDC group on Passport and obtain the following badges:
Additional badges are required for manufacturing tasks such as using the drill press, vertical bandsaw, etc.
BIDC only grants ARC a limited number of positions for people with card swipe access. Because of this, we try to equally divide those with access across each project.
To have a meeting at BIDC, you need to ensure at least one of the attending members will have swipe access to let everyone else in.
The current swipe list is:
Rocket League
Harrison McCarty (mccarth at purdue.edu)
James Baxter (baxter26 at purdue.edu)
Robert Ketler (rketler at purdue.edu)
ARC maintains two 3D printers within the shop space:
Tony: Monoprice Maker Select 2
Chelsea: Creality CR-6 SE
If you have an interest in using either printer, reach out in the "3D printing" channel of the ARC Discord.
The recommended printer profiles are:
Chelsea Settings:
Tony Settings:
TODO
Justin Lee (lee3228 at purdue.edu)
Wizard Chess
Ilina Adhikari (iadhikar at purdue.edu)
Vinitha Marupeddi (vmaruped at purdue.edu)
Emma Block (block21 at purdue.edu)
Robot Arm
Raghava Uppuluri (ruppulur at purdue.edu)
Bronson Yen (yen22 at purdue.edu)
Sara Swanlund (sswanlun at purdue.edu)
Drone Delivery
Sooraj Chetput Venkataraghavan (schetput at purdue.edu)
Cade Jarrett (jarrettc at purdue.edu)
Piano Hand
Revanth Senthilkumaran (senthilr at purdue.edu)
Rugved Dikay (rdikay at purdue.edu)
Manas Paranjape (mparanja at purdue.edu)
Layer height: 0.15 mm
Top thickness: 0.8 mm
Bottom thickness: 0.8 mm
Bottom layers: 6
Infill density: 20%
Print speed: 35.0 mm/s
Cooling: 75%
Raft supports are optional
Print Speed: 50.0 mm/s
Infill density: 15%
Wall Thickness: 1.05 mm
Top/Bottom Thickness: 0.72 mm
Build Plate Temperature: 60 C
Layer Height: 0.15 mm
Initial Layer Height: 0.2625 mm
Top/Bottom Layer Height: 0.8 mm
Minimum Layer Time: 5s
Minimum Speed: 10 mm/s
Fan Speed: 100%
Cooling: 100%
Goals
The main goal of perception is to take a photo of sheet music and be able to accurately turn it into note MIDI data.
Perception's main goals for Fall 2024 are:
DogCopter optimization is in charge of determining the most efficient combination of components and sizes in order to maximize flight time. This is done by implementing mathematical models in code and finding the optimal parts to maximize battery life and performance.
Optimization documentation: https://docs.google.com/document/d/1oxhi7ctNvE71Xk5GwWfbc6gG7TPMnhEUGtUFoMwFzwU/edit?usp=sharing
Description and tools used for the Simulation sub-team of Piano Hand
Piano Hand simulation focuses on getting a 3D ROS-based simulation set up with Piano Hand, to help eliminate the dependency on the other software and hardware sub-teams for testing purposes.
Simulation progress in the past can be split into two major phases:
non-ROS: Attempting to get a basic simulator set up without having to set up ROS for Piano Hand testing purposes. This primarily consisted of using Webots, a robot simulation package.
ROS: After realizing the importance of ROS for full system integration, Gazebo was chosen to implement and test the simulation.
Currently in the ROS phase of development, the simulation sub-team focuses on ...
The software team will develop all the software to make DogCopter function autonomously. It is split into three teams.
Controls software works on establishing communication between the different control systems and the motors. We will be working with Raspberry Pi 5s, PX4 firmware, and MIT Cheetah motor controllers. There will be some intersection between controls and electronics. We are currently working on communicating with the motors and PX4 from the Raspberry Pi.
Raspberry Pi 5:
PX4:
MIT Cheetah Controllers:
...
...
...
The vision team will be taking input from the lidar and camera and using it to navigate DogCopter. Currently, we are working on processing input from our lidar. Unitree 4D Lidar: https://shop.unitree.com/products/unitree-4d-lidar-l1
Simulations will work on testing the DogCopter in a virtual environment. The goal is to first create a simulated environment to test in, then to create a map of that simulated environment through our vision method. Gazebo: https://gazebosim.org/docs/latest/getstarted/
The algorithms team falls in between the Perception Team and the Electronics Team. They take the MIDI note data from the Perception Team and then work on converting it into finger/note pairings for the Electronics team to use to play the piano.
The main goals for Algorithms this semester are:
Successfully parse XML input from the Perception Team
Communicate with Electronics Team to determine proper output format
Successfully implement Viterbi Algorithm and output data correctly
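As a rough illustration of the kind of Viterbi pass described above, here is a minimal sketch in Python. The cost model and finger heuristics are placeholders, not the team's actual implementation.

```python
# Minimal Viterbi sketch for assigning fingers (1-5) to a melody line.
# The cost function below is an illustrative placeholder, not tuned values.

NUM_FINGERS = 5

def transition_cost(prev_finger, finger, interval):
    """Penalty for moving from prev_finger to finger across a pitch interval (semitones)."""
    stretch = abs(interval) - abs(finger - prev_finger) * 2  # rough hand-span heuristic
    repeat_penalty = 2 if finger == prev_finger and interval != 0 else 0
    return max(stretch, 0) + repeat_penalty

def assign_fingers(midi_notes):
    """midi_notes: list of MIDI note numbers for a single hand, in playing order."""
    n = len(midi_notes)
    cost = [[0.0] * NUM_FINGERS for _ in range(n)]   # cost[i][f]: best cost ending at note i on finger f+1
    back = [[0] * NUM_FINGERS for _ in range(n)]     # backpointers for the traceback
    for i in range(1, n):
        interval = midi_notes[i] - midi_notes[i - 1]
        for f in range(NUM_FINGERS):
            candidates = [cost[i - 1][p] + transition_cost(p + 1, f + 1, interval)
                          for p in range(NUM_FINGERS)]
            back[i][f] = min(range(NUM_FINGERS), key=lambda p: candidates[p])
            cost[i][f] = candidates[back[i][f]]
    # Trace back the lowest-cost sequence of finger assignments
    best = min(range(NUM_FINGERS), key=lambda f: cost[n - 1][f])
    fingers = [best]
    for i in range(n - 1, 0, -1):
        best = back[i][best]
        fingers.append(best)
    return [f + 1 for f in reversed(fingers)]

if __name__ == "__main__":
    print(assign_fingers([60, 62, 64, 65, 67]))  # prints one low-cost fingering for a C-major run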
Our goal for this project is to create a fully autonomous, life-sized chess game that people will be able to play using only voice commands. This project was inspired by the Wizard's Chess game in Harry Potter and the Philosopher's Stone. We will be exploring different hardware and software concepts, including engineering design, CAD, manufacturing, computer vision, and trajectory planning.
The electrical subteam works on the power and control systems for the robot. In past semesters we have selected a number of parts which we believe will work for a final robot, and are now working on getting these individual parts tested and connected to a single system. This involves both hardware and software. We recommend being familiar with the terminology below.
Aerospace will design the mechanism that folds up the propellers for storage while in walking mode, ensure that the motors have sufficient thrust and battery life, and design how the flight components integrate with the quadruped. We are currently working on a thrust-test jig to ensure our motors have sufficient thrust.
Dogcopter uses Fusion 360 for creating CAD models. To get started with Fusion 360, sign up for an educational license at this link: https://www.autodesk.com/products/fusion-360/education
https://www.openstreetmap.org/#map=5/38.007/-95.844
Converted the OpenStreetMap data into a 2D map using rasterization techniques.
Initially planned on using octrees, but later decided not to due to the vast amount of memory they would consume.
Has a separate page, which describes the stack in much greater detail. The following is the point cloud generated from OpenStreetMap.
The octree that was generated.
The occupancy grid that was generated.
Migrate the code for Realsense camera from Python → C++
Able to get depth matrix
Can convert to occupancy matrix
Implementing the D* and A* algorithms.
There are issues with how the paths are generated: sometimes diagonal paths are taken over straight ones, even when a straight path is possible.
To be improved in Spring 2023.
OpenCV integration for detecting obstacles at a greater depth.
The output from the A* algorithm we developed:
An example of an unideal path:
Established mission objectives, such as flight time and payload weight.
Picked a drone kit; selected batteries, motor controllers, motors, and propellers.
A detailed document describing all the design choices that were made: https://docs.google.com/document/d/1YcgpvD2AsxBpSHcqvRHN5PRzlyk0kKi2nl-Srrc2DXE/edit?usp=sharing
Check the main page for details on what hardware components were selected.
Repaired the old drone, and interfaced a computer with it.
Simulated pre-programmed flight paths using Gazebo and QGroundControl.
Worked on importing OpenStreetMap data into Gazebo.
Over the course of the semester, the team researched several important topics and developed whitepapers and planning documents.
Deegan Osmundson: A* based navigation algorithms. https://drive.google.com/file/d/1XcB0w0IvobgjAYehDYUqe3qoPYX0miRA/view
Seth Deegan: Drone Delivery Tech Stack https://docs.google.com/document/d/1ekadDu0ogtgF6m-fK_AaudN6HzlPpPMSJ_rsvxW74-M/edit#heading=h.fliy5digh3xk
Sooraj Chetput: Steps in implementing the software stack for Drone Delivery. https://drive.google.com/file/d/1RyhzLklxlVaUF0ReHZfJ17z1Qm90ic8P/view
Jake Harrelson and several authors: Design and Implementation of Unmanned Aerial Vehicle for Local Food Delivery. https://docs.google.com/document/d/1YcgpvD2AsxBpSHcqvRHN5PRzlyk0kKi2nl-Srrc2DXE/view
Project Managers: Sooraj Chetput
Obstacle Avoidance Team Members: Guna Avula (Lead), Deegan Osmundson, Chris Qiu, Ethan Baird, Mouli Sangita
Pre-flight planning: Seth Deegan (Lead), Vincent Wang
Research: Sreevickrant Sreekanth (Lead), Vignesh Charapalli
Hardware: Jacob Harrelson (Lead), Evan Zher
Interfacing: Sooraj Chetput (Lead), Atharva Bhide
We are currently working on developing all 32 pieces for the game, as well as manufacturing the chess board ourselves. Navigate to our hardware docs to understand where we are in terms of our hardware progress.
Battery Team
The batteries team has been researching what types of batteries we need to power the robot, as well as how to deliver that power. Their mission is to provide enough power for a full game of chess to run without needing any recharges. Currently, the upper bound they have set for one full game is 2 hours.
CAD
The CAD team is utilizing Fusion 360 to create a model of the basic chess piece. They are also our go-to team for laser cutting, 3D printing, CNCing, and general manufacturing, especially when it comes to needing certain files.
We will be working on vision using AprilTags to determine the position of each piece on the board on an x,y coordinate plane, and on voice inputs to direct the pieces where to go. This will all be controlled by an overarching Raspberry Pi 4 controller, which intakes voice commands, determines valid moves, and calculates the correct trajectory for each of the pieces. Navigate to our software docs to get a deeper dive into these concepts.
Computer Vision
This team is tasked with figuring out how to capture the position of all pieces on the board at all times. The current design is to use an overhead camera that encompasses the whole board, with AprilTags on each robot and on the four corners of the board. With this information, we will be able to digitally map the pieces on an xy-coordinate plane and perform the trajectory and pathing calculations needed to move the correct chess pieces to the correct spots.
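A minimal sketch of this idea, assuming the pupil-apriltags Python bindings and OpenCV. The corner tag IDs, board size in squares, and camera index are illustrative assumptions, not the team's actual configuration.

```python
# Rough sketch: detect AprilTags from an overhead camera, build a homography from the
# four corner tags, and map each piece's tag center onto board coordinates.
import cv2
import numpy as np
from pupil_apriltags import Detector

# Assumed tag IDs for the board corners, mapped to board coordinates (in squares)
CORNER_IDS = {0: (0.0, 0.0), 1: (8.0, 0.0), 2: (8.0, 8.0), 3: (0.0, 8.0)}

detector = Detector(families="tag36h11")
frame = cv2.VideoCapture(0).read()[1]                # grab one frame from the overhead camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
detections = detector.detect(gray)

# Homography: pixel positions of the four corner tags -> board coordinates
img_pts, board_pts = [], []
for det in detections:
    if det.tag_id in CORNER_IDS:
        img_pts.append(det.center)
        board_pts.append(CORNER_IDS[det.tag_id])
assert len(img_pts) == 4, "all four corner tags must be visible"
H, _ = cv2.findHomography(np.float32(img_pts), np.float32(board_pts))

# Map every non-corner tag (a piece) onto the board plane
for det in detections:
    if det.tag_id not in CORNER_IDS:
        px = np.float32([[det.center]])              # shape (1, 1, 2) for perspectiveTransform
        x, y = cv2.perspectiveTransform(px, H)[0, 0]
        print(f"piece tag {det.tag_id} is near square ({int(x)}, {int(y)})")
```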
Voice Recognition
This team is tasked with implementing full voice recognition capabilities for the chess game. They are looking into different voice recognition libraries to find the best one to build the needed software with. The voice recognition software will take different commands (e.g., "Knight to e4") and use the trajectory planner to convert them to a point on the plane for the piece to move to.
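For illustration, a tiny sketch of how a recognized phrase could be turned into board coordinates for the trajectory planner; the grammar and piece list here are assumptions.

```python
# Sketch: parse a spoken command like "knight to e4" into a target square.
import re

PIECES = {"pawn", "knight", "bishop", "rook", "queen", "king"}

def parse_command(text):
    match = re.match(r"^\s*(\w+)\s+to\s+([a-h])\s*([1-8])\s*$", text.strip().lower())
    if not match or match.group(1) not in PIECES:
        return None
    piece, file_letter, rank = match.groups()
    # Convert "e4" into zero-indexed (x, y) grid coordinates
    x = ord(file_letter) - ord("a")
    y = int(rank) - 1
    return piece, (x, y)

print(parse_command("Knight to e4"))  # ('knight', (4, 3))
```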
This spring was our first semester on this project!
Designed the triangular chess pieces
Each robot is about 9.75" × 7.75" and is laser-cut from 0.5" plywood; each square on the field is 1.5' × 1.5'.
Put together prototype with motor movement
{{< drive-player "18uXqdxGblfsNUdoEI2qEVPu5-sYCvQ7a/preview" >}}
Wrote chess algorithm using Python
Simulated robot movement in ROS
We are planning on creating low-poly structures for each of the chess pieces and combining them with LED indicators to distinguish between the different pieces. We will also be looking into creating custom PCBs for each of the robots.
ROS stands for Robot Operating System. It is an industry standard tool for creating autonomous robots. Essentially, it is a framework to create a mesh of nodes that work together to control your robot. It has an API for both C++ and Python. Through it, you have a common language to define messages that are sent from one node to another (for example a path planning node can send waypoint messages to a low level control node). The beauty of it is that all the nodes can be modular and developed independently if you have a well defined interface of messages.
Another excellent reason to use ROS is the 5000+ existing packages that can act as 'plug and play' additions to your existing network. If you have a new sensor you're looking to try, someone has probably already written a ROS driver for it. Tasks that many robots need to do, such as mapping, perception, and state estimation, probably have multiple packages that you can experiment with so you can pick the best one for your specific application. There are also pre-defined messages for standard things like sensor data and control commands. For these reasons it can be really useful for jump-starting a new project.
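To make the node/message idea concrete, here is a minimal, hypothetical ROS 1 (Noetic) node in Python that publishes waypoint messages a low-level control node could subscribe to; the topic name and message type are illustrative choices, not part of any specific ARC project.

```python
#!/usr/bin/env python3
# Minimal ROS 1 node that publishes waypoints; topic name is just an example.
import rospy
from geometry_msgs.msg import PointStamped

def main():
    rospy.init_node("path_planner")
    pub = rospy.Publisher("waypoint", PointStamped, queue_size=10)
    rate = rospy.Rate(1)  # publish at 1 Hz
    while not rospy.is_shutdown():
        msg = PointStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "map"
        msg.point.x, msg.point.y = 1.0, 2.0   # next waypoint for the controller
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```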
Through this club, you will be learning much more about ROS. The snake_tutorial package will walk you through creating a few simple nodes that can be chained together to control a snake to reach a goal. You will also have the opportunity to expand on the very basic controller given to you in an open ended challenge to create the highest performing AI for the snake.
A downside of ROS is that it works really well on Ubuntu, but isn’t well supported on other operating systems. This is why we must set up a very specific development environment for the club.
For more information on ROS, check out their wiki: http://wiki.ros.org/ROS/Introduction
A development environment (DE) is a suite of tools used to streamline writing, running, and testing code.
To simplify the installation and setup process for using ROS, ARC has created a standard DE that we recommend for all members.
ARC's DE relies on a program known as Docker. Docker virtualizes operating systems and dependencies to ensure your software can run on any device you choose to use.
ARC used to fully support another DE using Construct. If Docker fails to work on your machine, you can explore this method as an alternative.
If you have ever worked with a virtual machine, it's the same concept but avoids a bunch of unnecessary programs and overhead.
'Images' in Docker can be thought of as blueprints for a computer to be virtualized. ARC has created a generic image with all the required dependencies for ROS. When working on a ROS project, you will run a 'container', which is just a single instance of ARC's image. All of this is managed by boilerplate scripts that you just need to include in your project.
To learn more about how Docker works internally, check their documentation.
ARC's image uses Ubuntu 20.04 with ROS Noetic installed. It also has a few other handy tools, such as zsh.
The rest of this section will detail the setup process for ARC's DE.
Follow this installation guide to get Docker for your specific OS.
See the below instructions depending on what operating system you are running.
Windows
This guide was tested on Windows 10 and 11; if you are using a prior version, you will need to upgrade or consider dual-booting.
To use ARC's DE on Windows, you will need WSL2. We have a separate tutorial for this, which you can follow here.
You then need to set up X forwarding. This allows you to display GUIs (graphical user interfaces) from your WSL2 instance on your Windows machine. Again, we have a separate tutorial for this, which you can follow here.
Mac
Presently, this guide has not been fully tested on a Mac system. In theory it should work, since there is a version of Docker for Mac, but some additional work is required to get all features to function. If you're interested in running this on Mac, understand that some experimentation may be required.
You need to set up X forwarding. This allows you to display GUIs (graphical user interfaces) from your Docker instance on your machine. We have a separate tutorial for this, which you can follow here.
Linux
This guide has been tested to work on Ubuntu 18.04 and 20.04. If you have a different distribution, it should also work without issue.
To ensure that everything works, follow the process below.
You don't have to understand what all of these commands do yet; this will be explained in more depth in future sections. If any of these commands return an error, take a moment to try and independently debug. If after 5 minutes you have no luck, reach out in the comments and somebody can assist.
Move to your home directory:
```bash
cd
```
Set up a catkin workspace:
```bash
mkdir -p catkin_ws/src
```
Move into the source code folder:
```bash
cd catkin_ws/src
```
Clone the ARC tutorials package:
```bash
git clone git@github.com:purdue-arc/arc_tutorials.git
```
Build the docker image:
```bash
./arc_tutorials/docker/docker-build.sh
```
Run the docker container:
```bash
./arc_tutorials/docker/docker-run.sh
```
Your terminal should now show a new prompt indicating that you are within the docker container. This is a new environment where all of the ROS dependencies have been installed.
Again move into the catkin workspace:
```bash
cd catkin_ws
```
Build the arc_tutorials package:
```bash
catkin build
```
Update your environment with the newest build:
```bash
source devel/setup.bash
```
Lastly, launch the snake game:
```bash
roslaunch snakesim snakesim.launch
```
Upon running this, two windows should appear:
If these windows don't appear, but every other command ran fine, then X forwarding is likely not working. Review the X forwarding tutorial and if nothing works, then reach out in the comments.
If they did appear, however, then your dev environment is fully working!
You now have a basic idea of what ROS is and can run it on your machine. It's time to learn the basics of what it offers.
The environment you tested in the last section will serve as a simple platform to study ROS.
Check out the video we made to introduce this section of the tutorial.
After watching the above video, you can move on to the walkthrough.
Congrats! You are one step closer to world domination. While you can learn more about ROS in their tutorials, we recommend diving in through one of the club projects.
Dogcopter uses Fusion 360 for creating CAD models. To get started with Fusion 360, sign up for an educational license at this link: https://www.autodesk.com/products/fusion-360/education

The central onboard computer for DogCopter will be a Raspberry Pi 5.
We have a dedicated flight controller (Pixhawk 6C), a specialized microcontroller that contains configurable software (PX4) that can be used to achieve flight without needing home-brewed flight control algorithms. We will be connecting this flight controller to our Raspberry Pi and using PX4's offboard control features to control the flight of DogCopter.
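As a rough sketch of what offboard control from a companion computer can look like, here is a hedged example using the MAVSDK-Python library rather than the team's actual code; the serial address, baud rate, and setpoints are assumptions that depend on how the Pixhawk is wired to the Pi.

```python
# Hedged sketch: arming PX4 and entering offboard mode from a Raspberry Pi with MAVSDK-Python.
import asyncio
from mavsdk import System
from mavsdk.offboard import OffboardError, VelocityNedYaw

async def run():
    drone = System()
    await drone.connect(system_address="serial:///dev/ttyAMA0:921600")  # assumed wiring

    async for state in drone.core.connection_state():
        if state.is_connected:
            break

    await drone.action.arm()
    # PX4 requires a setpoint to be streamed before offboard mode can start
    await drone.offboard.set_velocity_ned(VelocityNedYaw(0.0, 0.0, 0.0, 0.0))
    try:
        await drone.offboard.start()
    except OffboardError as err:
        print(f"Offboard start failed: {err._result.result}")
        await drone.action.disarm()
        return

    # Climb at 0.5 m/s for a few seconds, then hold position
    await drone.offboard.set_velocity_ned(VelocityNedYaw(0.0, 0.0, -0.5, 0.0))
    await asyncio.sleep(3)
    await drone.offboard.set_velocity_ned(VelocityNedYaw(0.0, 0.0, 0.0, 0.0))

asyncio.run(run())
```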
Brushless DC motors are a kind of motor that flips the direction of current in internal coils to spin the output shaft. Unlike brushed DC motors, you cannot apply a constant voltage to the motor and expect a consistent speed. Instead, the direction of current needs to be controlled, which is done either with an ESC or an Odrive. ESCs (Electronic Speed Controllers) are used to make brushless motors spin at a constant speed. Odrives are used to set the motor to a specific position (like a servo).
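For illustration, a hedged sketch of commanding an ODrive from Python, assuming 0.5.x-era firmware where positions are expressed in motor turns; the axis and target value are arbitrary.

```python
# Hedged sketch: put an ODrive axis into closed-loop position control and command a position.
import odrive
from odrive.enums import AXIS_STATE_CLOSED_LOOP_CONTROL, CONTROL_MODE_POSITION_CONTROL

odrv = odrive.find_any()                      # connect to the first ODrive found over USB
axis = odrv.axis0

axis.controller.config.control_mode = CONTROL_MODE_POSITION_CONTROL
axis.requested_state = AXIS_STATE_CLOSED_LOOP_CONTROL

axis.controller.input_pos = 0.25              # hold the shaft a quarter turn from zero
print("bus voltage:", odrv.vbus_voltage)
```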
Most development tools are built for Linux systems, which is why writing software on pure Windows isn't recommended.
To get around this, developers use WSL2 (Windows Subsystem for Linux version 2). Essentially, WSL2 runs a full Linux system in tandem with Windows. This has much better performance than running a virtual machine (VM) or using the original WSL.
This document will guide you through setting up WSL 2 on Windows 10 & 11.
Microsoft has published an installation guide for setting up WSL2, which contains the most up-to-date information for installing WSL2 on Windows. Follow the guide fully, and for step 6, install Ubuntu 20.04.
Make sure to !
While WSL2 does come with its own terminal, it isn't very aesthetic or functional. That's why we recommend installing Windows Terminal. Once it is installed, launch it, then click the downwards arrow icon on the list of tabs. Select Ubuntu 20.04 from the dropdown, and you will be greeted with a bash terminal for your WSL2 instance.
If you'd like to make opening WSL2 tabs the default behavior, you can hit the dropdown and select Settings.
When working in a larger project, you might find it a hassle to edit multiple files with vim or other CLI text editors. An alternative is Visual Studio Code, a text editor made by Microsoft to be lightweight and adaptable. With a few extensions, VS Code can be customized to work efficiently within ARC's development environment.
Note: This isn't required for working within the ARC development environment. It's purely a tool that some may find useful.
Available builds can be found on the VS Code website.
After downloading and installing VS Code, open it to find a menu on the left-hand side. Click the icon containing stacked blocks; this is your extension menu. Here you can install extensions to give VS Code more capabilities.
If you are running WSL2, you should start by installing the Remote - WSL extension. You can use the search bar at the top to find it, then simply click install. If you aren't using WSL2, you can skip to installing the Python extension below.
After installing Remote - WSL, you will need to reopen VS Code within your Ubuntu environment. You can do this by hitting CTRL + SHIFT + P, then typing: Remote-WSL: New Window
Hit enter and it should bring up a new VS Code window running on Ubuntu. You can go ahead and close the old VS Code instance. Return to the extension menu, and you should see a new section labeled: WSL: Ubuntu - Installed
In order for any extensions to take effect within WSL, you need to ensure that they are listed here.
Now we need to ensure that VS Code recognizes the primary language we use for development, Python, which has its own extension in the marketplace.
Next, you are going to want the Docker extension. This will enable you to run ARC docker containers.
Lastly, it's recommended you get the Git Graph extension. This isn't required; however, it helps in any environment where you are utilizing git version control. Git Graph can visualize branches, merges, and commits to help organize your codebase.
Now that you have the extensions installed, how do you use them?
We are already using the WSL extension, which you can verify by seeing a WSL: Ubuntu label in the bottom left corner (unless you are on Linux or Mac).
To open the console, enter: CTRL + SHIFT + ~
You can now run bash commands as usual. If there is a file or folder you want to edit in VS Code, run: code (folder or file name)
For utilizing Docker in VS Code, click the Docker icon on the menu bar. Now you will be able to see all images and containers you are using. You should still use docker-build.sh and docker-run.sh scripts for running ARC containers, however this is a useful tool for managing versions.
If you intend on using VS Code, it might be helpful to check out their own documentation.
Now that WSL2 is set up, you'll want to set up X forwarding for running GUI applications. Additionally, if you are unfamiliar with bash, check out our terminal guide.



X forwarding is what will let you display GUIs (graphical user interfaces) on your machine that are being run from your Docker container. You need to install an X client, set up the appropriate firewall rules, then tell Docker to forward its display to your client.
X forwarding configuration varies by machine, so follow the steps that match your operating system. If you run into any issues or don't have a supported OS, reach out in the comments below.
X Client Set Up
There are several options available here. VcXsrv has been tested to work with the ARC development environment and is available on SourceForge. VcXsrv is the recommended client.
If you install VcXsrv, launch it with the XLaunch.exe executable, not VcXsrv.exe. When you launch it for the first time, you will see several configuration pages. You may leave them all at the default: multiple windows, no clients.
Windows Firewall Setup
When you first launch your X client, you should see Windows Firewall pop up. Go ahead with the default settings.
Now that everything is blocked, we need to put in an exception so that our WSL2 instance is able to access the X client.
These instructions are taken from a Reddit post.
Launch Windows Firewall by searching "Firewall" in the Start Menu.
Go ahead and save and close everything. You may need to reboot your X client.
WSL2 Setup
We will need to edit a file called the bashrc. This is a script that is run every time you open a new terminal in order to set up your environment. Modify it by copying and pasting the command below in exactly. Run these commands in WSL2, not PowerShell or the Windows command line. From now on, all commands listed like this should be put into the WSL2 command line.
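```bash
echo -e '\nexport DISPLAY=$(awk '\''/nameserver / {print $2; exit}'\'' /etc/resolv.conf 2>/dev/null):0\nexport LIBGL_ALWAYS_INDIRECT=1' >> ~/.bashrc
```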
Since we just changed our bashrc, we need to run the below command for the changes to take effect in our current terminal:
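```bash
source ~/.bashrc
```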
Testing
You will need to download a small program called xeyes in order to test the X forwarding. It can be installed and run with the following commands:
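```bash
sudo apt install -y x11-apps
xeyes
```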
You'll see a little pair of eyes follow your cursor around. You can X out of it or hit CTRL+C in order to kill the program.
X Client Set Up
The most widely used X client for Mac is XQuartz. You can install it here.
Configure XQuartz
You need to first allow connections from network clients:
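```bash
defaults write org.macosforge.xquartz.X11.plist nolisten_tcp 0
```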
Now run the following command to allow localhost access on startup:
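```bash
echo 'export DISPLAY=host.docker.internal:0 \nxhost + 127.0.0.1' >> ~/.zshrc
```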
After running the above command, restart your system.
Errors with running XQuartz
If you have errors while completing the above steps, take a look at the following troubleshooting guide.
Testing
You will need to download a small program called xeyes in order to test the X forwarding. It can be installed and run with the following commands:
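```bash
sudo apt install -y x11-apps
xeyes
```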
You'll see a little pair of eyes follow your cursor around. You can X out of it or hit CTRL+C in order to kill the program.
The overall goal of DogCopter is to build a fully autonomous robot capable of seamlessly transitioning from a walking quadruped robot to a drone. This fusion of a drone and robotic dog was inspired by innovations in robotic dogs (Boston Dynamics' Spot), flying cars, and drones. Such a versatile robot can effortlessly traverse rocky land and take flight to climb objects otherwise insurmountable by a quadruped robot. This improves the battery life and close-search capabilities of a drone, while maintaining the ability to traverse any terrain.
The DogCopter project aims to integrate various hardware and software systems across Electrical, Mechanical, and Aerospace components. Our objective is to balance weight and capabilities to optimize both flight and walking times. The propellers must be protected during "Dog mode," and the legs must have enough torque and structural integrity to support DogCopter when in "Drone mode."
DogCopter will operate autonomously and via remote control, featuring three main control modes: Drone, Dog, and Transition. In Drone mode, DogCopter functions like a standard drone, utilizing a Pixhawk connected to four thrust motors, along with a Raspberry Pi and 4D-Lidar for both remote and autonomous control. The Raspberry Pi will also send commands to the ODrives (another microcontroller for motor control), which are better suited for driving the BLDC motors. Dog Mode primarily relies on the Raspberry Pi, four ODrives, 4D-Lidar, and 12 motors (three per joint) to control walking movement. Transition mode involves a predefined animation for shifting between modes (refer to the references for the animation).
General Criteria and Constraints:
Drone Mode: Minimum flight time of 10 minutes.
Dog Mode: Minimum walking time of 20 minutes.
Normal Use Case: Battery life of at least 15 minutes.
Thrust-to-Weight Ratio: 1.5
Hardware Criteria and Constraints:
Ensure propellers are protected when in Dog Mode.
Ensure seamless transition between Drone and Dog modes.
Ensure propellers are well-spaced for optimal control.
Ensure kinematics in Dog Mode can navigate rough terrain.
Software: Controls
Develop remote controls for both Drone and Dog modes.
Software: Path Planning
Develop pathfinding algorithms for both Drone and Dog modes.
Ensure compliance with airspace restrictions during flight.
Implement obstacle detection and avoidance (using computer vision).
Given the complexity of this project, DogCopter is divided into several subteams:
The Aerospace team is primarily responsible for designing the propellers and creating an aerodynamic body. From Spring 2022 to Spring 2023, the team accomplished the following:
Built a thrust stand.
Tested propellers.
Created CAD models of the ideal design.
Ran an Ansys simulation on a propeller.
Mechanical is in charge of the design of the robotic leg joints and the chassis. They will work with various materials, such as carbon fiber, metal, and PLA plastic, to create robotic legs that are strong enough to support the weight of the robotic dog and the thrust of the drone motors attached to the legs. Mechanical will also work with stress-testing software to ensure the robot is strong enough to support the weight and torque.
From Spring 2022 until Spring 2023, we designed our mechanisms and made a working mini robotic dog model to test the kinematics.
The Electrical team ensures that all internal components fit within DogCopter and that the electronics send proper signals throughout the body, providing the correct voltages to components. They work with various electrical components, such as lidar/vision-based sensors, Arduinos, batteries, BLDC motors, and motor drivers.
From 2022 to 2023, the team designed a circuit for DogCopter, ordered parts, and tested the BLDC motors.
DogCopter will need to develop controls for the robot's different modes of operation.
The Controls team is responsible for developing the control systems for DogCopter. Before booting up, DogCopter will be in its folding position. Once activated, it will prop itself up and retract the skids if that design is chosen. The walking algorithm, navigation, and remote control, will activate to navigate ground terrains such as stairs and buildings. When switching to Flight mode, the skids will deploy, the arms will adjust to a T-pose, and the walking algorithms will be replaced with flight control, navigation algorithms, and flight controller UI. The controller UI will override both the flight and walking navigation systems, allowing the operator to take manual control.
DogCopter's primary vision system will use a Unitree 4D Lidar. The data from the Lidar will be processed by the Raspberry Pi, which then sends commands like "move forward" to the Pixhawk/ODrives, which execute the movements. In Spring 2023, the team obtained the Lidar and conducted multiple tests with it.
The Simulation team will test DogCopter in a virtual environment. The goal is to create a simulated environment to test in and to generate a map of that environment using our vision system. The exact simulation tools and environment are still being debated, but current options include:
Gazebo: Virtual physics environment.
Rviz: Vision system/data processing.
Moveit: Physics realism for DogCopter.
Webots: All-in-one simulation environment.
The simulation should be as realistic as possible, with the model in URDF format
Optimization
To help determine the ideal parts for fulfilling our requirements, we developed an optimization program similar to linear programming. This program was developed from Fall 2022 to Fall 2023.
The team has identified the following key objectives to advance towards the overall project goal:
Electrical Testing: Ensure all electrical components function as intended.
Leg Fabrication and Testing: Complete the construction of a DogCopter leg, and test its movements along with the transition animation from Drone mode to Dog mode.
Simulated Motion Testing: Evaluate the kinematics and obstacle detection capabilities in a simulated environment.
Fabrication of Additional Components:
DogCopter, but with wheels instead of legs
Bipedal Walking/Flying robot
James Bruton
DogCopter folding animation:
Fusion 360 Tutorial:
Fusion 360 Playlist:
Lidar + Raspberry Pi:
Our robot will have drone motors and propellers at the bottom of each of its legs. By rotating its legs outwards from the hip, it will transform from a quadruped to a drone. The walking and flying aspects will be controlled by semi-independent control systems. It will be equipped with a Lidar system and cameras which will be processed by computer vision code to enable autonomous navigation.
During Fall 2024, we will be working on finishing our small-scale quadruped robot, getting our electronics systems set up to communicate with the motors and each other, setting up our software systems to use data from the lidar and camera, and modeling a full-scale robot.
See the subpages to learn about what each subteam works on. Further onboarding and introduction will be done during the first few meetings.
Build and develop a drone that can be used in the applications of last-mile delivery.
Hardware: Build the physical drone, develop the controls algorithm for a flight controller.
Navigation: Develop a robust navigation software stack that can autonomously take off, fly to a given GPS waypoint (input by the user in the app), and then precisely land at that waypoint.
Mobile App: Build a mobile app that enables users to interface with the drone.
The Navigation stack is split into two main parts: the PX4 side and the ROS side. PX4 is the software for the Pixhawk flight controller, and ROS is an open-source robotics communication framework. The Navigation stack runs through the following process. A GPS waypoint is sent from the mobile app to the navigation backend. That GPS waypoint is converted into a point in the local space of the drone (a point in the drone's internal map of the environment) through a ROS node. Then the PX4 software arms the drone and changes it into takeoff mode. Once the drone reaches its flying height, the PX4 software changes the mode of the drone to offboard mode, so it can now be controlled by the offboard computer (Jetson Orin Nano). The drone then autonomously navigates to the specified goal location, using a behavior tree to control its decision making, with Theta* and NavFn path planners and an MPPI trajectory controller to determine where to move for the shortest path to its goal (all implemented through the ROS Nav2 package). Once it reaches its goal position (within some error margin), the PX4 software switches the drone to landing mode, and it executes a precision landing algorithm implemented through PX4 to precisely land the drone at its location.
Additionally, the global map of the environment is maintained as a 2D costmap for efficiency and ease of use, with 3D spatio-temporal voxels making up the local map (roughly a 10 m square around the current location of the drone) for more accurate obstacle avoidance. Localization is also performed at two levels: globally and locally. The global localization can create jumps in the location of the drone but is much more accurate, while the local localization is not as accurate but is continuous (there are no jumps in the location of the drone). The global localization is implemented through the LIO-SAM algorithm, which uses a Velodyne lidar, an IMU, and GPS. The local localization is performed by visual-inertial odometry implemented through PX4 using an extended Kalman filter.
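As a simplified illustration of the GPS-to-local conversion step, here is a flat-earth approximation from a GPS waypoint to east/north offsets relative to a home position. This is only a sketch of the idea; the actual stack relies on the PX4/Nav2 localization described above.

```python
# Illustrative flat-earth conversion of a GPS waypoint into local east/north offsets (meters).
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def gps_to_local(lat, lon, home_lat, home_lon):
    d_lat = math.radians(lat - home_lat)
    d_lon = math.radians(lon - home_lon)
    north = d_lat * EARTH_RADIUS_M
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(home_lat))
    return east, north

# Example: a waypoint slightly north-east of the launch point
print(gps_to_local(40.4288, -86.9122, 40.4280, -86.9132))
```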
Motors: T-Motor U7 V2.0
6 total
4.55kg Lift / Motor
Over 27kg of Thrust!
PX4: PX4 is the firmware that runs on the Pixhawk 6C. It controls and receives data from all of the motors and sensors attached.
QGroundControl: QGroundControl is the application used to connect to, configure, and program the drone to fly autonomously.
Robot Operating System (ROS): This is a standard middleware that allows each sensor to communicate robustly with the flight controller (Pixhawk) and the onboard computer (Jetson Orin Nano).
Managing an ARC-supported project can be a challenging, but rewarding experience. Throughout this journey, you will pick up on many skills and lessons which can be carried through the rest of your life. The uncertainty of what lies ahead prevents us from outlining every possible milestone, but here we intend to provide general action items for how to successfully carry out your project within ARC.
In general, a project manager's responsibilities can be broken into three categories:
Development of project mission
Proposal of project resources
Guiding of project progress
Let's breakdown the responsibilities associated with each category:
TLDR list is included in the conclusion
The fundamental component of a project is its mission. It serves to inform laypeople about the project's process and goal. The mission can also be considered the project's elevator pitch. If a project's mission cannot be condensed into 2-3 sentences, then you may want to consider structural revision.
When a new project is formed, the project manager is responsible for crafting this mission.
Resources are the fuel required to reach your project's mission. This encompasses any of the following units:
Team size
Knowledge
Funding
Workspace
When a new project is formed, the project manager is responsible for drafting a budget of these resources. There is no special format for this budget; it just needs to acknowledge what you need to be successful.
The leadership team will review this budget before work takes place. If approved, it is the leadership team's responsibility to ensure these resources are provided.
One of the more challenging problems is connecting people to the required knowledge, also known as onboarding. This requires understanding the present distribution of your team's skills, identifying the gaps, then directing them to the proper resources. If you have any issues with this process, let the leadership team know.
After onboarding new members, the project manager is responsible for submitting a roster of current students and their basic information (e.g. grade, major).
As a Purdue organization, funds must be verified with the leadership team before being used. So if you budgeted $100 for a new part, then later in the year you would need to double-check with leadership before making the payment.
Currently, our organization can only provide reimbursements. This means money held by Purdue ARC cannot be spent directly on purchases, only to reimburse the purchaser. So to buy the $100 part mentioned above: a student within the team would have to buy the part, then file for reimbursement. We are actively working on obtaining a , which would change this process. If reimbursement is an issue, notify leadership and we will try our best to work something out.
Lastly, project managers are expected to oversee the team's progress throughout the semester. Within ARC, PMs have a unique opportunity to apply their own leadership styles and techniques. As a general recommendation, try not to get overly bogged down by buzzwords. Focus on breaking down the project's mission into actionable items, then organizing the completion of those items.
To ensure projects are reaching their mission, check-ins are expected at ARC's weekly leadership meeting. While project managers aren't explicitly responsible for delivering the status updates, they should oversee that it gets done.
If the check-ins indicate some underlying issues, then the leadership team may request to meet with the project manager and discuss solutions.
At the end of each semester, project managers are responsible for organizing a presentation to cover their team's work. Again, the PM doesn't single-handedly write this presentation, just oversees its completion.
If you made it this far, then you are one step closer to seeing your ideas come to life. Writing about project overhead is probably just as painful as reading it, but it's often a necessary evil to achieving results.
The following is a compiled list of the PM's responsibilities:
Find the project's mission
Budget necessary resources
Onboard new members
Submit a team roster
The leadership team will explicitly state when these tasks need to be completed.
SAO provides guidance on the process of completing tasks such as scheduling rooms, submitting activity forms, and more.
We understand that lots of details in completing these tasks are vague; this is intentional. It offers freedom in how you choose to complete them. It cannot be overstated that if you have any questions, consult the leadership team.
Best of luck!
If you are using Windows, you'll want to install WSL2 to follow along.
The terminal is an interface for interacting with your computer. While traditional applications are intuitive, they can hold you back from accomplishing complicated tasks.
The terminal is itself an interface to a shell on the system, which handles command execution and other nice features such as piping. Most Linux systems come packaged with BASH (Bourne Again SHell).
Early on, you'll want to memorize a small subset of commands that can handle a majority of tasks.
This stands for 'print working directory', which as you can guess, prints the directory you are currently in. The shell automatically provides you with a location within your computer's file system. This can change the outcome of certain commands as we'll see.
This is short for 'list'. It will print all of the files in your working directory. If you modify the command to ls -a, it will print all files (including hidden ones).
This means 'change directory'. It allows you to modify your working directory. You can specify the <folder> argument in a variety of ways. If the folder you want to move into is contained within the current working directory, then you can specify a relative path using a period: cd ./<folder>
Similarly, if the folder is contained within your home directory, then you can use the tilde key: cd ~/<folder>
Sometimes it is also useful to specify parent directories, which is done using sequential periods: cd ..
This will change the working directory to the parent of the current working directory. You can chain this shortcut to move multiple levels: cd ../..
Purdue's USB has a guide with some additional information.
It's recommended you also explore shell text editors (such as vim) and version control systems (like git).
We are partnering with Imagination Station and Professor Severin to create an automated robotics-based educational display for students. Our goal is to use a swarm of ~30 "Sphero" robot balls to produce visually appealing and interactive molecular formations on a grid. We also plan to have a graphical user interface for users to interact with the display, where they will be able to choose which chemical formation to display and, by the simple click of a button, watch the autonomous movement. We hope this unique application will enhance education for children in a fun and visual way!
The Sphero Bolt is a spherical robot that includes a gyroscopic sensor, an accelerometer, and an LED matrix, among other features. It can be connected to and interfaced with through the Sphero application; however, in order to control the large number of robots, we bypass the UI.
The primary objective of Piano Hand is to explore the perspective of robotics in replicating human motion. Our fascination with this included looking at tasks that would help us understand biomechanics, an area of robotics that has gained a lot of traction recently.
The goal of the project is to build a fully autonomous robot arm that can play the piano. The human hand, with 27 degrees of freedom (DOF), has so far been the most dexterous mechanism for playing the piano, and the closer we get to replicating that degree of freedom and movement, the better we can move the arm and play the piano.
This semester (Fall 2022), we are planning to refine the working model of the animatronic hand built last semester, with accurate servo motor and flex sensor operation. We also expect to extend functionality to two hands. Additionally, we are starting a new path in software along the lines of machine learning and optical image recognition by building a model that can read sheet music, given an image.
Build a system of autonomous, scaled vehicles to play head-to-head in a game of high-speed 2v2 soccer. Inspiration draws from the game, Rocket League, in which rocket-powered vehicles play soccer in 3v3 matches.
This semester, we will rebuild the project from the ground up and create version two of Rocket League. We will produce more optimized systems by drawing on the knowledge of the past four years of this project.
Specific tasks will be developed over the coming week, but tasks are expected to include:
Potentially relevant topics: computer vision, hardware interfaces,
Developing a system to control the physical car(s)
Potentially relevant topics: hardware design/interfacing, microcontrollers
Developing a system to determine how the car(s) should move
Potentially relevant topics: autonomy, machine learning
Developing a simulator to aid development and training of the model
All systems are expected to use ROS in some way. We will target ROS 2.
Our current goal for the end of the semester is to build a functional simulator and basic model. These will be better defined over the coming week.
Below is an overview of the previous Rocket League system. It gives some insight into what we are working to make, but bear in mind that our work for this semester will likely differ from what is seen below.
Our system is organized into several components, which function together to create an autonomous Rocket League car.
The system consists of several specialized layers, which each reduce abstraction as information flows from the top of the diagram to the bottom. For example, a control effort (ex: put the steering wheel at 10 degrees) is less abstract than a collision goal (ex: hit the ball at position (3,5) cm in the (1,0) direction). Each layer refines the previous command until it is eventually something usable by the car's hardware.
Each layer also utilizes feedback in order to correct errors. For example, the velocity controller may notice the car's velocity is too fast, and then step slightly off the throttle to correct. Each layer uses a common perception system as a truth to compare against.
For more information on each layer / subsystem, see the below sections to learn more about its input and output, and how it processes it (including the software and languages in use).
Like most ARC projects, Rocket League uses ROS to handle communication between each component. ROS is compatible with the Python, C++, and Arduino used on the project.
Documentation on setting up and learning about your ROS development environment can be found here.
This component uses deep reinforcement learning in order to develop strategies for playing Rocket League. It is still being prototyped in Python, using PyTorch and Keras (Tensorflow). When complete, it will receive the full state of the game and output a collision goal (where, when, and in what direction to hit the ball) to the Mid Level Software stack.
Rocket League High-Level Planner Update: Our team has made significant progress in developing the High-Level Planner for the Rocket League project, using deep reinforcement learning to generate game strategies. The current prototype is built in Python using Stable Baselines 3, a reinforcement learning library built on PyTorch (a successor to the OpenAI Baselines project). It will ultimately provide instructions to the car, such as acceleration and steering direction, via radio communication.
From Fall 2021 till Spring 2023, we accomplished the following:
Created an initial training script that enables the training of multiple simulators in parallel for efficient performance. It uses a vectorized simulator environment and creates a Proximal Policy Optimization model with the Stable Baselines 3 library.
Developed a hyperparameter tuning script that tests different combinations of network hyperparameters to find the most effective ones.
Created a multi-processing script that enables the simultaneous training of multiple models with different reward and environment variables.
Successfully trained a model to consistently score goals in one goal, overcoming issues such as riding against walls, excessive turning, and imprecise turns.
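A rough sketch of that training pattern with Stable Baselines 3 and a vectorized environment is shown below; the environment ID and hyperparameters are placeholders, not the project's real values.

```python
# Hedged sketch: PPO training over several parallel simulator instances with Stable Baselines 3.
# "RocketLeague-v0" is a placeholder ID for the team's Box2D simulator environment.
import gym  # depending on the SB3 version, this may need to be `gymnasium` instead
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

def make_env():
    return gym.make("RocketLeague-v0")            # placeholder environment name

vec_env = make_vec_env(make_env, n_envs=8)        # run 8 simulators in parallel
model = PPO("MlpPolicy", vec_env, learning_rate=3e-4, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("rocket_league_ppo")
```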
For future work, we plan to explore alternative models to PPO and train one agent to compete against another agent. We will also work on training the agent to perform well with noisy data.
It utilizes a simulator for training, which is described below.
Simulator
This component exists for training the High Level Planner, and testing / debugging Mid Level Software. It is written in Python, using the Box2D physics engine, and must realistically simulate all physical elements of the game. It can be used to replace everything below Mid Level Software (including the Velocity Controller and Perception) if the entire game is to be run in simulation.
Together, the Trajectory Planner and Waypoint Controller (both implemented in Python) receive a collision goal and are responsible for guiding the car to achieve the goal by outputting instantaneous velocity commands for the car.
Trajectory Planner
This component considers the car's current location and velocity, and the collision goal, to generate a trajectory for the car to follow. It can operate in several modes to generate trajectories via different mathematical functions, and has many many many configurable settings.
Waypoint Controller
This component is what enables the car to follow the generated trajectories. It commands the car to follow specific velocities and wheel angles by applying the pure pursuit algorithm on the path given by the trajectory planner.
This component (also called the low-level controller) adjusts the control efforts (specific throttle and steering values) such that the velocity and heading of the car matches the desired setpoint from the waypoint controller. It implements a PID controller, and is currently written in Python. Future work may see it ported to a different language, such as C++ or MATLAB.
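For illustration, a minimal PID sketch of that idea; the gains and loop rate are placeholders, not the tuned values used on the car.

```python
# Minimal PID sketch: drive the throttle effort so the measured speed tracks the setpoint.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

throttle_pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.02)            # 50 Hz control loop
effort = throttle_pid.update(setpoint=1.5, measurement=1.2)     # speeds in m/s
print(f"throttle effort: {effort:.3f}")
```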
This component allows communication to occur between the ROS network and the RC car.
Control efforts to the car are broadcasted using a FrSky XJT transmitter. These messages are encoded by an Arduino script running on a Teensy 3.1, which communicates to the radio using a digital PPM signal. ROS Serial is used to send the desired efforts to be encoded to the Teensy from the ROS network.
An image of the hardware for one car is shown below:
The car is the complete physical system of one player on the field. Tests were performed on off-the-shelf cars, however none met the desired criteria for acceleration and control. To solve this issue, the team upgraded the electronics of the best-tested car and found much increased performance.
{{< drive-player "1hoZkHQMXcIDrOJjSXYIXwCfNXiyw8jH6/preview" >}}
Left: upgraded car, middle & right: stock cars
The car's upgrades replaced the servo motors, receiver, speed controller, and battery.
Documentation will be created in the Fall 2021 semester to have step-by-step upgrade instructions to build a matching car
Work has been done towards creating a consistent environment for operating the cars and providing infrastructure for localization.
In Spring 2021, physical tests were performed on Krach's carpet and used plywood planks to provide boundaries. Tripods with PVC tubes were also used to hold multiple cameras necessary for localization.
Future work intends on using aluminum square tubing to rigidly mount cameras with the addition of 3D printed mounts.
The perception system is responsible for tracking odometry (position, orientation, linear velocity, and angular velocity) for each car and the position and velocity of the ball.
In the prototype system, cars are tracked through AprilTags and the ball through OpenCV color thresholding techniques. ARC uses C++ for both systems.
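For illustration, a Python/OpenCV sketch of the color-thresholding approach; the project's implementation is in C++, and the HSV bounds here are placeholders for the ball's actual color.

```python
# Sketch: find the ball in one frame by HSV thresholding and contour extraction.
import cv2
import numpy as np

frame = cv2.imread("field.png")                       # one frame from an overhead camera
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

lower = np.array([5, 120, 120])                       # rough orange lower bound (placeholder)
upper = np.array([20, 255, 255])                      # rough orange upper bound (placeholder)
mask = cv2.inRange(hsv, lower, upper)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # remove speckle noise

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    ball = max(contours, key=cv2.contourArea)         # assume the largest blob is the ball
    (x, y), radius = cv2.minEnclosingCircle(ball)
    print(f"ball at pixel ({x:.0f}, {y:.0f}), radius {radius:.0f}")
```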
In order to capture the size of the operating field, multiple cameras are required. The current system uses two PointGrey (FLIR) cameras and two Basler cameras.
Processing AprilTags for each camera is computationally expensive, so the team invested in a "Computation Cart" with two desktop PCs. Each PC is responsible for two cameras.
Information from each desktop is then communicated over the ROS network:
The team has outlined the following objectives in working towards the overall goal:
Proving the high-level framework on snake game
Testing and tuning of simulator for usage in training high-level planner
Completion of camera mounting infrastructure
Completion of field manufacturing
Perception system scaling / redesign
The geospatial data source we decided to use for our initial implementation was OpenStreetMap. OpenStreetMap is a free-to-use map of the world that can be contributed to by anyone. A large variety of features can be mapped in OpenStreetMap, including all of the obstacles we need to avoid: buildings, trees, and lightposts. Because OpenStreetMap data is free, can be updated by ourselves if data is missing, and has all the obstacles we need to avoid, it is a clear choice over Google Maps, which is costly, cannot be updated, and does not have data on everything we need to avoid.
OpenStreetMap data is made up of features. There are 3 types of features: points, lines, and areas. Lines and areas are ordered lists of points. Each point has a latitude and longitude and thus determines the position of a feature or the shape of a line or area in the world. To describe the type of a feature, such as "building", and the attributes that feature has, such as "levels" or "color", every feature has a list of key-value pairs (a map/dictionary).
We only want to get all of the buildings, trees, and streetlight features and not any of the other features in an area in OpenStreetMap, so we need to filter the features in a selected area by their tags.
We can use the Overpass API querying service to get only these features. The Overpass API can be easily interfaced with by using Overpass Turbo. We can easily create a query by clicking the Wizard button at the top and inputting the tags we want.
Every building in OSM has a tag with the key building. The value of the building tag determines the type of the building, for example building=school. We don't care about the type of the building, so we can just add building=* to get all of the buildings that have a tag with the key building, regardless of its value.
OSM allows users to map individual parts of buildings to distinguish which parts have different attributes. For example, one part of a building might have more levels than another. The key for these "building part" features is building:part. So, we can add or building:part=* to also select all of these building parts.
Finally, we want to get all of the trees. The tag for a tree is natural=tree so we can add or natural=tree to also select all of the trees.
So, our final input inside of the query wizard should be building=* or building:part=* or natural=tree. You can then click Build query and the actual Overpass Query Language query text will appear on the left.
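For reference, the query that the Wizard generates should look roughly like the Overpass QL embedded in the sketch below, which also shows how the same query could be run programmatically with Python's requests library. The bounding-box coordinates are placeholders (roughly around campus), and the exact text the Wizard produces may differ slightly.

```python
import requests

# Roughly the query built from "building=* or building:part=* or natural=tree".
# The bounding box (south, west, north, east) below is only a placeholder.
query = """
[out:json];
(
  way["building"](40.41,-86.93,40.44,-86.90);
  way["building:part"](40.41,-86.93,40.44,-86.90);
  node["natural"="tree"](40.41,-86.93,40.44,-86.90);
);
out body;
>;
out skel qt;
"""

response = requests.post("https://overpass-api.de/api/interpreter", data=query)
elements = response.json()["elements"]
print("Fetched {} OSM elements".format(len(elements)))
```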
To select the area of data you want to get, click the picture icon in the upper left of the map ("manually select bbox") and adjust the box on the map to cover your area.
Now, we click Run in the upper right to run the query and return the data we need.
Note: if you selected a large area, you will get a warning that you are returning a lot of data. Click continue anyway. You can minimize the impact this has on your computer by disabling the results from showing on the map. This is done by removing the `>; out skel qt;` at the end of the query and re-running it.
You can browse the data returned visually on the map or by clicking the Data button in the upper right.
The next step is to export the data. This can be done by clicking the Export button at the top and selecting which file format to export to. What file format should be exported is dependent on the type of occupancy matrix needed. Read the following section to understand what an occupancy matrix is and how it will help us find a path.
The data output from OSM is a list of shapes and points and their locations. We want to run a path planning algorithm on this data, but we cannot do so with the data in this format. Path planning algorithms require a graph to traverse (see how this works here). The best way to convert multi-dimensional space into a graph is to break it up into square chunks, where each chunk is a node in the graph. One structure these chunks can be stored in is a 2D or 3D matrix. Another is a quadtree (2D) or octree (3D). If a chunk intersects an obstacle, we give it a value of 1 (occupied); otherwise it has a value of 0. This is where the name "occupancy matrix" comes from. We can then take all of the nodes that are not occupied and link them together to create a graph that the path finding algorithm can traverse.
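As a toy illustration of the idea (not our actual converter), the sketch below marks grid cells as occupied when they intersect an obstacle footprint. The obstacle polygon, grid size, and resolution are all made up for the example.

```python
import numpy as np
from shapely.geometry import Polygon, box

# One hypothetical building footprint in a local x/y frame (units are arbitrary here).
obstacles = [Polygon([(2, 2), (5, 2), (5, 4), (2, 4)])]

resolution = 1.0                       # side length of each square chunk
grid = np.zeros((10, 10), dtype=np.uint8)

for row in range(grid.shape[0]):
    for col in range(grid.shape[1]):
        cell = box(col * resolution, row * resolution,
                   (col + 1) * resolution, (row + 1) * resolution)
        if any(cell.intersects(obstacle) for obstacle in obstacles):
            grid[row, col] = 1         # occupied

# Every cell still holding 0 becomes a graph node; edges connect neighboring free
# cells, and the path planner searches over that graph.
```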
So, we need a solution that will convert our OSM data to an occupancy matrix.
The first occupancy matrix we pursued creating was a 3D one.
We first researched whether there were any libraries that could convert a 3D scene file, like a .gltf, to an occupancy matrix. We found a MATLAB function that could do this for us; however, we decided not to use it, as it would lock us into MATLAB's technology, and if we wanted to deploy this in the future, we would have to pay MATLAB for computation. Therefore, we pushed on to find an open-source library that could do this. We ended up finding Open3D, a library that can convert a mesh like a .gltf to a point cloud and then convert the point cloud to an octree. At the same time, we also found that the library OSM2World included a command line interface that could convert a .osm file to a .gltf file. So, we used both tools together and ended up with a beautiful octree of campus!
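A rough sketch of that pipeline is below, assuming campus.gltf is the mesh exported by OSM2World. The file name, point count, and octree depth are illustrative, not the exact values we used.

```python
import open3d as o3d

# Load the mesh exported by OSM2World, sample it into a point cloud,
# then build an octree from the points.
mesh = o3d.io.read_triangle_mesh("campus.gltf")
point_cloud = mesh.sample_points_uniformly(number_of_points=500000)

octree = o3d.geometry.Octree(max_depth=9)
octree.convert_from_point_cloud(point_cloud, size_expand=0.01)

# Visualize the resulting octree.
o3d.visualization.draw_geometries([octree])
```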
We were anticipating that the 3D occupancy matrix would be used in the initial flight planning and testing of the drone; however, this was determined to be unnecessary after we learned that we would not be flying over buildings or trees for the first tests of the drone. Because we will not be flying over them, we can just use a 2D matrix that documents where every obstacle is located regardless of its height, so the planned path will avoid flying over any of those obstacles.
Researching libraries that could do this for us, we found an R library that could take GeoJSON and output a matrix. However, we soon realized that the process of taking 2D shapes and converting them to a matrix is identical to the rasterization process your computer performs when it is given the points of a shape, like text, and needs to calculate which pixels should be colored to display it. So, we refined our search to look for libraries that could rasterize geospatial files. We finally found the Python library Rasterio, which can rasterize GeoJSON into a TIFF file (an image file). All we needed to do was change the resolution of the image to fit our expected node dimensions.
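A simplified sketch of that rasterization step is below. The file names, bounding box, and resolution are placeholders, and the actual converter (linked below) handles the details differently.

```python
import json

import rasterio
from rasterio import features
from rasterio.transform import from_origin

# Load obstacle footprints exported from Overpass Turbo as GeoJSON (name is illustrative).
with open("obstacles.geojson") as f:
    geojson = json.load(f)
shapes = [(feature["geometry"], 1) for feature in geojson["features"]]

# One pixel per grid node; the corner coordinates and pixel size here are placeholders.
pixel_size = 0.00001
transform = from_origin(-86.93, 40.44, pixel_size, pixel_size)
occupancy = features.rasterize(shapes, out_shape=(3000, 3000),
                               transform=transform, fill=0, dtype="uint8")

# Write the occupancy grid out as a TIFF.
with rasterio.open("occupancy.tif", "w", driver="GTiff",
                   height=occupancy.shape[0], width=occupancy.shape[1],
                   count=1, dtype="uint8", transform=transform) as dst:
    dst.write(occupancy, 1)
```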
The resulting converter created is located at this repository.
In the future, we are going to use photogrammetry as our obstacle data source. Photogrammetry is the technology used to generate 3D buildings in Google Maps. A 3D mesh is generated from images of features taken from multiple angles, along with the location and direction from which each image was taken.
Photogrammetry data can be collected by flying our drone around its operating area and using the captured images to generate the meshes.
The advantage of photogrammetry over OpenStreetMap data is that it is more precise and easier to keep up-to-date. Photogrammetry can capture the precise geometries and locations of all obstacles in an area, whereas OpenStreetMap cannot document features in that much detail and requires manually editing the map to record them. Photogrammetry is also much easier to keep up-to-date than OpenStreetMap. If a temporary construction area is set up where the drone typically flies, the drone can forward the imagery collected while it avoided the newly-found obstacle to the photogrammetry world server, creating a fresh set of obstacle meshes that will be accounted for the next time a flight is planned through that area.
The reason photogrammetry was not chosen initially as the obstacle dataset was that we did not have a functional drone at the time of developing it to take the images used for photogrammetry.
Total Weight: Less than 20 pounds.
Simulink: For algorithms and controls.
ROS: Framework linking all components.
Joint State Publisher: To move joints in the virtual environment.
Vision and Controls Integration: Seamlessly integrate the vision system with the control mechanisms.
Comprehensive Testing: Conduct a series of tests, including transition functionality, battery life during flight and walking, and various safety assessments.
Final Benchmark Test: Execute a final benchmark test to verify that all systems and components are fully operational and meet the project requirements.
Setting up a Pixhawk: here
47.5A Draw at 100% Throttle
Props: Tarot 1855
18'' Diameter
5.5'' Pitch
Carbon fiber
Frame: Tarot T960
Hexacopter Configuration
960mm Diameter
Battery: Tattu Plus LiPo Battery Pack
22000mAh
25C Discharge Rate
6S
22.2V
Power Delivery (ESC): xRotor 40A
60A Max Current
Rated for 6S LiPo (22.2V)
Flight Controller: Custom controller.
Based on the Raspberry Pi Pico.
3-axis gyro and 3-axis accelerometer.
Camera: Intel RealSense D453
Stereoscopic Depth Sensing
< 2% Error Within 2m
Companion Computer: Jetson Nano
Quad-core ARM Cortex
4GB Onboard Memory
128 CUDA Cores
Infrared sensor: TBD
Used to find the precise distance to the ground to check whether the landing area is safe.
Camera rotator/gimbal
Will rotate the camera from forward-facing to downward-facing to verify the landing area is safe.
Stabilization of camera during flight to minimize noise in optical data
Parcel container
Structure
Minimize impact on aerodynamic performance
Safe to access for users
Food Preservation
Keep food hot or cold to ensure minimal loss in quality during delivery
Check-in with the leadership team weekly
Present project progress each semester
```
$ pwd
$ ls
$ ls -a
cd <folder>
cd ./relative-directory
cd ~/directory-in-home
cd ../
cd ../../
```

This project consists of five sub-teams: Controls, Algorithms, Perception, Hardware, and Interface. Controls handles the connection and movement of the balls on the field. Algorithms works on path planning and mapping out the most optimal route for the robots to take when displaying each formation. Algorithms also takes into account error correction in the case of a faulty movement occurring. We plan to have a camera mounted over the display, which Perception (focused on computer vision) uses to track the Spheros during their movement and send the relative positions back to Algorithms. Hardware handles the design and prototyping of the grid the robots move on, integration of the charging stations, and any other non-software related tasks. Lastly, Interface creates the user interface designed to easily choose different molecular formations and interact with the display.
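As a flavor of what Perception's camera-based tracking involves, here is a minimal OpenCV circle-detection sketch. The camera index, blur, and Hough parameters are illustrative placeholders, not the tuned values used by the team.

```python
import cv2
import numpy as np

# Grab one frame from the overhead camera (device index is an assumption).
capture = cv2.VideoCapture(0)
ok, frame = capture.read()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # reduce noise before circle detection

    # Detect circular blobs that are roughly Sphero-sized; parameters are placeholders.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=100, param2=30, minRadius=10, maxRadius=40)

    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            cv2.circle(frame, (x, y), r, (0, 255, 0), 2)  # mark each detected ball

capture.release()
```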
Successfully connected to and able to control at least 6 robots simultaneously through multithreading
Completed thorough testing on the Sphero balls
Discovered and learned spherov2 module to control balls
Integrated server-client system to provide an easier method for Algorithms to send instructions to the sphero
Developed interface to provide ease of implementation for Algorithms to communicate to Controls
Ideated position-based, pseudo-random algorithm to simulate polymer step growth
Built a simulation of a test grid and test Spheros creating a molecular formation using Pygame
Created the list of polymers we hope to display
Optimized circle detection algorithm for identification of balls
Created custom algorithm to identify and track 3+ robots on a field using OpenCV and Python
Looked into camera options and set up for placement of camera on field
Conceptualized field and charging layout (charging for sphero bolts)
Manufactured first full field prototype
Created node and path design for field
Used Flutter to create the interface that will be used to choose molecules to form
Created multiple screens, such as the Molecule List View and Molecules View, in the app
Successfully connect and control 5+ Sphero bolts
Implement a stream set up for commands; potentially rospy
Begin feedback control to optimize movement
Interface with Hardware about field material to slow ball movement
Optimize and test position-based algorithm on 6 Spheros on grid prototype
Implement more chemical accuracy in simulation; for example, limiting certain "randomness" in the initial randomized state to align with chemical properties
Implement error-correction into algorithm with Perception feedback after each node-to-node movement
Deal with edge cases in the tracking algorithm; currently there are issues regarding having a set number of balls and balls leaving and returning to the screen
Interface with algorithms about error catching and the format of the positions to send back to them
Improve precision in algorithm; essential since most of perception will be to optimize the node-to-node movement of each Sphero
Test tracking algorithm on 5+ Spheros
Integrate charging station concept into grid prototype
Integrate perception camera into display and design mount
Complete final designs of each concept
Finish designing all pages of user interface
Finalize list of molecules (interface with algorithms)
Work with Controls regarding sending bluetooth signal to Spheros

Project GitHub Repository: https://github.com/purdue-arc/arc-piano-hand
Spring 2024
Fall 2023
With the guidance of PhD student mentor Sheeraz Athar, we decided to focus first on the fingers hitting the keys, so we designed and built a single-phalange finger design that includes some curvature and bends on the fingers to make it look and feel human-like. More clarity in terms of actuating the hand was also obtained, and methodologies for using specific file formats between the different software subteams were established. A new subteam, Algorithms, was also established to work with hand and finger states as the hand plays the piano.
Ideated and designed new mechanisms for the hand that involves the usage of linear actuators and bevel gears. This design will be assembled and tested in Fall 2022 as a new iteration from the hand design in Spring 2022.
Worked on fixing software issues that came up in Spring 2022 and on readying the design to be implemented on the hand in Fall 2023. Introduced a new course of action alongside software to start with machine learning and model development for Optical Music Recognition.
Clip of the servos working:
{{< drive-player "12XIlDrckEdbvJNI8A9Jp3um2TsCtu46J/preview" >}}
Worked on improving model developed in Fall 2021 by printing and testing. An add-on for attaching the servo motors was developed and the design was 3D-printed.
Software work primarily included setting up the environment on the Arduino/Raspberry Pi, along with flex sensors. Issues with the usage of continuous servos were addressed. 8-bit ADCs were also used to improve testing.
{{< drive-player "1XUN3-VXOXNSEd7ECZbARMhXspTH0es3h/preview" >}}
Worked primarily on developing models and getting an idea of the different parts necessary to 3D print. Hardware produced the following first iteration of the hand by the end of the semester.
Software primarily worked on simulation and testing, and the following simulation was produced in TinkerCAD (TinkerCAD's electronic component simulator had Arduino testing capabilities and hence was useful for the first stage of testing). TinkerCAD's usefulness for simulation testing was visible from early testing of the software for multiple fingers, using MG90S servos.
{{< drive-player "1FLBJuV58_8QRsNKVJZqaC88cPaUbHi6O/preview" >}}
Project Manager
Visuwanaath Selvam, Computer Engineering
Hardware
Hardware primarily works on making and refining CAD models with the available tools, 3D printing the models, assembling and testing them, and identifying points of improvement in the model and its functionality.
Rugved Dikay, Aeronautical and Aerospace Engineering
Ian Ou, Computer Engineering
Archis Behere, Mechanical Engineering
Software
Software primarily works on developing the code and algorithms for moving the hand to the computed locations, along with setting up the electrical systems. More recent initiatives include model development for optical music recognition and converting from Arduino to Raspberry Pi.
Taran Kondamuru, Computer Engineering
Nathan Huang, Computer Engineering
...
A team of researchers attempted to replicate the pressure applied in the grasping mechanism and achieved 17 DOF in a 5-finger hand. The actuators used are the most interesting part: McKibben actuators, which move based on differences in air pressure. ScienceDirect article
The ILDA Robot Hand: A 15 DOF, highly tactile robot hand, motion along surface of palm, and stepper motor actuation. Some of its capabilities include crushing cans, delicate grasping, and tactile tasks such as tapping. Overview and Videos of Functioning | Nature article
Soft actuated robots - Harvard paper |
WPI -
|
Robot Nano Hand - |
InMoov Robot Hand -
Other similar project resources: |
Video Links/Pages referred: | Automation Robotics'
Piano Keys Research - Dimensions
Joint Design - Ball and Socket
Optical Music Recognition Datasets - Apacha Database List | Deepscores | Primus
Parts - | Multi-channel Servo Controller (, ) | - ,
Add-ons: Strings/Thread , |
DOF Analysis - | | |
Interesting Actuation Methods -
Goals
The Electronics team will take the output of the Algorithms team, which will be finger/key pairings along with timestamps of when each key should be pressed. The Electronics team will then actuate the servos and stepper motors so that each key is pressed at the correct time.
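As a rough sketch of that flow, the Algorithms output could be consumed as shown below. The data format, serial port, and command protocol here are all assumptions for illustration, not the agreed-upon interface.

```python
import time

import serial  # pyserial

# Assumed format for the Algorithms output: (timestamp in seconds, finger index, key index).
key_presses = [(0.0, 1, 40), (0.5, 2, 42), (1.0, 1, 40)]

# Hypothetical serial link to the Arduino driving the servos/steppers.
with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as arduino:
    start = time.time()
    for timestamp, finger, key in key_presses:
        # Wait until this note is due, then send a made-up "press" command.
        time.sleep(max(0.0, timestamp - (time.time() - start)))
        arduino.write("PRESS {} {}\n".format(finger, key).encode())
```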
For Fall 2024, the main goals are
On board new members
Create a working communication between computer/Arduino to control fingers
Figure out how to split power to servos/actuators from Arduino PWM data
Move from Arduino to a faster microcontroller, like an ESP32
Meet with algorithms team and figure out how to use their output data
Create a PCB to minimize size of our current solution
Before starting this tutorial, we recommend:
Setting up your development environment
Before we dive into ROS, let's break down what we will be accomplishing. Our end goal is to automate a game of Snake. If you have never heard of Snake, then check out an example . We have made a ROS adaptation of it, which is what you will be automating.
There are two parts to this tutorial:
Controller walkthrough: Here we will walk you through the creation of your own snake-controller. This will teach the basics of ROS and principles related to automation.
Controller expansion: By the end of the previous section, you will have a simple but functional controller. To strengthen your ROS skills, you will now look at improving the controller and seeing how high of a score your snake can get.
If you've correctly set up your development environment, then you should have a catkin_ws folder with an src/arc_tutorials folder inside. Move into the arc_tutorials folder and run the Docker container:
You should now be inside the container.
Ensure your file structure matches the following:
We will explain the file structure in-depth later, so don't worry if it doesn't make any sense now.
To complete this tutorial, simply follow each provided document in order. Ensure you are reading thoroughly, as these tutorials are packed with tons of information.
If you run into issues, start by doing independent research. If you haven't found a solution after making a sufficient attempt on your own, then add your question at the .
In this section, we will be covering high-level ROS concepts and run the snake game example to see what we will be working towards.
We discussed previously what ROS is, but now we are going to dive into how ROS works.
In summary, ROS manages a system through 4 key components:
Nodes: These are independent processes that perform computations. Nodes often relate nicely to physical parts within a system, like a camera or an arm.
Messages: In order for different nodes to communicate, they need messages to define the structure of the data they are sending. If a map node wanted to tell a navigation node which way to move, it would have to use a message (like a 3-point coordinate) to pass that data.
Topics: Messages define the type of data we are sending, but topics are what deliver the messages. Without topics, a node would never know where to send its messages. Nodes can publish to a topic, subscribe to it, or both, which means they can send messages to that topic, receive all messages on that topic, or do both.
Here's a helpful visualization of node communication:
As a supplementary resource / explanation, we recommend watching until 5 minutes and 53 seconds.
If some of these concepts aren't yet clear, don't worry. Through the walkthrough we will continue to strengthen our understanding.
Now let's try to apply some of the concepts we've learned to the snake game. We will start by running the snakesim package to see what we will be controlling.
To verify that your workspace is correct, run:
If everything is correct, you should simply see a src/ folder. Let's now move into our workspace by running:
Now we need to build our workspace, which will setup our environment to run packages. This will all be explained in the next section, so hang in there. We do this with the command:
Now if we print our directory contents again:
We should see a few new folders, including a devel/ folder which is also important for configuring our environment.
Again this will be covered in the next section, so just run these commands and it will make sense soon.
With our workspace and environment fully configured, we can now run the packages we have stored.
Let's start by moving into our package directory:
If we print our directory again with ls, we should see 5 folders:
clock_tutorial/ a deprecated tutorial for ROS in C++
docker/ a set of tools to run ARC's docker environment
docs/ the collection of documents to follow this tutorial
snake_tutorial/ the package containing the example snake controller
snakesim/ the package containing the ROS adaptation of the snake game
In order to run the snake simulation, we will use what is known as a launch file. It's not important for you to know how a launch file fully works at the moment, but just know that it starts up ROS nodes in a set way so you don't have to do it manually.
To run the launch file:
Upon running this, two windows should appear. One shows a display of the snake game and another is a control board for the snake (this is what we will be replacing). Here's what they look like:
If you get an error message saying
`snakesim` is not a package, then you need to run `source devel/setup.bash` in the `catkin_ws/` directory.
If you get an error message about failing to display, it's likely because X forwarding is incorrectly setup. Return to setup tutorial #3 for more detail.
To control the snake, you can either move it with WASD and shift keys or manually adjust the speed with the scroll bars. Play around with it and see how far you can get.
In this example, we are using only one node and that's to control the position of the snake. Every time you adjust the speed, you are sending messages on a topic that the snake node is receiving. These messages adjust the snake's velocity so it knows what position to move to.
Let's now close both of those windows and use CTRL + C in the console to exit the program. Next, we are going to run the snake controller example to see what we are building towards.
Again we will use a launch file:
The snake game should appear again, but this time it should be autonomously playing the game. If you watch it long enough, eventually it will fail because its logic is very simple. By the end of this tutorial, we will have built this same controller from the ground up.
Congratulations, you just finished getting familiarized with the snake game and the controller! You'll develop this same controller by following this tutorial series. In the next step, you'll create a package to hold your controller, then you'll write some nodes in order to control the snake.
Up to this point, we have explored high-level ROS concepts and ran the Snake game example controller. We will now go through practical concepts related to development within ROS.
Let's take a deeper dive into our project file structure.
Software in ROS is organized in packages. Packages contain our nodes, launch files, messages, services and more. When working on a project, you may have multiple packages all responsible for various tasks. For instance, snakesim/ and snake_tutorial/ are both examples of packages.
Packages are often managed within a catkin workspace, which is what you setup as part of your ARC development environment.
The simplified file structure for a workspace is as follows:
src/ is where the source code for all packages are stored. This commonly contains repositories tracked by Git.
build/ and logs/ are created when building your code. We won't go into any detail on these.
devel/ and install/ contain executables and bash files for setting up your environment. It is optional to have an install/ directory; we won't be using one in this tutorial. There are a few distinctions between the two that we won't get into.
Each time you open your workspace in a new terminal you will need to source this setup file by running:
There is a way to do that automatically, which is covered at the end of this document.
Here is a simplified file structure for Python ROS packages (C++ is different):
nodes/ contains node Python files for running ROS nodes. You'll end up making several of these when following the tutorial.
launch/ contains launch files. These can launch one or more nodes and contain logic.
CMakeLists.txt is a build file used by your catkin workspace.
package.xml is a manifest that contains general package information.
We will start by moving into the src/ directory of our catkin workspace. As a reminder, you should still be within the container environment and have the following file path:
~/catkin_ws/src/
Now we will use this command to create our new package:
Looking at this command, snake_controller is the name of the package, and all the parameters that follow are other packages that snake_controller depends on. Don't worry about what those packages are right now; you'll learn about them when completing the tutorials.
Let's now move into our newly created package:
Upon printing the directory contents, you can see we have three items within our package:
CMakeLists.txt
package.xml
src/
In order to make things better set up for a purely Python ROS package, we're going to modify the CMakeLists.txt and package.xml. We will also be making a few directories. If you aren't comfortable making directories on the commandline, you can do these steps in an IDE like VS Code.
The package.xml file exists to capture some basic information about your package, such as the author, maintainer, license, and any dependencies that your package has. If we open the existing one, you can see it has a lot of information, and some comments about how to edit it:
We're going to go ahead and remove a lot of the stuff that we don't need:
Go ahead and make a few changes to personalize it:
add a version number if you want
add yourself as the maintainer
add a license if you want
add yourself as an author
You should end up with something like this:
The CMakeLists.txt is similar to package.xml in that it contains a lot of templated information for you to go in and edit. For a C++ package, it is very important, rather complicated, and will be used to build the code. For a purely Python package, it is actually really simple. It is only needed to make your code compatible with the catkin build system.
If you open the existing one, you should see something like this:
Once we remove everything we don't need, it is pretty short:
Optionally, you can leave the install and test section in there, but we won't be using it for this tutorial.
We're going to remove the existing src/ directory, which is commonly used for C++ code or Python modules (we are making Python nodes, which are different).
Instead, make the directory structure that we talked about earlier:
Remember how we said that you need to source the workspace for every new terminal window that you open? The command looks like this:
Thankfully, there is a way to make that happen automatically by modifying a script called the bashrc. This is a script that is run every time you open a new terminal. If we put that source command at the end of it, then we don't need to worry about running it manually for every new terminal we open. You can add that command by running the following in your shell:
We can now run that to make it take effect in our current terminal:
Congratulations, you just finished creating a ROS package! In the following documents, we'll build three nodes that will allow your new package to control the snake game that you ran earlier.
Since our snake receives linear and angular velocity inputs, making a heading controller seems like a good place to start. This will allow us to control the heading of the snake using feedback control.
Feedback control (also called closed-loop control) means that we are using sensor readings (in this case the output pose of the snake) in order to create our control signal that we send to the snake. As a block diagram, it may look like this:
This node is also going to handle the linear velocity command too. We're just going to keep that at a constant value in order to keep things simple.
Let's go ahead and implement this controller in Python now.
We need to start by creating a new file. In the last tutorial, we were left with the proper directory structure to start writing code. We need to make a new file in the nodes directory to house our code. We'll call this file snake_heading_controller.
Note that it is general practice not to add a .py file extension to nodes. This is because the file name becomes the name of the node when building our package with catkin. Ex: rosrun snake_controller snake_heading_controller is cleaner than rosrun snake_controller snake_heading_controller.py.
Next, we need to make this file an executable:
You will need to do this for every node you create.
Let's start the file by writing a shebang and docstring.
A shebang is a one line comment at the start of the file that tells the computer how to run it. We will always use #!/usr/bin/env python for Python files. You can read more about them
A docstring is a comment in triple double quotes (""") that serves as a comment for that file, class, function, or method. They are part of the PEP 8 style guide for Python, which we'll be following for our Python code. You can read more about docstrings , and more about PEP 8 .
Now, let's move on and start laying the foundation for our file by writing our __main__ check and creating a class to hold our logic.
What we've done in the first part of this program is create a class called SnakeHeadingController. A class lets you group methods and variables together inside an object. It is a key part of Object Oriented Programming (OOP). It is useful for us, because we're going to have several variables holding data that we'll need to access from different methods.
This class has a docstring like before, and it only has one method defined, __init__. This function is called when you create a new instance of that class and is used for initialization. Right now, we've put pass inside, which is a Python keyword meaning "do nothing." It is a handy placeholder because leaving the body of that function blank would result in an error.
In the second part of the program, we've told the program what to do if it is executed. The variable __name__ is set to __main__ if this file is the main one being executed. If this file is included in another through import, then the following statements do not get run. We will call this the "__main__ check" moving forward. This is the first part of the program that will be executing commands once ROS starts the node. In this case, it is creating an instance of our SnakeHeadingController class and doing nothing else.
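Since the original listing isn't shown here, your file at this point should look roughly like the reconstruction below (the exact docstrings are up to you):

```python
#!/usr/bin/env python
"""Controls the heading of the snake."""


class SnakeHeadingController:
    """ROS interface for controlling the heading of the snake."""
    def __init__(self):
        pass


if __name__ == "__main__":
    SnakeHeadingController()
```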
If you'd like, you can run this piece of code from your terminal with the command:
this command works because of the shebang :)
A keen observer will notice that nothing happened, and our program ended immediately. This is because our code doesn't do anything yet. We call the second part of our code, which creates an instance of a SnakeHeadingController, which gets initialized through the __init__ method, which does nothing. You can put in print commands in various locations if you are confused about the order in which that all happens:
Let's start our ROS specific setup now.
Let's modify the body of the __init__ function to be the following. This is replacing the pass keyword.
Very importantly, we also need to tell Python to import the rospy module, so that we have access to these functions. Add the following just below the docstring:
An import command lets you pull in functionality from other Python files. This is super useful to be able to re-use code and write small, modular files. If you look at the source of the snakesim package, you will see it is full of imports.
Here is our current file for reference:
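A reconstruction of roughly what the file should look like at this stage (not the official listing):

```python
#!/usr/bin/env python
"""Controls the heading of the snake."""

import rospy


class SnakeHeadingController:
    """ROS interface for controlling the heading of the snake."""
    def __init__(self):
        rospy.init_node('snake_heading_controller')
        rospy.spin()


if __name__ == "__main__":
    SnakeHeadingController()
```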
Let's run the program again to see what happens. Note that the commands are a little bit more involved since we need the ROS core to be running.
Terminal 1:
Terminal 2:
Compared to the first time we ran this program, it may not seem like much has changed. However, notice that the program will run forever. We need to manually kill it with ctrl+C. This is due to the spin command. Essentially, this command tells the node to wait indefinitely and process message subscriptions. We don't have any current subscriptions, so the program isn't doing anything.
We can see the effect of the init_node command by running this in a third terminal:
You will see your node, snake_heading_controller in the list of running nodes!
Feel free to try changing the name of the node or removing the spin command to see what it does.
In the last section, we learned that spin tells the program to wait indefinitely and process message subscriptions. Let's create some of those subscriptions now.
Before our spin command, add the following lines:
The Subscriber command tells ROS to subscribe to a topic (argument 1), of a specific type (argument 2), and call a function (argument 3) when it receives a new message. We need to define those callback functions now.
Put these lines after __init__:
We are defining methods for our SnakeHeadingController and simply using pass so that we can hold off on the implementation for them. If you remember our block diagram from earlier, we are subscribing to the desired heading and true heading respectively.
The callbacks each have one argument (ignoring self), which is the contents of the ROS message received. We'll talk more about how to interpret those shortly.
Lastly, we need to tell Python where to find these message types. Put this right after our existing import command:
Here is our current file for reference:
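Again, a reconstruction of roughly what the file should look like with the subscribers and callbacks added (topic and callback names follow the ones referenced later in this walkthrough):

```python
#!/usr/bin/env python
"""Controls the heading of the snake."""

import rospy
from geometry_msgs.msg import PoseArray
from std_msgs.msg import Float64


class SnakeHeadingController:
    """ROS interface for controlling the heading of the snake."""
    def __init__(self):
        rospy.init_node('snake_heading_controller')

        # Subscribe to the desired heading and the true pose of the snake.
        rospy.Subscriber('controller/heading', Float64, self.heading_cb)
        rospy.Subscriber('snake/pose', PoseArray, self.pose_cb)

        rospy.spin()

    def heading_cb(self, heading_msg):
        """Callback for desired heading commands."""
        pass

    def pose_cb(self, pose_msg):
        """Callback for the true pose of the snake."""
        pass


if __name__ == "__main__":
    SnakeHeadingController()
```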
If we run the file now, we should see the two new callbacks. We'll use the same first two commands, but we'll use a slightly different third command in order to inspect the node.
Terminal 1:
Terminal 2:
Terminal 3:
You will see it is subscribed to snake/pose and controller/heading like we intended!
Thinking back to our block diagram, our program needs to output the commanded linear velocity. We will need a publisher in order to do this.
Put this by the subscribers code. It can go before or after, but I like to put them before.
This Publisher command is similar to the Subscriber command. It will create a ROS publisher on a given topic (argument 1), of a given type (argument 2), with a specific queue size (argument 3). A queue size of 1 means that the most recent message will always be sent and any older messages waiting to be sent will get dropped. This is good for our application, since we don't want a delay caused by old messages stuck in a buffer. You can read more about queue sizes on the .
Another difference is that we are keeping a reference to the Publisher object. This is important so that we can publish messages via that reference in the future.
Again, we need to tell Python where to find this new message type. Replace the existing PoseArray import command with the following:
Here is our current file for reference:
We can test with the same commands as earlier:
Terminal 1:
Terminal 2:
Terminal 3:
You will see it is now publishing to snake/cmd_vel as we hoped!
Let's quickly talk about how to pull data out of the ROS messages that our callbacks are receiving. The first thing you'll want to do is determine the layout of the message you're receiving. This can be done by browsing the API online or with a terminal command.
Online:
Terminal:
Note that message definitions can be nested. Our PoseArray message holds an array of Pose messages, which is a different message type you can also find on the .
Let's look at the heading callback (heading_cb) first since it has a simpler message. There is only one field, data, which holds a 64 bit floating point number. This is the same as a float type for most Python implementations.
We'll save that to a local variable and figure out what to do with it later. Replace pass with the following command:
Let's look at the pose callback (pose_cb) now. You can see that it contains an array of Poses. From the README.md file included in the snakesim package, we know that this is an array with the pose of each element of the snake starting at the head. We are only interested in the heading of the first segment, which corresponds to the yaw of the pose at index 0.
We know we'll be looking at something like this:
To make things a little bit trickier, this orientation is encoded as a quaternion, not an Euler angle. There are many good reasons ROS uses quaternions instead of Euler angles. It avoids singularities, is compact, and there is no ambiguity about what convention is in use. For humans though, Euler angles are normally easier to understand, so we'll convert to them with the help of a module included with ROS.
First, we'll import the module. Put this with your other import commands:
You can read the documentation on this function .
Next, we need to put the quaternion data into a tuple in (x,y,z,w) order, then call the function to get the Euler angles as a tuple in (roll, pitch, yaw) order. Replace pass with the following:
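The pose callback should end up looking roughly like this reconstruction (assuming the euler_from_quaternion import from the previous step):

```python
    def pose_cb(self, pose_msg):
        """Callback for the true pose of the snake."""
        # The head of the snake is the pose at index 0; its orientation is a quaternion.
        quat = (pose_msg.poses[0].orientation.x,
                pose_msg.poses[0].orientation.y,
                pose_msg.poses[0].orientation.z,
                pose_msg.poses[0].orientation.w)
        __, __, heading = euler_from_quaternion(quat)  # (roll, pitch, yaw); we only need yaw
```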
Here is our current file for reference:
You can run it again if you want, but there shouldn't be any difference from the last time we ran it.
Now that everything is nicely laid out, let's get to the actual logic behind the controller.
The first thing we want to do is figure out where the main loop is going to take place. Since the pose callback is going to happen at a reasonably high rate, it seems safe to put our code in there. If we wanted to be more careful, we could look into using a timer or rate object.
In order to keep things simple, we're going to implement what is called a . Essentially, it will output a fixed-magnitude, positive or negative command depending on the sign of the error. If we are too far left, it will shoot us right. If we are too far right, it will shoot us left. If we wanted to be fancier, we could look at something like a PID controller, but this will be easier to implement and work just fine for our purposes.
On a basic level, our logic is going to look something like this:
From this pseudocode, we know a few things:
we need a variable to hold the magnitude of the angular velocity output
we need to get the heading command data from the other callback
we need a good way to subtract angles that can handle +/- pi being the same
we need a math library to copy the sign of the error
Let's handle these in order:
First, we'll create an ANGULAR_VELOCITY_MAG variable. Put this line in the __init__ method after init_node:
Next, we'll make a variable for tracking the heading command. Put this by the previous line:
Update the heading callback to use this variable rather than a local variable:
In order to handle the difference of angles, we'll use the angles module. Put this line with your imports:
In order to handle the sign of the error, we'll use the math module. Put this line with your imports:
Now that we have all that out of the way, let's actually write the logic for the loop. Modify the body of the pose callback to be the following:
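As a reconstruction, the pose callback with the control logic in place could look like this (assuming the angles and math imports from the previous steps, plus the ANGULAR_VELOCITY_MAG and heading_command variables created in __init__):

```python
    def pose_cb(self, pose_msg):
        """Callback for the true pose of the snake."""
        if self.heading_command is not None:
            # Current heading of the snake's head.
            quat = (pose_msg.poses[0].orientation.x,
                    pose_msg.poses[0].orientation.y,
                    pose_msg.poses[0].orientation.z,
                    pose_msg.poses[0].orientation.w)
            __, __, heading = euler_from_quaternion(quat)

            # Fixed-magnitude command whose sign matches the sign of the error.
            error = shortest_angular_distance(heading, self.heading_command)
            angular_velocity_command = math.copysign(self.ANGULAR_VELOCITY_MAG, error)
```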
Here's the current file for reference:
We'll be able to test this shortly, but it isn't ready just yet.
Earlier we made a publisher and we wrote the logic to get the value to publish. One of the last things we need is actually publishing the value.
Like we stated earlier, this node is going to use a constant value for the linear velocity command for the snake. Let's define that constant now in the __init__ function:
Note that we are using all uppercase for our constants. This is just a convention to make it easier for others to understand our code.
Now that we have the value, we can publish our ROS message. We'll construct a new message of the correct type, populate our values, then publish it. This will take place at the end of the pose callback (pose_cb).
Here's the current file for reference:
We could go ahead and test this right now, but we're going to do a little bit of finishing touches first.
Currently, we have two constants in our code, ANGULAR_VELOCITY and LINEAR_VELOCITY. If we want to change them, we would need to edit the Python file for the node. That may be OK if we never expect these values to change, but ROS actually has a method for handling parameters that lets you set them the moment you launch a node. This can be useful if the parameters might need to be different under different use cases, or if it is a value you want the end user to tune.
Replace our existing definitions of these variables with the below:
Now, we are using rospy in order to get these parameters from a parameter server. We'll talk about how to set those through ROS in just a minute. The name of the parameter is the first argument, and the default value is the second.
Note the leading tilde, which makes these local parameters. In general, you will always want to use local parameters. Also note that this needs to take place after the call to init_node.
Here's the current file for reference:
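Putting all of the pieces together, a reconstruction of the full node at this point looks roughly like the following. The parameter names and default values are assumptions (the linear_velocity name matches the launch-file argument used later), and the official solution may differ in its details.

```python
#!/usr/bin/env python
"""Controls the heading of the snake."""

import math

import rospy
from angles import shortest_angular_distance
from geometry_msgs.msg import PoseArray, Twist
from std_msgs.msg import Float64
from tf.transformations import euler_from_quaternion


class SnakeHeadingController:
    """ROS interface for controlling the heading of the snake."""
    def __init__(self):
        rospy.init_node('snake_heading_controller')

        self.heading_command = None
        self.ANGULAR_VELOCITY_MAG = rospy.get_param('~angular_velocity', 2.0)
        self.LINEAR_VELOCITY = rospy.get_param('~linear_velocity', 2.0)

        self.twist_pub = rospy.Publisher('snake/cmd_vel', Twist, queue_size=1)

        rospy.Subscriber('controller/heading', Float64, self.heading_cb)
        rospy.Subscriber('snake/pose', PoseArray, self.pose_cb)

        rospy.spin()

    def heading_cb(self, heading_msg):
        """Callback for desired heading commands."""
        self.heading_command = heading_msg.data

    def pose_cb(self, pose_msg):
        """Callback for the true pose of the snake."""
        if self.heading_command is not None:
            quat = (pose_msg.poses[0].orientation.x,
                    pose_msg.poses[0].orientation.y,
                    pose_msg.poses[0].orientation.z,
                    pose_msg.poses[0].orientation.w)
            __, __, heading = euler_from_quaternion(quat)

            error = shortest_angular_distance(heading, self.heading_command)
            angular_velocity_command = math.copysign(self.ANGULAR_VELOCITY_MAG, error)

            # Wrap the commands up in a Twist message and publish.
            twist_msg = Twist()
            twist_msg.linear.x = self.LINEAR_VELOCITY
            twist_msg.angular.z = angular_velocity_command
            self.twist_pub.publish(twist_msg)


if __name__ == "__main__":
    SnakeHeadingController()
```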
Again, we could go ahead and test this script right now. However, we're going to set something up to make that an easier process in the next step.
Launching this program is somewhat involved. You need to start the ROS core in one terminal window, then launch this node in another. Can you imagine how many terminal windows you would need for a large project? Thankfully, ROS has a system to let you launch multiple nodes at once. It even lets you do some fancy scripting to set parameters, launch nodes based off conditionals, and nest files through include tags.
We need to create a new file inside the launch directory that we created earlier. Let's name it snake_controller.launch.
Open this file up, and paste this in:
This is an XML description (similar to HTML) of the ROS network we're going to launch. You can see we put in the type of the node (the filename for Python nodes), the package, and a name. The name can be anything we want, but it made sense to repeat the name of the node type.
As we get more nodes, we'll put them in here too so that we can launch them all at once.
Let's test our code now. Since we've added new nodes since the last time we've built our package, we need to re-build our project. Run the following command from anywhere inside your catkin_ws directory.
We also need to source the project. This lets the shell know what ROS programs are available to call. If you've added it to your bashrc (if you've followed the guide, you've done this), you can run the below command from anywhere:
You only need to do that in any terminal windows that are currently open. Any new ones will have that done automatically. You could also close and re-open them all instead of running that command if you really wanted ...
If you haven't added it to your bashrc, or just want to know how to source a workspace manually, this is the command (from the catkin_ws directory):
Now, let's launch the code:
This isn't super exciting because it isn't receiving any input and isn't connected to the snake game. We'll fix that shortly.
You could launch the snake game in another shell with this command:
Or you could launch both of them at once by creating a launch file that includes both of them. Make a new launch file in the same directory as the last one called snake.launch. This will be our primary launch file to get the whole thing running.
Put the following lines in:
This file will call the snakesim.launch from that package, then call the snake_controller.launch file that we just wrote. We can run it with the below command:
You can also check that all the connections between the nodes are working by visualizing it with rqt_graph. In a new terminal (with display capabilities), run the following command.
You should see a cool little diagram showing how your nodes are connected by various topics.
Right now, it still isn't super exciting to run our node. The heading controller isn't receiving any input, so it isn't telling the snake to do anything. There are a few ways we can fix this.
Let's look at a way to do it through the terminal, then we'll look at a way to do it through a GUI.
You can manually publish ROS messages from the terminal using the rostopic pub command. To give the snake a heading, you can run the following:
You can see we need to specify the topic, the type, and the data. This will tab complete, so that is cool. You can manually give the snake a few headings this way and make sure the controller works. 0 corresponds to the right and it increases counter-clockwise, measured in radians.
If you want a GUI to send messages, we can use a program called rqt_publisher.
Launch it with the following command on a terminal with display capabilities:
You need to use a drop down to select the topic, then hit the plus to add it. Then you need to hit a drop down arrow to be able to access the data field. Once you have all that, hit the checkbox to start publishing. The controller should behave the same way regardless of where the data is coming from.
This is a quick note about how to work with arguments and parameters. This is super useful for creating a robust system of modular launch files. We won't go super in depth, and it isn't needed for the tutorial. However, you might find it useful to know.
Remember how we made our two constants ROS parameters? Let's say that we really want to be able to make our snake's linear velocity an argument that we can control by specifying a value when we run roslaunch. We want to be able to do this:
The cool thing is that we can, and it isn't super difficult. First we need to modify the snake_controller.launch:
Here, we've specified that we're expecting an argument called linear_velocity. If we don't have anything explicitly set, we'll use a default value of 2.0. When we start the snake_heading_controller node, we'll pass the value of our argument to the node as a parameter.
Remember the distinction that arguments are for launch files, parameters are for nodes.
If we try the above command, nothing different will happen. Why? Because we are calling snake.launch, not snake_controller.launch. We need to set up a basic pass-through. Let's modify snake.launch to do that now:
Now our above command will work :)
Note that you can have different default values at all the different levels. If you want, make them all different and experiment with the speed of the snake when you remove some of the new stuff that we just added.
If you're curious about what other shenanigans you can achieve with launch files, the has a full guide to the syntax.
Congratulations, you made it to the end! You just made your first ROS node to build a controller for the snake. This node is relatively simple, but hopefully you learned a lot about using ROS that will help you in creating the next two nodes. Those next two tutorials are much faster since we got all the basic groundwork taken care of in this one.
Below you'll see the final file we developed, plus a breakdown of it in case if you forget what a specific section is doing. Feel free to experiment with making changes to this node, or move on to the next one.
This is our shebang. It tells the command line how to execute our program.
This is the docstring for the file. It gives a quick description and may also include a license.
These are our import statements. They let us pull functionality from other Python files. We've split them into two groups for visual purposes / convention.
This is the class definition for SnakeHeadingController and the init method. The init method is run when a new SnakeHeadingController is made. It initializes the node through ROS and gives it a name. The heading_command variable is initialized as None so that we can distinguish between a lack of a command and a command of 0. Two constants are created from ROS parameters, which can be set in launch files or on the commandline.
The publishers and subscribers are created by specifying the topic and type. The publisher also specifies a queue size and the subscribers specify a callback, which is a function that gets called when a new message is received. A reference to the publisher is retained so that we can publish to it in the future.
Lastly, we call spin. This command will block forever until the node is shut down. While it is blocking, the node responds to input by running the callback methods for each new message it receives.
This is our callback for heading commands. It has a short docstring, and we simply save the data from the message so that we can retrieve it later.
This is the callback for pose messages from the snake. We've given it a nice docstring and have a little check before running the main logic for the controller. The orientation quaternion is converted into a yaw angle, we calculate the error, then determine our control output. The control output is wrapped up into a ROS message then published.
This is the section of our code that gets called first when we start the program. We simply make a SnakeHeadingController object, then let the __init__ method take over.
Our snake is now controlled by heading. Why don't we try to extend it another layer and control it with position? That seems useful if we want to tell it to chase the goal or give it a series of waypoints.
We'll create a closed-loop controller like last time. Our inputs will be the current position of the snake's head and the commanded position. Our output will be the required heading to reach that position.
This will be just like the last program. Let's call it snake_position_controller and put it in the nodes folder like last time. Make sure to make it executable!
Again, we'll start the program by writing a shebang and docstring. The basic groundwork for the file will be really similar to last time too. See if you can write out the __main__ check, declare a class, and prototype all the methods our class will need by writing a docstring then using pass.
You should have gotten something like this:
This section will also be very similar to last time. We're going to initialize the node, create our subscribers, and create our publishers. Take a look on the ROS wiki, at both the and packages, and see if you can pick out a good message type for the subscribers and publishers. Remember that the message types need to match if we're receiving data from or sending data to nodes that have already been written.
Once you've got a handle on that, go ahead and write the __init__ method. You can also update the argument names in the callbacks to be more explicit. Don't forget any imports too!
You should have gotten something like this:
You can see the Point message was selected for the controller/position topic. Other notable options were Pose, Pose2D, and time-stamped variants of those messages (PointStamped and PoseStamped; there is no Pose2DStamped). Since our goal is just a position and doesn't need any orientation data, we won't use a Pose type. Additionally, Pose2D is deprecated and shouldn't be used anyway. We could have time-stamped the message, but a timestamp isn't needed. We'll always just chase the latest position command.
There are many other messages worth looking at on the if you have a specific need in the future.
This controller can be implemented much like the last one. Create a variable to track the desired position, then modify the position callback in order to set it.
Try to write out the logic in the snake callback. You'll need a quick check, then somehow you'll need to calculate the heading command. This can simply be the heading from the point you are currently at to the point you want to be at. Once you have that, publish it as a ROS message. Don't forget any imports!
If you need a hint, the atan2 function will help you out.
You should have come up with something like this:
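A reconstruction of what the position controller might look like is below; the official solution may differ in its details, but note the local copy of the commanded position in the pose callback, which the next paragraphs discuss.

```python
#!/usr/bin/env python
"""Controls the position of the snake."""

import math

import rospy
from geometry_msgs.msg import PoseArray, Point
from std_msgs.msg import Float64


class SnakePositionController:
    """ROS interface for controlling the position of the snake."""
    def __init__(self):
        rospy.init_node('snake_position_controller')

        self.position_command = None

        self.heading_pub = rospy.Publisher('controller/heading', Float64, queue_size=1)

        rospy.Subscriber('controller/position', Point, self.position_cb)
        rospy.Subscriber('snake/pose', PoseArray, self.pose_cb)

        rospy.spin()

    def position_cb(self, point_msg):
        """Callback for desired position commands."""
        self.position_command = (point_msg.x, point_msg.y)

    def pose_cb(self, pose_msg):
        """Callback for the true pose of the snake."""
        if self.position_command is not None:
            # Local copy so the command can't change partway through the calculation.
            position_command = self.position_command

            heading = math.atan2(position_command[1] - pose_msg.poses[0].position.y,
                                 position_command[0] - pose_msg.poses[0].position.x)
            self.heading_pub.publish(Float64(heading))


if __name__ == "__main__":
    SnakePositionController()
```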
Something you'll notice is how a local copy of position is made. Look at these two lines:
In rospy, the callbacks all happen in different threads. A thread is a separate, simultaneously running task. In Python, they aren't actually simultaneous; instead, the computer jumps back and forth between different threads at a high rate. A bytecode command, which is the smallest chunk that Python commands can be split into, can get interleaved between two threads as the computer jumps back and forth.
Essentially, you're not guaranteed that a callback will run all the way through before a different callback gets started. If we were to remove the two lines referenced earlier, it is possible that the call to atan2 would be started and the program would run a few bytecode commands in order to calculate the value of the first argument. Then, the execution could go to a different thread, where the first callback is being run. This could change the self.position tuple to a completely different value while the earlier thread calculating the atan2 is 'paused'. Execution then jumps back to the earlier thread and keeps working on the atan2 call. The second argument is now calculated with the new point. When atan2 is actually called, it would receive the Y value from the first point and the X value from the second. It would then be calculating the heading to a location that isn't either of the two points we wanted it to go to!
For this specific case, it wouldn't be a critical issue since it can't lead to a program crash (as far as I know ...). The pose callback would also be running at a much higher rate than the position callback gets new data, so any weirdness would be quickly fixed in the next iteration.
Regardless of how big an issue it could potentially be, it is still good to write threadsafe code. This is discussed further in 06_next-steps, along with how making a local copy resolves that issue. Making a local copy will not always resolve the issue, but it does for this specific case. It also discusses another technique to write threadsafe code using Locks.
Updating the launch file is relatively simple. Put this line in snake_controller.launch right below the other node:
Running the code follows the same procedure as before. Make sure to catkin build and source!
For testing, you can use either rostopic pub or rqt_publisher. See if you can figure out the command for rostopic pub. Tab completion will be your friend :)
Congratulations, you just finished your second node and are almost done! This tutorial was much more hands off, so hopefully you were able to apply a lot of the knowledge you gained while writing the first node in order to write this one.
Below, you'll see the final file we developed. Feel free to experiment with making changes to this node, or move on to the next one.
Our snake is now controlled by position. If we want to keep things really simple, the snake can just chase the goal with no regard for walls or its own body. You'll find it is pretty effective at the start of the game, but the approach has some very clear flaws once the snake builds up any decent length.
This simple relay is what we'll develop now. In the future, you can expand on this controller by fixing these problems. We'll give you a few more notes on that in the next document.
This is going to start like the last program in creating the file and making it executable, then adding the shebang and docstring. However, things are going to be different once we start writing code. Go ahead and do all the above, and write pass after the __main__ check.
You should have something like this:
We could create a class like we've done in the past. However, this node is going to be really simple. We don't have any persistent data to keep track of like commanded headings or positions. We'll also only have a single subscriber, where we can put all of the logic. The logic won't even be anything spectacular. It will simply be publishing the data from a PointStamped message as a Point type. For something really small like this, it can be appropriate to skip over creating a class, and put everything after the __main__ check.
Look at the below program to see for yourself:
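Here is a reconstruction of roughly what that relay node could look like; it assumes the simulator publishes the goal as a PointStamped on a topic named snake/goal (check the snakesim README for the actual topic name):

```python
#!/usr/bin/env python
"""Relays the goal position to the position controller."""

import rospy
from geometry_msgs.msg import Point, PointStamped

if __name__ == "__main__":
    rospy.init_node('snake_goal_relay')

    position_pub = rospy.Publisher('controller/position', Point, queue_size=1)

    # Strip the header off each PointStamped and republish the bare Point.
    rospy.Subscriber('snake/goal', PointStamped,
                     lambda msg, pub=position_pub: pub.publish(msg.point))

    rospy.spin()
```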
We've used something called a lambda in order to put the logic right into the call to create a subscriber. Rather than include the name of a function, we used the lambda keyword, which lets us put the arguments separated by commas, then the logic after a colon. Note that you can also pass in extra arguments and even assign them using an equal sign in the argument list.
You can learn more about lambdas .
Updating the launch file is no different from the position controller. Give it a shot and see if the snake runs. Remember to catkin build and source!
You should have gotten something like this:
You will find you no longer need to manually plug in data in order to test. Instead the snake controller is done, and will run automatically. Use rqt_plot to see how all the nodes work together!
Congratulations, you just finished writing a basic controller for the snake! Hopefully you learned a great deal about how ROS works, and how to write nodes in Python. You should be confident to experiment with your own ideas in to improve upon (or completely rewrite!) the controller we've just finished making.
Below, you'll see the final file we developed. Feel free to experiment with making changes to this node, any of the previous ones, or start making your own. There is some information to help you in the next document.
You did it. You finished the guided tutorials. You likely learned a lot about ROS and Python along the way. Hopefully you take some pride in your accomplishment because learning to use ROS is no easy feat.
In this document, we'll present some ideas on how you could improve upon this controller in order to expand on your ROS knowledge. We'll also give you some tips and insight into additional topics you may encounter while working on your improvements.
Feel free to use some of these ideas in the list, or go off and do your own thing. This is designed to be open ended. You can build on the existing controller or tear it down completely to make your own system.
Self Avoidance System
The snake dies when it intersects with itself. You could create a new node to sit between the goal relay and the position controller that will re-route the snake if it is detecting a collision.
You could also remove the goal relay completely and design your own waypoint controller that uses a graph-based planning algorithm. For example, the grassfire, Dijkstra, or A* (pronounced "A star") algorithms could all work.
If you want to get really fancy, you could consider the fact that the snake is going to move as you execute the motion plan, so it may be acceptable to intersect with the current position by some amount. Cutting corners like this may save you valuable time if you're trying to maximize the score in a fixed time frame.
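To make the planning idea concrete, here is a minimal sketch of a grassfire-style (breadth-first) search over a grid of cells. Everything about it is an assumption for illustration: the grid size, how cells are represented, and the obstacle set built from the walls and the snake's body are not provided by the snakesim or snake_tutorial packages.

#!/usr/bin/env python
"""Sketch of a grassfire (breadth-first) planner on a grid."""
from collections import deque

def grassfire(start, goal, obstacles, width, height):
    """Return a list of grid cells from start to goal, or None if blocked."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk back through parents to reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return list(reversed(path))
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in obstacles and nxt not in parents):
                parents[nxt] = cell
                queue.append(nxt)
    return None  # no safe path found

# Hypothetical usage: plan from the head to the goal, avoiding body cells
# path = grassfire((0, 0), (5, 5), set(body_cells), 10, 10)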
Wall Avoidance System
Another way that the snake dies is by intersecting with the wall. Commonly, this is an issue when the snake approaches a goal near the wall in a way that doesn't leave room for the snake to get out.
Like the above suggestion, you could create a new node between the goal relay and position controller that re-routes the snake to a safe path. You could also remove it completely and build it right into a fancier waypoint controller.
This could be built in addition to the self avoidance system using some sort of planning algorithm that looks past reaching the next goal. For example, Rapidly-exploring Random Trees (RRT) could be used to find a safe path to the goal and to an arbitrary secondary location nearby.
Machine Learning
If you have experience with machine learning, you can probably apply it to this problem. If you want GPU acceleration, take a look at modifying the Docker scripts to use the . You may also run into issues with Python versions. There is a way to run ROS with Python 3, but it requires a little bit of extra work. You can also develop a multi-container application and use Docker's networking tools to bridge between this container and another running your machine learning stack. These are all more advanced topics, so we won't cover them in this document.
Beginners sometimes like to put all the logic for a system in one big node. This is like if we put the heading, position, and goal relay nodes all in one place. Technically, it will work fine with ROS if you have all your inputs and outputs set up correctly. However, it may not be the best way to do things.
A really big benefit of ROS is how modular a system of nodes can be. Let's say that you want to experiment with using a more intelligent waypoint controller rather than just feeding in the goal position. You can write a new node for that purpose and create a different launch file to use it. You could even make the launch file pick the correct node programmatically using command-line arguments!
It can also be advantageous to split things up for debugging purposes. If we put all the logic for the controller in one node, then started having issues with the heading control of the system, we'd need to go in and add print statements or use a debugger in order to check the output. With the way we wrote it with ROS, we can use command-line tools like rostopic echo in order to validate the output. We could also isolate the one node we want to verify and perform unit testing using rostopic pub.
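For example, with just the heading controller running, you can watch its output and feed it a test command straight from the command line (the topic names below match the controller built in this tutorial):

rostopic echo /snake/cmd_vel
rostopic pub /controller/heading std_msgs/Float64 "data: 1.57"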
Lastly, using ROS messages defines a common standard for nodes to communicate. If you put everything in one big node, you will probably use internal Python data structures to send information back and forth. This may make things more difficult to maintain later on if you don't define these interfaces very clearly.
As a general rule of thumb, split up logic into multiple nodes if:
there are multiple logical steps that may be re-used or replaced in the future
you want to be able to debug data sent between two logical steps using ROS
the data sent between multiple logical steps has a complex format that a ROS message exists for
Here is the general rule of thumb for naming ROS-specific items:
packages: snake_case
nodes: snake_case
topics: snake_case/with/optional/nesting
For Python, follow the :
classes: PascalCase
functions: snake_case
variables: snake_case
constants: CAPITALIZED_WITH_UNDERSCORES
"private" functions: _snake_case_with_leading_underscore
"private" variables: _snake_case_with_leading_underscore
Generally, you want to use a ROS message type that is intended for the application you have in mind. For example, the Vector3 message and the Point message both have three fields, x, y, and z. However, Vector3 is designed to represent vectors in 3D space, while Point is designed to represent points in 3D space.
Another bad practice is using a message's fields for purposes other than they were intended. For example, ROS has a Quaternion message, but no standard EulerAngle message. If you took a Quaternion message and made the x, y, and z fields correspond to roll, pitch, and yaw, that would be really confusing for future users. It is also generally advisable to use quaternions over Euler angles, since there is no ambiguity about how to interpret them.
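As a small illustration (the values are arbitrary), the two messages carry the same fields but communicate different intent:

from geometry_msgs.msg import Point, Vector3

goal = Point(x=3.0, y=4.0, z=0.0)        # a location in space
velocity = Vector3(x=1.0, y=0.0, z=0.0)  # a direction and magnitude

# Avoid repurposing fields: packing roll/pitch/yaw into a Quaternion's
# x, y, z would mislead anyone expecting real quaternion components.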
ROS has some standard message types that you can look at online:
If you feel really adventurous, you can define your own messages, but that is outside the scope of this document.
Earlier, we discussed that for rospy, all of the subscriptions happen in their own threads. This can be an issue if you have code like the following:
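def callback_one(self, msg):
    self.important_variable = msg.data

def callback_two(self, msg):
    # Check that we can take the log of important_variable
    if self.important_variable > 0:
        self.some_variable = msg.data * math.log(self.important_variable)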
What could happen is that callback_two gets called and performs the check. Then callback_one gets called and sets important_variable to a negative number. Then, the execution goes back to callback_two and we crash the program since math.log throws an error if you try to take the natural logarithm of a negative number.
You could also have issues with something like iterating through an array if the array is being changed in a different callback.
There are a few ways to get around this:
use locks
use atomic operations
Locks
Locks (similar to mutexes in other languages) let you enforce rules about how threads can access shared data. Let's see what the earlier example looks like when locks are used:
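import threading
# ...
self.lock = threading.Lock()
# ...
def callback_one(self, msg):
    self.lock.acquire()
    self.important_variable = msg.data
    self.lock.release()

def callback_two(self, msg):
    self.lock.acquire()
    # Check that we can take the log of important_variable
    if self.important_variable > 0:
        self.some_variable = msg.data * math.log(self.important_variable)
    self.lock.release()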
This works because one callback can't acquire the lock until the other has released it; acquiring is a blocking call by default. Locks are provided through the threading library, and you can read more about them .
Atomic Operations
This is a more advanced concept, but it can make things a little bit simpler if you understand it well.
In CPython at least, there is something called the Global Interpreter Lock, or GIL. It restricts the interpreter so that only one thread executes Python bytecode at a time; the interpreter simply switches back and forth between running bytecode from the different "threads". The benefit is that if you know a certain operation is atomic (meaning it is a single bytecode instruction), you don't need to worry as much about breaking things.
This is why it was safe in the snake_position_controller node to make a local copy. Making a local copy of a tuple is atomic. Tuples are immutable, meaning that when it is updated in another thread, a new one is made, the variable is changed to point to the new one, and the old one is thrown out if nothing else is still using it. For our case, we were still using it with our local variable, so it was kept safe for us. Lists are a good example of a datatype that is not immutable. If our X and Y values had been stored in a list, we would not have been thread safe.
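As a small illustration, drawing on the snake_cb logic from the position controller we wrote (the heading math is the same as in that node):

# Safe: self.position is a tuple. Another callback can only replace the
# whole tuple, so our local name keeps pointing at a consistent pair.
position = self.position
heading = math.atan2(position[1] - pose[1], position[0] - pose[0])

# Risky: if the coordinates lived in a list that another callback mutated
# in place, the two reads below could mix an old X with a new Y.
heading = math.atan2(self.position[1] - pose[1], self.position[0] - pose[0])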
A full list of atomic operations is provided .
There are going to be many times where ROS wants to make you slam your head into a wall. Luckily there are tools to reduce that frustration and help spot issues.
This is a short list of the many options available:
: This ROS package allows users to visualize the ROS computation graph. In other words, you can see what nodes are active as well as how nodes are communicating. Here is an example of a graph:
: This ROS package is an automated debugging tool that finds issues by searching your workspace and graph. It can find things like improperly set up packages and nodes with unconnected topics. Additionally it has a fun name.
: This command-line tool can be used to debug nodes by inspecting the messages sent on connected topics. The commands you might find most useful are list, echo, and pub.
If you're part of the Autonomous Robotics Club of Purdue, you can post questions to other members on Slack. This is a really good way to get quick feedback.
If you're looking for other resources online, there are a few official ones:
And of course, there are always other forum sites like . You can also find resources on YouTube that may help with the explanation of general concepts.
If you're looking for a textbook-like experience, there is an excellent free book called .
Feel free to give us feedback on this tutorial. If you are part of the Autonomous Robotics Club of Purdue, let us know about any potential improvements or errors you found through Slack. If you have an interest in creating your own tutorial to add to this package, let us know.
If you're outside of the club and found this useful, please let us know. You can give the repo some stars on . You can also create GitHub Issues for any problems you found, or open up a PR if you have a fix. If you want to get in contact with the maintainer, send us an email through .
snake_tutorial/ a ROS python package to teach basic control of the snake game
snakesim/ a ROS python package to run the snake game
rosnode: This is another useful command-line tool for debugging nodes. You can use list to see all active nodes, then info to print out various information about a given node.
Python Specific Debugger (PDB): This is a tool built to help debug python applications. It has nothing to do with ROS. It has a bit of a learning curve, but it was used heavily in making the snakesim package. It may be worth using if you think you have issues inside of a node that you can't debug by looking at ROS messages going in and out. It will allow you to step through your code using breakpoints, and monitor the value of variables as you do so.
./arc_tutorials/docker/docker-run.sh
catkin_ws/ <-- You should be here
│
└───src/
│
└───arc_tutorials/
│
└───clock_tutorial/ (deprecated tutorials)
│
└───docker/ (virtualization)
│
└───docs/ (setup instructions)
│
└───snake_tutorial/ (ROS tutorial)
│
└───snakesim/ (snake_tutorial backend)
ls catkin_ws
cd catkin_ws
catkin build
ls
source devel/setup.bash
cd src/arc_tutorials
roslaunch snakesim snakesim.launch
roslaunch snake_tutorial snake.launch
catkin_ws/
│
└───src/
│ └───example_package/
│ └───another_package/
│
└───devel/
| └─setup.bash
|
└───build/
|
└───logs/
|
└───install/
source devel/setup.bash
example_package/
│
└───nodes/
│
└───launch/
│
└───CMakeLists.txt
│
└───package.xml
catkin_create_pkg snake_controller rospy geometry_msgs std_msgs snakesim tf python-angles
cd snake_controller/
<?xml version="1.0"?>
<package format="2">
<name>snake_controller</name>
<version>0.0.0</version>
<description>The snake_controller package</description>
<!-- One maintainer tag required, multiple allowed, one person per tag -->
<!-- Example: -->
<!-- <maintainer email="[email protected]">Jane Doe</maintainer> -->
<maintainer email="[email protected]">jamesb</maintainer>
<!-- One license tag required, multiple allowed, one license per tag -->
<!-- Commonly used license strings: -->
<!-- BSD, MIT, Boost Software License, GPLv2, GPLv3, LGPLv2.1, LGPLv3 -->
<license>TODO</license>
<!-- Url tags are optional, but multiple are allowed, one per tag -->
<!-- Optional attribute type can be: website, bugtracker, or repository -->
<!-- Example: -->
<!-- <url type="website">http://wiki.ros.org/snake_controller</url> -->
<!-- Author tags are optional, multiple are allowed, one per tag -->
<!-- Authors do not have to be maintainers, but could be -->
<!-- Example: -->
<!-- <author email="[email protected]">Jane Doe</author> -->
<!-- The *depend tags are used to specify dependencies -->
<!-- Dependencies can be catkin packages or system dependencies -->
<!-- Examples: -->
<!-- Use depend as a shortcut for packages that are both build and exec dependencies -->
<!-- <depend>roscpp</depend> -->
<!-- Note that this is equivalent to the following: -->
<!-- <build_depend>roscpp</build_depend> -->
<!-- <exec_depend>roscpp</exec_depend> -->
<!-- Use build_depend for packages you need at compile time: -->
<!-- <build_depend>message_generation</build_depend> -->
<!-- Use build_export_depend for packages you need in order to build against this package: -->
<!-- <build_export_depend>message_generation</build_export_depend> -->
<!-- Use buildtool_depend for build tool packages: -->
<!-- <buildtool_depend>catkin</buildtool_depend> -->
<!-- Use exec_depend for packages you need at runtime: -->
<!-- <exec_depend>message_runtime</exec_depend> -->
<!-- Use test_depend for packages you need only for testing: -->
<!-- <test_depend>gtest</test_depend> -->
<!-- Use doc_depend for packages you need only for building documentation: -->
<!-- <doc_depend>doxygen</doc_depend> -->
<buildtool_depend>catkin</buildtool_depend>
<build_depend>geometry_msgs</build_depend>
<build_depend>python-angles</build_depend>
<build_depend>rospy</build_depend>
<build_depend>snakesim</build_depend>
<build_depend>std_msgs</build_depend>
<build_depend>tf</build_depend>
<build_export_depend>geometry_msgs</build_export_depend>
<build_export_depend>python-angles</build_export_depend>
<build_export_depend>rospy</build_export_depend>
<build_export_depend>snakesim</build_export_depend>
<build_export_depend>std_msgs</build_export_depend>
<build_export_depend>tf</build_export_depend>
<exec_depend>geometry_msgs</exec_depend>
<exec_depend>python-angles</exec_depend>
<exec_depend>rospy</exec_depend>
<exec_depend>snakesim</exec_depend>
<exec_depend>std_msgs</exec_depend>
<exec_depend>tf</exec_depend>
<!-- The export tag contains other, unspecified, tags -->
<export>
<!-- Other tools can request additional information be placed here -->
</export>
</package>
<?xml version="1.0"?>
<package format="2">
<name>snake_controller</name>
<version>0.0.0</version>
<description>The snake_controller package</description>
<!-- One maintainer tag required, multiple allowed, one person per tag -->
<!-- Example: -->
<!-- <maintainer email="[email protected]">Jane Doe</maintainer> -->
<maintainer email="[email protected]">jamesb</maintainer>
<!-- One license tag required, multiple allowed, one license per tag -->
<!-- Commonly used license strings: -->
<!-- BSD, MIT, Boost Software License, GPLv2, GPLv3, LGPLv2.1, LGPLv3 -->
<license>TODO</license>
<!-- Author tags are optional, multiple are allowed, one per tag -->
<!-- Authors do not have to be maintainers, but could be -->
<!-- Example: -->
<!-- <author email="[email protected]">Jane Doe</author> -->
<buildtool_depend>catkin</buildtool_depend>
<exec_depend>geometry_msgs</exec_depend>
<exec_depend>python-angles</exec_depend>
<exec_depend>rospy</exec_depend>
<exec_depend>snakesim</exec_depend>
<exec_depend>std_msgs</exec_depend>
<exec_depend>tf</exec_depend>
</package>
<?xml version="1.0"?>
<package format="2">
<name>snake_controller</name>
<version>1.0.0</version>
<description>A basic controller for the snakesim package</description>
<maintainer email="[email protected]">Purdue Pete</maintainer>
<license>BSD 3 Clause</license>
<author email="[email protected]">Purdue Pete</author>
<buildtool_depend>catkin</buildtool_depend>
<exec_depend>geometry_msgs</exec_depend>
<exec_depend>python-angles</exec_depend>
<exec_depend>rospy</exec_depend>
<exec_depend>snakesim</exec_depend>
<exec_depend>std_msgs</exec_depend>
<exec_depend>tf</exec_depend>
</package>
cmake_minimum_required(VERSION 3.0.2)
project(snake_controller)
## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)
## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
geometry_msgs
python-angles
rospy
snakesim
std_msgs
tf
)
## System dependencies are found with CMake's conventions
# find_package(Boost REQUIRED COMPONENTS system)
## Uncomment this if the package has a setup.py. This macro ensures
## modules and global scripts declared therein get installed
## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html
# catkin_python_setup()
################################################
## Declare ROS messages, services and actions ##
################################################
## To declare and build messages, services or actions from within this
## package, follow these steps:
## * Let MSG_DEP_SET be the set of packages whose message types you use in
## your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...).
## * In the file package.xml:
## * add a build_depend tag for "message_generation"
## * add a build_depend and a exec_depend tag for each package in MSG_DEP_SET
## * If MSG_DEP_SET isn't empty the following dependency has been pulled in
## but can be declared for certainty nonetheless:
## * add a exec_depend tag for "message_runtime"
## * In this file (CMakeLists.txt):
## * add "message_generation" and every package in MSG_DEP_SET to
## find_package(catkin REQUIRED COMPONENTS ...)
## * add "message_runtime" and every package in MSG_DEP_SET to
## catkin_package(CATKIN_DEPENDS ...)
## * uncomment the add_*_files sections below as needed
## and list every .msg/.srv/.action file to be processed
## * uncomment the generate_messages entry below
## * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...)
## Generate messages in the 'msg' folder
# add_message_files(
# FILES
# Message1.msg
# Message2.msg
# )
## Generate services in the 'srv' folder
# add_service_files(
# FILES
# Service1.srv
# Service2.srv
# )
## Generate actions in the 'action' folder
# add_action_files(
# FILES
# Action1.action
# Action2.action
# )
## Generate added messages and services with any dependencies listed here
# generate_messages(
# DEPENDENCIES
# geometry_msgs# std_msgs
# )
################################################
## Declare ROS dynamic reconfigure parameters ##
################################################
## To declare and build dynamic reconfigure parameters within this
## package, follow these steps:
## * In the file package.xml:
## * add a build_depend and a exec_depend tag for "dynamic_reconfigure"
## * In this file (CMakeLists.txt):
## * add "dynamic_reconfigure" to
## find_package(catkin REQUIRED COMPONENTS ...)
## * uncomment the "generate_dynamic_reconfigure_options" section below
## and list every .cfg file to be processed
## Generate dynamic reconfigure parameters in the 'cfg' folder
# generate_dynamic_reconfigure_options(
# cfg/DynReconf1.cfg
# cfg/DynReconf2.cfg
# )
###################################
## catkin specific configuration ##
###################################
## The catkin_package macro generates cmake config files for your package
## Declare things to be passed to dependent projects
## INCLUDE_DIRS: uncomment this if your package contains header files
## LIBRARIES: libraries you create in this project that dependent projects also need
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## DEPENDS: system dependencies of this project that dependent projects also need
catkin_package(
# INCLUDE_DIRS include
# LIBRARIES snake_controller
# CATKIN_DEPENDS geometry_msgs python-angles rospy snakesim std_msgs tf
# DEPENDS system_lib
)
###########
## Build ##
###########
## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(
# include
${catkin_INCLUDE_DIRS}
)
## Declare a C++ library
# add_library(${PROJECT_NAME}
# src/${PROJECT_NAME}/snake_controller.cpp
# )
## Add cmake target dependencies of the library
## as an example, code may need to be generated before libraries
## either from message generation or dynamic reconfigure
# add_dependencies(${PROJECT_NAME} ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
## Declare a C++ executable
## With catkin_make all packages are built within a single CMake context
## The recommended prefix ensures that target names across packages don't collide
# add_executable(${PROJECT_NAME}_node src/snake_controller_node.cpp)
## Rename C++ executable without prefix
## The above recommended prefix causes long target names, the following renames the
## target back to the shorter version for ease of user use
## e.g. "rosrun someones_pkg node" instead of "rosrun someones_pkg someones_pkg_node"
# set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME node PREFIX "")
## Add cmake target dependencies of the executable
## same as for the library above
# add_dependencies(${PROJECT_NAME}_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
## Specify libraries to link a library or executable target against
# target_link_libraries(${PROJECT_NAME}_node
# ${catkin_LIBRARIES}
# )
#############
## Install ##
#############
# all install targets should use catkin DESTINATION variables
# See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html
## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination
# catkin_install_python(PROGRAMS
# scripts/my_python_script
# DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )
## Mark executables for installation
## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_executables.html
# install(TARGETS ${PROJECT_NAME}_node
# RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )
## Mark libraries for installation
## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_libraries.html
# install(TARGETS ${PROJECT_NAME}
# ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
# LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
# RUNTIME DESTINATION ${CATKIN_GLOBAL_BIN_DESTINATION}
# )
## Mark cpp header files for installation
# install(DIRECTORY include/${PROJECT_NAME}/
# DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
# FILES_MATCHING PATTERN "*.h"
# PATTERN ".svn" EXCLUDE
# )
## Mark other files for installation (e.g. launch and bag files, etc.)
# install(FILES
# # myfile1
# # myfile2
# DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
# )
#############
## Testing ##
#############
## Add gtest based cpp test target and link libraries
# catkin_add_gtest(${PROJECT_NAME}-test test/test_snake_controller.cpp)
# if(TARGET ${PROJECT_NAME}-test)
# target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME})
# endif()
## Add folders to be run by python nosetests
# catkin_add_nosetests(test)
cmake_minimum_required(VERSION 3.0.2)
project(snake_controller)
# Find catkin macros
find_package(catkin REQUIRED)
# generates cmake config files and set variables for installation
catkin_package()
snake_controller/
│
└───nodes/
│
└───launch/
│
└───CMakeLists.txt
│
└───package.xml
source devel/setup.bash
echo 'source ~/catkin_ws/devel/setup.bash' >> ~/.bashrc
source ~/.bashrc
chmod +x snake_heading_controller
#!/usr/bin/env python
"""Node to control the heading of the snake.
License removed for brevity
"""class SnakeHeadingController(object):
"""Simple heading controller for the snake."""
def __init__(self):
pass
if __name__ == "__main__":
SnakeHeadingController()
./snake_heading_controller
print "I am here #1"
rospy.init_node('snake_heading_controller')
rospy.spin()
import rospy
#!/usr/bin/env python
"""Node to control the heading of the snake.
License removed for brevity
"""
import rospy
class SnakeHeadingController(object):
"""Simple heading controller for the snake."""
def __init__(self):
rospy.init_node('snake_heading_controller')
rospy.spin()
if __name__ == "__main__":
SnakeHeadingController()
roscore
./snake_heading_controller
rosnode list
# Subscribers
rospy.Subscriber('snake/pose', PoseArray, self.pose_cb)
rospy.Subscriber('controller/heading', Float64, self.heading_cb)
def heading_cb(self, heading_msg):
"""Callback for heading goal."""
pass
def pose_cb(self, pose_msg):
"""Callback for poses from the snake."""
pass
from geometry_msgs.msg import PoseArray
from std_msgs.msg import Float64
#!/usr/bin/env python
"""Node to control the heading of the snake.
License removed for brevity
"""
import rospy
from geometry_msgs.msg import PoseArray
from std_msgs.msg import Float64
class SnakeHeadingController(object):
"""Simple heading controller for the snake."""
def __init__(self):
rospy.init_node('snake_heading_controller')
# Subscribers
rospy.Subscriber('snake/pose', PoseArray, self.pose_cb)
rospy.Subscriber('controller/heading', Float64, self.heading_cb)
rospy.spin()
def heading_cb(self, heading_msg):
"""Callback for heading goal."""
pass
def pose_cb(self, pose_msg):
"""Callback for poses from the snake."""
pass
if __name__ == "__main__":
SnakeHeadingController()
roscore
./snake_heading_controller
rosnode info snake_heading_controller
# Publishers
self.twist_pub = rospy.Publisher('snake/cmd_vel', Twist, queue_size=1)
from geometry_msgs.msg import Twist, PoseArray
#!/usr/bin/env python
"""Node to control the heading of the snake.
License removed for brevity
"""
import rospy
from geometry_msgs.msg import Twist, PoseArray
from std_msgs.msg import Float64
class SnakeHeadingController(object):
"""Simple heading controller for the snake."""
def __init__(self):
rospy.init_node('snake_heading_controller')
# Publishers
self.twist_pub = rospy.Publisher('snake/cmd_vel', Twist, queue_size=1)
# Subscribers
rospy.Subscriber('snake/pose', PoseArray, self.pose_cb)
rospy.Subscriber('controller/heading', Float64, self.heading_cb)
rospy.spin()
def heading_cb(self, heading_msg):
"""Callback for heading goal."""
pass
def pose_cb(self, pose_msg):
"""Callback for poses from the snake."""
pass
if __name__ == "__main__":
SnakeHeadingController()
roscore
./snake_heading_controller
rosnode info snake_heading_controller
rosmsg info std_msgs/Float64
rosmsg info geometry_msgs/PoseArray
heading_command = heading_msg.data
orientation = pose_msg.poses[0].orientation
from tf.transformations import euler_from_quaternion
quat = (pose_msg.poses[0].orientation.x,
pose_msg.poses[0].orientation.y,
pose_msg.poses[0].orientation.z,
pose_msg.poses[0].orientation.w)
__, __, heading = euler_from_quaternion(quat)
#!/usr/bin/env python
"""Node to control the heading of the snake.
License removed for brevity
"""
import rospy
from geometry_msgs.msg import Twist, PoseArray
from std_msgs.msg import Float64
from tf.transformations import euler_from_quaternion
class SnakeHeadingController(object):
"""Simple heading controller for the snake."""
def __init__(self):
rospy.init_node('snake_heading_controller')
# Publishers
self.twist_pub = rospy.Publisher('snake/cmd_vel', Twist, queue_size=1)
# Subscribers
rospy.Subscriber('snake/pose', PoseArray, self.pose_cb)
rospy.Subscriber('controller/heading', Float64, self.heading_cb)
rospy.spin()
def heading_cb(self, heading_msg):
"""Callback for heading goal."""
orientation = pose_msg.poses[0].orientation
def pose_cb(self, pose_msg):
"""Callback for poses from the snake."""
quat = (pose_msg.poses[0].orientation.x,
pose_msg.poses[0].orientation.y,
pose_msg.poses[0].orientation.z,
pose_msg.poses[0].orientation.w)
__, __, heading = euler_from_quaternion(quat)
if __name__ == "__main__":
SnakeHeadingController()
if have_heading_command:
    error = heading - heading_command
    angular_velocity = sign(error) * ANGULAR_VELOCITY_MAG
self.ANGULAR_VELOCITY_MAG = 2.0
self.heading_command = None
self.heading_command = heading_msg.data
from angles import shortest_angular_distance
from math import copysign
if (self.heading_command is not None):
quat = (pose_msg.poses[0].orientation.x,
pose_msg.poses[0].orientation.y,
pose_msg.poses[0].orientation.z,
pose_msg.poses[0].orientation.w)
__, __, heading = euler_from_quaternion(quat)
error = shortest_angular_distance(heading, self.heading_command)
angular_velocity_command = copysign(self.ANGULAR_VELOCITY, error)
#!/usr/bin/env python
"""Node to control the heading of the snake.
License removed for brevity
"""
# Python
from math import copysign
from angles import shortest_angular_distance
# ROS
import rospy
from geometry_msgs.msg import Twist, PoseArray
from std_msgs.msg import Float64
from tf.transformations import euler_from_quaternion
class SnakeHeadingController(object):
"""Simple heading controller for the snake."""
def __init__(self):
rospy.init_node('snake_heading_controller')
self.heading_command = None
self.ANGULAR_VELOCITY = 6.28
# Publishers
self.twist_pub = rospy.Publisher('snake/cmd_vel', Twist, queue_size=1)
# Subscribers
rospy.Subscriber('snake/pose', PoseArray, self.pose_cb)
rospy.Subscriber('controller/heading', Float64, self.heading_cb)
rospy.spin()
def heading_cb(self, heading_msg):
"""Callback for heading goal."""
self.heading_command = heading_msg.data
def pose_cb(self, pose_msg):
"""Callback for poses from the snake."""
if (self.heading_command is not None):
quat = (pose_msg.poses[0].orientation.x,
pose_msg.poses[0].orientation.y,
pose_msg.poses[0].orientation.z,
pose_msg.poses[0].orientation.w)
__, __, heading = euler_from_quaternion(quat)
error = shortest_angular_distance(heading, self.heading_command)
angular_velocity_command = copysign(self.ANGULAR_VELOCITY, error)
if __name__ == "__main__":
SnakeHeadingController()self.LINEAR_VELOCITY = 2.0 twist_msg = Twist()
twist_msg.linear.x = self.LINEAR_VELOCITY
twist_msg.angular.z = angular_velocity_command
self.twist_pub.publish(twist_msg)
#!/usr/bin/env python
"""Node to control the heading of the snake.
License removed for brevity
"""
# Python
from math import copysign
from angles import shortest_angular_distance
# ROS
import rospy
from geometry_msgs.msg import Twist, PoseArray
from std_msgs.msg import Float64
from tf.transformations import euler_from_quaternion
class SnakeHeadingController(object):
"""Simple heading controller for the snake."""
def __init__(self):
rospy.init_node('snake_heading_controller')
self.heading_command = None
self.ANGULAR_VELOCITY = 6.28
self.LINEAR_VELOCITY = 2.0
# Publishers
self.twist_pub = rospy.Publisher('snake/cmd_vel', Twist, queue_size=1)
# Subscribers
rospy.Subscriber('snake/pose', PoseArray, self.pose_cb)
rospy.Subscriber('controller/heading', Float64, self.heading_cb)
rospy.spin()
def heading_cb(self, heading_msg):
"""Callback for heading goal."""
self.heading_command = heading_msg.data
def pose_cb(self, pose_msg):
"""Callback for poses from the snake."""
if (self.heading_command is not None):
quat = (pose_msg.poses[0].orientation.x,
pose_msg.poses[0].orientation.y,
pose_msg.poses[0].orientation.z,
pose_msg.poses[0].orientation.w)
__, __, heading = euler_from_quaternion(quat)
error = shortest_angular_distance(heading, self.heading_command)
angular_velocity_command = copysign(self.ANGULAR_VELOCITY, error)
twist_msg = Twist()
twist_msg.linear.x = self.LINEAR_VELOCITY
twist_msg.angular.z = angular_velocity_command
self.twist_pub.publish(twist_msg)
if __name__ == "__main__":
SnakeHeadingController()
self.ANGULAR_VELOCITY = rospy.get_param('~angular_velocity', 6.28)
self.LINEAR_VELOCITY = rospy.get_param('~linear_velocity', 2.0)
#!/usr/bin/env python
"""Node to control the heading of the snake.
License removed for brevity
"""
# Python
from math import copysign
from angles import shortest_angular_distance
# ROS
import rospy
from geometry_msgs.msg import Twist, PoseArray
from std_msgs.msg import Float64
from tf.transformations import euler_from_quaternion
class SnakeHeadingController(object):
"""Simple heading controller for the snake."""
def __init__(self):
rospy.init_node('snake_heading_controller')
self.heading_command = None
self.ANGULAR_VELOCITY = rospy.get_param('~angular_velocity', 6.28)
self.LINEAR_VELOCITY = rospy.get_param('~linear_velocity', 2.0)
# Publishers
self.twist_pub = rospy.Publisher('snake/cmd_vel', Twist, queue_size=1)
# Subscribers
rospy.Subscriber('snake/pose', PoseArray, self.pose_cb)
rospy.Subscriber('controller/heading', Float64, self.heading_cb)
rospy.spin()
def heading_cb(self, heading_msg):
"""Callback for heading goal."""
self.heading_command = heading_msg.data
def pose_cb(self, pose_msg):
"""Callback for poses from the snake."""
if (self.heading_command is not None):
quat = (pose_msg.poses[0].orientation.x,
pose_msg.poses[0].orientation.y,
pose_msg.poses[0].orientation.z,
pose_msg.poses[0].orientation.w)
__, __, heading = euler_from_quaternion(quat)
error = shortest_angular_distance(heading, self.heading_command)
angular_velocity_command = copysign(self.ANGULAR_VELOCITY, error)
twist_msg = Twist()
twist_msg.linear.x = self.LINEAR_VELOCITY
twist_msg.angular.z = angular_velocity_command
self.twist_pub.publish(twist_msg)
if __name__ == "__main__":
SnakeHeadingController()
<launch>
<node type="snake_heading_controller" pkg="snake_controller" name="snake_heading_controller"/>
</launch>
catkin build
source ~/.bashrc
source devel/setup.bash
roslaunch snake_controller snake_controller.launch
roslaunch snakesim snakesim.launch
<launch>
<include file="$(find snakesim)/launch/snakesim.launch"/>
<include file="$(find snake_controller)/launch/snake_controller.launch"/>
</launch>
roslaunch snake_controller snake.launch
rosrun rqt_graph rqt_graph
rostopic pub /controller/heading std_msgs/Float64 "data: 0.0"
rosrun rqt_publisher rqt_publisher
roslaunch snake_controller snake.launch linear_velocity:=5.0
<launch>
<arg name="linear_velocity" default="2.0"/>
<node type="snake_heading_controller" pkg="snake_controller" name="snake_heading_controller">
<param name="linear_velocity" value="$(arg linear_velocity)"/>
</node>
</launch>
<launch>
<arg name="linear_velocity" default="2.0"/>
<include file="$(find snakesim)/launch/snakesim.launch"/>
<include file="$(find snake_controller)/launch/snake_controller.launch">
<arg name="linear_velocity" value="$(arg linear_velocity)"/>
</include>
</launch>
#!/usr/bin/env python
"""Node to control the heading of the snake.
License removed for brevity
"""
# Python
from math import copysign
from angles import shortest_angular_distance
# ROS
import rospy
from geometry_msgs.msg import Twist, PoseArray
from std_msgs.msg import Float64
from tf.transformations import euler_from_quaternion
class SnakeHeadingController(object):
"""Simple heading controller for the snake."""
def __init__(self):
rospy.init_node('snake_heading_controller')
self.heading_command = None
self.ANGULAR_VELOCITY = rospy.get_param('~angular_velocity', 6.28)
self.LINEAR_VELOCITY = rospy.get_param('~linear_velocity', 2.0)
# Publishers
self.twist_pub = rospy.Publisher('snake/cmd_vel', Twist, queue_size=1)
# Subscribers
rospy.Subscriber('snake/pose', PoseArray, self.pose_cb)
rospy.Subscriber('controller/heading', Float64, self.heading_cb)
rospy.spin()
def heading_cb(self, heading_msg):
"""Callback for heading goal."""
self.heading_command = heading_msg.data
def pose_cb(self, pose_msg):
"""Callback for poses from the snake."""
if (self.heading_command is not None):
quat = (pose_msg.poses[0].orientation.x,
pose_msg.poses[0].orientation.y,
pose_msg.poses[0].orientation.z,
pose_msg.poses[0].orientation.w)
__, __, heading = euler_from_quaternion(quat)
error = shortest_angular_distance(heading, self.heading_command)
angular_velocity_command = copysign(self.ANGULAR_VELOCITY, error)
twist_msg = Twist()
twist_msg.linear.x = self.LINEAR_VELOCITY
twist_msg.angular.z = angular_velocity_command
self.twist_pub.publish(twist_msg)
if __name__ == "__main__":
SnakeHeadingController()
#!/usr/bin/env python
"""Node to control the heading of the snake.
License removed for brevity
"""
# Python
from math import copysign
from angles import shortest_angular_distance
# ROS
import rospy
from geometry_msgs.msg import Twist, PoseArray
from std_msgs.msg import Float64
from tf.transformations import euler_from_quaternion
class SnakeHeadingController(object):
"""Simple heading controller for the snake."""
def __init__(self):
rospy.init_node('snake_heading_controller')
self.heading_command = None
self.ANGULAR_VELOCITY = rospy.get_param('~angular_velocity', 6.28)
self.LINEAR_VELOCITY = rospy.get_param('~linear_velocity', 2.0)
# Publishers
self.twist_pub = rospy.Publisher('snake/cmd_vel', Twist, queue_size=1)
# Subscribers
rospy.Subscriber('snake/pose', PoseArray, self.pose_cb)
rospy.Subscriber('controller/heading', Float64, self.heading_cb)
rospy.spin()
def heading_cb(self, heading_msg):
"""Callback for heading goal."""
self.heading_command = heading_msg.data
def pose_cb(self, pose_msg):
"""Callback for poses from the snake."""
if (self.heading_command is not None):
quat = (pose_msg.poses[0].orientation.x,
pose_msg.poses[0].orientation.y,
pose_msg.poses[0].orientation.z,
pose_msg.poses[0].orientation.w)
__, __, heading = euler_from_quaternion(quat)
error = shortest_angular_distance(heading, self.heading_command)
angular_velocity_command = copysign(self.ANGULAR_VELOCITY, error)
twist_msg = Twist()
twist_msg.linear.x = self.LINEAR_VELOCITY
twist_msg.angular.z = angular_velocity_command
self.twist_pub.publish(twist_msg)
if __name__ == "__main__":
SnakeHeadingController()
#!/usr/bin/env python
"""Node to control the position of the snake.
License removed for brevity
"""
class SnakePositionController(object):
"""Simple position controller for the snake."""
def __init__(self):
pass
def position_cb(self, msg):
"""Callback for position."""
pass
def snake_cb(self, msg):
"""Callback for poses from the snake."""
pass
if __name__ == "__main__":
SnakePositionController()
#!/usr/bin/env python
"""Node to control the position of the snake.
License removed for brevity
"""
# ROS
import rospy
from geometry_msgs.msg import PoseArray, Point
from std_msgs.msg import Float64
class SnakePositionController(object):
"""Simple position controller for the snake."""
def __init__(self):
rospy.init_node('snake_position_controller')
# Publishers
self.heading_pub = rospy.Publisher('controller/heading', Float64, queue_size=1)
# Subscribers
rospy.Subscriber('snake/pose', PoseArray, self.snake_cb)
rospy.Subscriber('controller/position', Point, self.position_cb)
rospy.spin()
def position_cb(self, point_msg):
"""Callback for position."""
pass
def snake_cb(self, pose_msg):
"""Callback for poses from the snake."""
pass
if __name__ == "__main__":
SnakePositionController()
#!/usr/bin/env python
"""Node to control the position of the snake.
License removed for brevity
"""
# Python
import math
# ROS
import rospy
from geometry_msgs.msg import PoseArray, Point
from std_msgs.msg import Float64
class SnakePositionController(object):
"""Simple position controller for the snake."""
def __init__(self):
rospy.init_node('snake_position_controller')
self.position = None
# Publishers
self.heading_pub = rospy.Publisher('controller/heading', Float64, queue_size=1)
# Subscribers
rospy.Subscriber('snake/pose', PoseArray, self.snake_cb)
rospy.Subscriber('controller/position', Point, self.position_cb)
rospy.spin()
def position_cb(self, point_msg):
"""Callback for position."""
self.position = (point_msg.x, point_msg.y)
def snake_cb(self, pose_msg):
"""Callback for poses from the snake."""
if self.position is not None:
pose = (pose_msg.poses[0].position.x, pose_msg.poses[0].position.y)
# make a local copy to avoid threading issues
position = self.position
heading = math.atan2(position[1] - pose[1], position[0] - pose[0])
self.heading_pub.publish(heading)
if __name__ == "__main__":
SnakePositionController()
# make a local copy to avoid threading issues
position = self.position
<node type="snake_position_controller" pkg="snake_tutorial" name="snake_position_controller"/>
#!/usr/bin/env python
"""Node to control the position of the snake.
License removed for brevity
"""
# Python
import math
# ROS
import rospy
from geometry_msgs.msg import PoseArray, Point
from std_msgs.msg import Float64
class SnakePositionController(object):
"""Simple position controller for the snake."""
def __init__(self):
rospy.init_node('snake_position_controller')
self.position = None
# Publishers
self.heading_pub = rospy.Publisher('controller/heading', Float64, queue_size=1)
# Subscribers
rospy.Subscriber('snake/pose', PoseArray, self.snake_cb)
rospy.Subscriber('controller/position', Point, self.position_cb)
rospy.spin()
def position_cb(self, point_msg):
"""Callback for position."""
self.position = (point_msg.x, point_msg.y)
def snake_cb(self, pose_msg):
"""Callback for poses from the snake."""
if self.position is not None:
pose = (pose_msg.poses[0].position.x, pose_msg.poses[0].position.y)
# make a local copy to avoid threading issues
position = self.position
heading = math.atan2(position[1] - pose[1], position[0] - pose[0])
self.heading_pub.publish(heading)
if __name__ == "__main__":
SnakePositionController()
#!/usr/bin/env python
"""Node to relay goal position to the position controller.
License removed for brevity
"""
if __name__ == "__main__":
pass
#!/usr/bin/env python
"""Node to relay goal position to the position controller.
License removed for brevity
"""
# ROS
import rospy
from geometry_msgs.msg import PointStamped, Point
if __name__ == "__main__":
rospy.init_node('snake_goal_relay')
# Publishers
goal_pub = rospy.Publisher('controller/position', Point, queue_size=1)
# Subscribers
rospy.Subscriber(
'snake/goal',
PointStamped,
lambda msg, pub=goal_pub: pub.publish(msg.point)
)
rospy.spin()
<node type="snake_goal_relay" pkg="snake_tutorial" name="snake_goal_controller"/>
#!/usr/bin/env python
"""Node to relay goal position to the position controller.
License removed for brevity
"""
# ROS
import rospy
from geometry_msgs.msg import PointStamped, Point
if __name__ == "__main__":
rospy.init_node('snake_goal_relay')
# Publishers
goal_pub = rospy.Publisher('controller/position', Point, queue_size=1)
# Subscribers
rospy.Subscriber(
'snake/goal',
PointStamped,
lambda msg, pub=goal_pub: pub.publish(msg.point)
)
rospy.spin()
def callback_one(self, msg):
self.important_variable = msg.data
def callback_two(self, msg):
# Check that we can take the log of important_variable
if self.important_variable > 0:
self.some_variable = msg.data * math.log(self.important_variable)
import threading
# ...
self.lock = threading.Lock()
# ...
def callback_one(self, msg):
self.lock.acquire()
self.important_variable = msg.data
self.lock.release()
def callback_two(self, msg):
self.lock.acquire()
# Check that we can take the log of important_variable
if self.important_variable > 0:
self.some_variable = msg.data * math.log(self.important_variable)
self.lock.release()